\chapter{Induction}
\label{cha:induction}
In \cref{cha:typetheory}, we introduced many ways to form new types from old ones.
Except for (dependent) function types and universes, all these rules are special cases of the general notion of \emph{inductive definition}.\index{definition!inductive}\index{inductive!definition}
In this chapter we study inductive definitions more generally.
\section{Introduction to inductive types}
\label{sec:bool-nat}
\index{type!inductive|(}%
\index{generation!of a type, inductive|(}
An \emph{inductive type} $X$ can be intuitively understood as a type ``freely generated'' by a certain finite collection of \emph{constructors}, each of which is a function (of some number of arguments) with codomain $X$.
This includes functions of zero arguments, which are simply elements of $X$.
When describing a particular inductive type, we list the constructors with bullets.
For instance, the type \bool from \cref{sec:type-booleans} is inductively generated by the following constructors:
\begin{itemize}
\item $\bfalse:\bool$
\item $\btrue:\bool$
\end{itemize}
Similarly, $\unit$ is inductively generated by the constructor:
\begin{itemize}
\item $\ttt:\unit$
\end{itemize}
while $\emptyt$ is inductively generated by no constructors at all.
An example where the constructor functions take arguments is the coproduct $A+B$, which is generated by the two constructors
\begin{itemize}
\item $\inl:A\to A+B$
\item $\inr:B\to A+B$.
\end{itemize}
And an example with a constructor taking multiple arguments is the cartesian product $A\times B$, which is generated by one constructor
\begin{itemize}
\item $\pairr{\blank, \blank} : A\to B \to A\times B$.
\end{itemize}
Crucially, we also allow constructors of inductive types that take arguments from the inductive type being defined.
For instance, the type $\nat$ of natural numbers has constructors
\begin{itemize}
\item $0:\nat$
\item $\suc:\nat\to\nat$.
\end{itemize}
\symlabel{lst}
\index{type!of lists}%
\indexsee{list}{type of lists}%
Another useful example is the type $\lst A$ of finite lists of elements of some type $A$, which has constructors
\begin{itemize}\label{list_constructors}
\item $\nil:\lst A$
\item $\cons:A\to \lst A \to \lst A$.
\end{itemize}
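Bullet lists such as these correspond directly to inductive declarations in a proof assistant. The following sketch renders the examples above in Lean~4 syntax (the names \texttt{MyBool}, \texttt{MyNat}, and \texttt{MyList} are ours, chosen to avoid clashing with the built-in types):
\begin{verbatim}
-- each bullet point becomes one constructor clause
inductive MyBool : Type where
  | false : MyBool
  | true  : MyBool

inductive MyNat : Type where
  | zero : MyNat
  | suc  : MyNat -> MyNat

inductive MyList (A : Type) : Type where
  | nil  : MyList A
  | cons : A -> MyList A -> MyList A
\end{verbatim}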
\index{free!generation of an inductive type}%
Intuitively, we should understand an inductive type as being \emph{freely generated by} its constructors.
That is, the elements of an inductive type are exactly what can be obtained by starting from nothing and applying the constructors repeatedly.
(We will see in \cref{sec:identity-systems,cha:hits} that this conception has to be modified slightly for more general kinds of inductive definitions, but for now it is sufficient.)
For instance, in the case of \bool, we should expect that the only elements are $\bfalse$ and $\btrue$.
Similarly, in the case of \nat, we should expect that every element is either $0$ or obtained by applying $\suc$ to some ``previously constructed'' natural number.
\index{induction principle}%
Rather than assert properties such as this directly, however, we express them by means of an \emph{induction principle}, also called a \emph{(dependent) elimination rule}.
We have seen these principles already in \cref{cha:typetheory}.
For instance, the induction principle for \bool is:
%
\begin{itemize}
\item When proving a statement $E : \bool \to \type$ about \emph{all} inhabitants of \bool, it suffices to prove it for $\bfalse$ and $\btrue$, i.e., to give proofs $e_0 : E(\bfalse)$ and $e_1 : E(\btrue)$.
\end{itemize}
Furthermore, the resulting proof $\ind\bool(E,e_0,e_1): \prd{b : \bool}E(b)$ behaves as expected when applied to the constructors $\bfalse$ and $\btrue$; this principle is expressed by the \emph{computation rules}\index{computation rule!for type of booleans}:
\begin{itemize}
\item We have $\ind\bool(E,e_0,e_1,\bfalse) \jdeq e_0$.
\item We have $\ind\bool(E,e_0,e_1,\btrue) \jdeq e_1$.
\end{itemize}
\index{case analysis}%
Thus, the induction principle for the type $\bool$ of booleans allows us to reason by \emph{case analysis}.
Since neither of the two constructors takes any arguments, this is all we need for booleans.
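As an illustrative sketch, this case analysis can be written in Lean~4 for the built-in \texttt{Bool} (the name \texttt{indBool} is ours):
\begin{verbatim}
universe u

def indBool (E : Bool -> Sort u) (e0 : E false) (e1 : E true) :
    (b : Bool) -> E b
  | false => e0
  | true  => e1

-- the two computation rules hold definitionally
example (E : Bool -> Sort u) (e0 : E false) (e1 : E true) :
    indBool E e0 e1 false = e0 := rfl
\end{verbatim}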
\index{natural numbers}%
For natural numbers, however, case analysis is generally not sufficient: in the case corresponding to the inductive step $\suc(n)$, we also want to presume that the statement being proven has already been shown for $n$.
This gives us the following induction principle:
\begin{itemize}
\item When proving a statement $E : \nat \to \type$ about \emph{all} natural numbers, it suffices to prove it for $0$ and for $\suc(n)$, assuming it holds
for $n$, i.e., we construct $e_z : E(0)$ and $e_s : \prd{n : \nat} E(n) \to E(\suc(n))$.
\end{itemize}
As in the case of booleans, we also have the associated computation rules for the function $\ind\nat(E,e_z,e_s) : \prd{x:\nat} E(x)$:
\index{computation rule!for natural numbers}%
\begin{itemize}
\item $\ind\nat(E,e_z,e_s,0) \jdeq e_z$.
\item $\ind\nat(E,e_z,e_s,\suc(n)) \jdeq e_s(n,\ind\nat(E,e_z,e_s,n))$ for any $n : \nat$.
\end{itemize}
The dependent function $\ind\nat(E,e_z,e_s)$ can thus be understood as being defined recursively on the argument $x : \nat$, via the functions $e_z$ and $e_s$ which we call the \define{recurrences}\indexdef{recurrence}.
When $x$ is zero,\index{zero} the function simply returns $e_z$.
When $x$ is the successor\index{successor} of another natural number $n$, the result is obtained by taking the recurrence $e_s$ and substituting the specific predecessor\index{predecessor} $n$ and the recursive call value $\ind\nat(E,e_z,e_s,n)$.
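A Lean~4 sketch of this eliminator for the built-in \texttt{Nat} (the name \texttt{indNat} is ours; the recursive call in the successor case is exactly the second computation rule):
\begin{verbatim}
universe u

def indNat (E : Nat -> Sort u) (ez : E 0)
    (es : (n : Nat) -> E n -> E (n + 1)) :
    (n : Nat) -> E n
  | 0     => ez
  | n + 1 => es n (indNat E ez es n)
\end{verbatim}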
The induction principles for all the examples mentioned above share this family resemblance.
In \cref{sec:strictly-positive} we will discuss a general notion of ``inductive definition'' and how to derive an appropriate \emph{induction principle} for it, but first we investigate various commonalities between inductive definitions.
\index{recursion principle}%
For instance, we have remarked in every case in \cref{cha:typetheory} that from the induction principle we can derive a \emph{recursion principle} in which the codomain is a simple type (rather than a family).
Both induction and recursion principles may seem odd, since they yield only the \emph{existence} of a function without seeming to characterize it uniquely.
However, in fact the induction principle is strong enough also to prove its own \emph{uniqueness principle}\index{uniqueness!principle, propositional!for functions on N@for functions on $\nat$}, as in the following theorem.
\begin{thm}\label{thm:nat-uniq}
Let $f,g : \prd{x:\nat} E(x)$ be two functions which satisfy the recurrences
%
\begin{equation*}
e_z : E(0)
\qquad\text{and}\qquad
e_s : \prd{n : \nat} E(n) \to E(\suc(n))
\end{equation*}
%
up to propositional equality, i.e., such that
\begin{equation*}
\id{f(0)}{e_z}
\qquad\text{and}\qquad
\id{g(0)}{e_z}
\end{equation*}
as well as
\begin{gather*}
\prd{n : \nat} \id{f(\suc(n))}{e_s(n, f(n))},\\
\prd{n : \nat} \id{g(\suc(n))}{e_s(n, g(n))}.
\end{gather*}
Then $f$ and $g$ are equal.
\end{thm}
\begin{proof}
We use induction on the type family $D(x) \defeq \id{f(x)}{g(x)}$. For the base case, we have
\[ f(0) = e_z = g(0). \]
For the inductive case, assume $n : \nat$ such that $f(n) = g(n)$. Then
\[ f(\suc(n)) = e_s(n, f(n)) = e_s(n, g(n)) = g(\suc(n)). \]
The first and last equality follow from the assumptions on $f$ and $g$. The middle equality follows from the inductive hypothesis and the fact that application preserves equality. This gives us pointwise equality between $f$ and $g$; invoking function extensionality finishes the proof.
\end{proof}
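This argument formalizes readily; here is a sketch in Lean~4, where the \texttt{funext} tactic plays the role of function extensionality in the last step:
\begin{verbatim}
universe u

theorem nat_uniq {E : Nat -> Sort u} {ez : E 0}
    {es : (n : Nat) -> E n -> E (n + 1)}
    (f g : (n : Nat) -> E n)
    (hf0 : f 0 = ez) (hg0 : g 0 = ez)
    (hfs : (n : Nat) -> f (n + 1) = es n (f n))
    (hgs : (n : Nat) -> g (n + 1) = es n (g n)) :
    f = g := by
  funext n
  induction n with
  | zero => rw [hf0, hg0]          -- f 0 = ez = g 0
  | succ n ih => rw [hfs, hgs, ih] -- uses the inductive hypothesis
\end{verbatim}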
Note that the uniqueness principle applies even to functions that only satisfy the recurrences \emph{up to propositional equality}, i.e.\ a path.
Of course, the particular function obtained from the induction principle satisfies these recurrences judgmentally\index{judgmental equality}; we will return to this point in \cref{sec:htpy-inductive}.
On the other hand, the theorem itself only asserts a propositional equality between functions (see also \cref{ex:same-recurrence-not-defeq}).
From a homotopical viewpoint it is natural to ask whether this path is \emph{coherent}, i.e.\ whether the equality $f=g$ is unique up to higher paths; in \cref{sec:initial-alg} we will see that this is in fact the case.
Of course, similar uniqueness theorems for functions can generally be formulated and shown for other inductive types as well.
In the next section, we show how this uniqueness property, together with univalence, implies that an inductive type such as the natural numbers is completely characterized by its introduction, elimination, and computation rules.
\index{type!inductive|)}%
\index{generation!of a type, inductive|)}%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Uniqueness of inductive types}
\label{sec:appetizer-univalence}
\index{uniqueness!of inductive types}%
We have defined ``the'' natural numbers to be a particular type \nat with particular inductive generators $0$ and $\suc$.
However, by the general principle of inductive definitions in type theory described in the previous section, there is nothing preventing us from defining \emph{another} type in an identical way.
\index{natural numbers!isomorphic definition of}
That is, suppose we let $\natp$ be the inductive type generated by the constructors
\begin{itemize}
\item $\zerop:\natp$
\item $\sucp:\natp\to\natp$.
\end{itemize}
Then $\natp$ will have identical-looking induction and recursion principles to $\nat$.
When proving a statement $E : \natp \to \type$ for all of these ``new'' natural numbers, it suffices to give the proofs $e_z : E(\zerop)$ and \narrowequation{e_s : \prd{n : \natp} E(n) \to E(\sucp(n)).}
And the function $\ind\natp(E,e_z,e_s) : \prd{n:\natp} E(n)$ has the following computation rules:
\begin{itemize}
\item $\ind\natp(E,e_z,e_s,\zerop) \jdeq e_z$,
\item $\ind\natp(E,e_z,e_s,\sucp(n)) \jdeq e_s(n,\ind\natp(E,e_z,e_s,n))$ for any $n : \natp$.
\end{itemize}
But what is the relation between $\nat$ and $\natp$?
This is not just an academic question, since structures that ``look like'' the natural numbers can be found in many other places.
\index{type!of lists}%
For instance, we may identify natural numbers with lists over the type with one element (this is arguably the oldest appearance, found on walls of caves\index{caves, walls of}), with the non-negative integers, with subsets of the rationals and the reals, and so on.
And from a programming\index{programming} point of view, the ``unary'' representation of our natural numbers is very inefficient, so we might prefer sometimes to use a binary one instead.
We would like to be able to identify all of these versions of ``the natural numbers'' with each other, in order to transfer constructions and results from one to another.
Of course, if two versions of the natural numbers satisfy identical induction principles, then they have identical induced structure.
For instance, recall the example of the function $\dbl$ defined in \cref{sec:inductive-types}. A similar function
for our new natural numbers is readily defined by duplication and adding primes:
\[ \dblp \defeq \rec\natp(\natp, \; \zerop, \; \lamu{n:\natp}{m:\natp} \sucp(\sucp(m))). \]
Simple as this may seem, it has the obvious drawback of leading to a
proliferation of duplicates. Not only do functions have to be
duplicated, but so do all lemmas and their proofs. For example,
an easy result such as $\prd{n : \nat} \dbl(\suc(n))=\suc(\suc(\dbl(n)))$, as well
as its proof by induction, also has to be ``primed''.
In traditional mathematics, one just proclaims that $\nat$ and $\natp$ are obviously ``the same'', and can be substituted for each other whenever the need arises.
This is usually unproblematic, but it sweeps a fair amount under the rug, widening the gap between informal mathematics and its precise description.
In homotopy type theory, we can do better.
First observe that we have the following definable maps:
\begin{itemize}
\item $f \defeq \rec\nat(\natp, \; \zerop, \; \lamu{n:\nat} \sucp)
: \nat \to\natp$,
\item $g \defeq \rec\natp(\nat, \; 0, \; \lamu{n:\natp} \suc)
: \natp \to\nat$.
\end{itemize}
Since the composition of $g$ and $f$ satisfies the same recurrences as the identity function on $\nat$, \cref{thm:nat-uniq} gives that $\prd{n : \nat} \id{g(f(n))}{n}$, and the ``primed'' version of the same theorem gives $\prd{n : \natp} \id{f(g(n))}{n}$.
Thus, $f$ and $g$ are quasi-inverses, so that $\eqv{\nat}{\natp}$.
We can now transfer functions on $\nat$ directly to functions on $\natp$ (and vice versa) along this equivalence, e.g.
\[ \dblp \defeq \lamu{n:\natp} f(\dbl(g(n))). \]
It is an easy exercise to show that this version of $\dblp$ is equal to the earlier one.
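A Lean~4 sketch of the two maps and one of the round-trip laws (here \texttt{Nat'} is declared separately from the built-in \texttt{Nat}, mirroring $\natp$):
\begin{verbatim}
inductive Nat' : Type where
  | zero' : Nat'
  | suc'  : Nat' -> Nat'

def f : Nat -> Nat'
  | 0     => .zero'
  | n + 1 => .suc' (f n)

def g : Nat' -> Nat
  | .zero'  => 0
  | .suc' n => g n + 1

-- one half of the quasi-inverse; the other is symmetric
theorem gf_id (n : Nat) : g (f n) = n := by
  induction n with
  | zero => simp [f, g]
  | succ n ih => simp [f, g, ih]
\end{verbatim}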
Of course, there is nothing surprising about this; such an isomorphism is exactly how a mathematician will envision ``identifying'' $\nat$ with $\natp$.
However, the mechanism of ``transfer'' across an isomorphism depends on the thing being transferred; it is not always as simple as pre- and post-composing a single function with $f$ and $g$.
Consider, for instance, a simple lemma such as
\[\prd{n : \natp} \dblp(\sucp(n))=\sucp(\sucp(\dblp(n))).\]
Inserting the correct $f$s and $g$s is only a little easier than re-proving it by induction on $n:\natp$ directly.
\index{isomorphism!transfer across}%
\index{univalence axiom}%
Here is where the univalence axiom steps in: since $\eqv{\nat}{\natp}$, we also have $\id[\type]{\nat}{\natp}$, i.e.\ $\nat$ and $\natp$ are
\emph{equal} as types.
Now the induction principle for identity guarantees that any construction or proof relating to $\nat$ can automatically be transferred to $\natp$ in the same way.
We simply consider the type of the function or theorem as a type-indexed family of types $P:\type\to\type$, with the given object being an element of $P(\nat)$, and transport along the path $\id \nat\natp$.
This involves considerably less overhead.
For simplicity, we have described this method in the case of two types \nat and \natp with \emph{identical}-looking definitions.
However, a more common situation in practice is when the definitions are not literally identical, but nevertheless one induction principle implies the other.
\index{type!unit}%
\index{natural numbers!encoded as list(unit)@encoded as $\lst\unit$}%
Consider, for instance, the type of lists from a one-element type, $\lst\unit$, which is generated by
\begin{itemize}
\item an element $\nil:\lst\unit$, and
\item a function $\cons:\unit \times \lst\unit \to\lst\unit$.
\end{itemize}
This is not identical to the definition of \nat, and it does not give rise to an identical induction principle.
The induction principle of $\lst\unit$ says that for any $E:\lst\unit\to\type$ together with recurrence data $e_\nil:E(\nil)$ and $e_\cons : \prd{u:\unit}{\ell:\lst\unit} E(\ell) \to E(\cons(u,\ell))$, there exists $f:\prd{\ell:\lst\unit} E(\ell)$ such that $f(\nil)\jdeq e_\nil$ and $f(\cons(u,\ell))\jdeq e_\cons(u,\ell,f(\ell))$.
(We will see how to derive the induction principle of an inductive definition in \cref{sec:strictly-positive}.)
Now suppose we define $0'' \defeq \nil: \lst\unit$, and $\suc'':\lst\unit\to\lst\unit$ by $\suc''(\ell) \defeq \cons(\ttt,\ell)$.
Then for any $E:\lst\unit\to\type$ together with $e_0:E(0'')$ and $e_s:\prd{\ell:\lst\unit} E(\ell) \to E(\suc''(\ell))$, we can define
\begin{align*}
e_\nil &\defeq e_0\\
e_\cons(\ttt,\ell,x) &\defeq e_s(\ell,x).
\end{align*}
(In the definition of $e_\cons$ we use the induction principle of \unit to assume that $u$ is $\ttt$.)
Now we can apply the induction principle of $\lst\unit$, obtaining $f:\prd{\ell:\lst\unit} E(\ell)$ such that
\begin{gather*}
f(0'') \jdeq f(\nil) \jdeq e_\nil \jdeq e_0\\
f(\suc''(\ell)) \jdeq f(\cons(\ttt,\ell)) \jdeq e_\cons(\ttt,\ell,f(\ell)) \jdeq e_s(\ell,f(\ell)).
\end{gather*}
Thus, $\lst\unit$ satisfies the same induction principle as $\nat$, and hence (by the same arguments above) is equal to it.
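For illustration, the derived induction principle can be written out in Lean~4 for the built-in \texttt{List Unit} (the name \texttt{ind''} is ours, mirroring $0''$ and $\suc''$; both displayed computation rules then hold definitionally):
\begin{verbatim}
universe u

def ind'' (E : List Unit -> Sort u) (e0 : E [])
    (es : (l : List Unit) -> E l -> E (() :: l)) :
    (l : List Unit) -> E l
  | []      => e0
  | () :: l => es l (ind'' E e0 es l)
\end{verbatim}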
Finally, these conclusions are not confined to the natural numbers: they apply to any inductive type.
If we have an inductively defined type $W$, say, and some other type $W'$ which satisfies the same induction principle as $W$, then it follows that $\eqv{W}{W'}$, and hence $W=W'$.
We use the derived recursion principles for $W$ and $W'$ to construct maps $W\to W'$ and $W'\to W$, respectively, and then the induction principles for each to prove that both composites are equal to identities.
For instance, in \cref{cha:typetheory} we saw that the coproduct $A+B$ could also have been defined as $\sm{x:\bool} \rec{\bool}(\UU,A,B,x)$.
The latter type satisfies the same induction principle as the former; hence they are canonically equivalent.
This is, of course, very similar to the familiar fact in category theory that if two objects have the same \emph{universal property}, then they are equivalent.
In \cref{sec:initial-alg} we will see that inductive types actually do have a universal property, so that this is a manifestation of that general principle.
\index{universal property}
\section{\texorpdfstring{$\w$}{W}-types}
\label{sec:w-types}
Inductive types are very general, which is excellent for their usefulness and applicability, but makes them difficult to study as a whole.
Fortunately, they can all be formally reduced to a few special cases.
It is beyond the scope of this book to discuss this reduction --- which is anyway irrelevant to the mathematician using type theory in practice --- but we will take a little time to discuss one of the basic special cases that we have not yet met.
These are Martin-L{\"o}f's \emph{$\w$-types}, also known as the types of \emph{well-founded trees}.
\index{tree, well-founded}%
$\w$-types are a generalization of such types as natural numbers, lists, and binary trees, which are sufficiently general to encapsulate the ``recursion'' aspect of \emph{any} inductive type.
A particular $\w$-type is specified by giving two parameters $A : \type$ and $B : A \to \type$, in which case the resulting $\w$-type is written $\wtype{a:A} B(a)$.
The type $A$ represents the type of \emph{labels} for $\wtype{a :A} B(a)$, which function as constructors (however, we reserve that word for the actual functions which arise in inductive definitions). For instance, when defining natural numbers as a $\w$-type,%
\index{natural numbers!encoded as a W-type@encoded as a $\w$-type}
the type $A$ would be the type $\bool$ inhabited by the two elements $\bfalse$ and $\btrue$, since there are precisely two ways to obtain a natural number --- either it will be zero or a successor of another natural number.
The type family $B : A \to \type$ is used to record the arity\index{arity} of labels: a label $a : A$ will take a family of inductive arguments, indexed over $B(a)$. We can therefore think of the ``$B(a)$-many'' arguments of $a$. These arguments are represented by a function $f : B(a) \to \wtype{a :A} B(a)$, with the understanding that for any $b : B(a)$, $f(b)$ is the ``$b$-th'' argument to the label $a$. The $\w$-type $\wtype{a :A} B(a)$ can thus be thought of as the type of well-founded trees, where nodes are labeled by elements of $A$ and each node labeled by $a : A$ has $B(a)$-many branches.
In the case of natural numbers, the label $\bfalse$ has arity 0, since it constructs the constant zero\index{zero}; the label $\btrue$ has arity 1, since it constructs the successor\index{successor} of its argument. We can capture this by using simple elimination on $\bool$ to define a function $\rec\bool(\bbU,\emptyt,\unit)$ into a universe of types; this function returns the empty type $\emptyt$ for $\bfalse$ and the unit\index{type!unit} type $\unit$ for $\btrue$. We can thus define
\symlabel{natw}
\index{type!unit}%
\index{type!empty}%
\[ \natw \defeq \wtype{b:\bool} \rec\bool(\bbU,\emptyt,\unit,b) \]
where the superscript $\mathbf{w}$ serves to distinguish this version of natural numbers from the previously used one.
Similarly, we can define the type of lists\index{type!of lists} over $A$ as a $\w$-type with $\unit + A$ many labels: one nullary label for the empty list, plus one unary label for each $a : A$, corresponding to appending $a$ to the head of a list:
\[ \lst A \defeq \wtype{x: \unit + A} \rec{\unit + A}(\bbU, \; \emptyt, \; \lamu{a:A} \unit,\; x). \]
%
\index{W-type@$\w$-type}%
\indexsee{type!W-@$\w$-}{$\w$-type}%
In general, the $\w$-type $\wtype{x:A} B(x)$ specified by $A : \type$ and $B : A \to \type$ is the inductive type generated by the following constructor:
\begin{itemize}
\item \label{defn:supp}
$\supp : \prd{a:A} \Big(B(a) \to \wtype{x:A} B(x)\Big) \to \wtype{x:A} B(x)$.
\end{itemize}
%
The constructor $\supp$ (short for supremum\index{supremum!constructor of a W-type@constructor of a $\w$-type}) takes a label $a : A$ and a function $f : B(a) \to \wtype{x:A} B(x)$ representing the arguments to $a$, and constructs a new element of $\wtype{x:A} B(x)$. Using our previous encoding of natural numbers as $\w$-types, we can for instance define
\begin{equation*}
\zerow \defeq \supp(\bfalse, \; \lamu{x:\emptyt} \rec\emptyt(\natw,x)).
\end{equation*}
Put differently, we use the label $\bfalse$ to construct $\zerow$. Then, $\rec\bool(\bbU,\emptyt,\unit, \bfalse)$ evaluates to $\emptyt$, as it should since $\bfalse$ is a nullary label. Thus, we need to construct a function $f : \emptyt \to \natw$, which represents the (zero) arguments supplied to $\bfalse$. This is of course trivial, using simple elimination on $\emptyt$ as shown. Similarly, we can define $1^{\mathbf{w}}$ and a successor function $\sucw$
\begin{align*}
1^{\mathbf{w}} &\defeq \supp(\btrue, \; \lamu{x:\unit} \zerow) \\
\sucw&\defeq \lamu{n:\natw} \supp(\btrue, \; \lamu{x:\unit} n).
\end{align*}
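These definitions can be mirrored in Lean~4 syntax as follows (a sketch; all names are ours, and \texttt{nomatch} plays the role of $\rec\emptyt$):
\begin{verbatim}
inductive W (A : Type) (B : A -> Type) : Type where
  | sup (a : A) (f : B a -> W A B) : W A B

-- the family rec_Bool(U, 0, 1): arity 0 for false, arity 1 for true
def BN : Bool -> Type
  | false => Empty
  | true  => Unit

def NatW : Type := W Bool BN

def zeroW : NatW := .sup false (fun x => nomatch x)
def sucW (n : NatW) : NatW := .sup true (fun _ => n)
def oneW : NatW := sucW zeroW
\end{verbatim}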
\index{induction principle!for W-types@for $\w$-types}%
We have the following induction principle for $\w$-types:
\begin{itemize}
\item When proving a statement $E : \big(\wtype{x:A} B(x)\big) \to \type$ about \emph{all} elements of the $\w$-type $\wtype{x:A} B(x)$, it suffices to prove it for $\supp(a,f)$, assuming it holds for all $f(b)$ with $b : B(a)$.
In other words, it suffices to give a proof
\begin{equation*}
e : \prd{a:A}{f : B(a) \to \wtype{x:A} B(x)}{g : \prd{b : B(a)} E(f(b))} E(\supp(a,f))
\end{equation*}
\end{itemize}
\index{variable}%
The variable $g$ represents our inductive hypothesis, namely that all arguments of $a$ satisfy $E$. To state this, we quantify over all elements of type $B(a)$, since each $b : B(a)$ corresponds to one argument $f(b)$ of $a$.
How would we define the function $\dbl$ on natural numbers encoded as a $\w$-type? We would like to use the recursion principle of $\natw$ with a codomain of $\natw$ itself. We thus need to construct a suitable function
\[e : \prd{a : \bool}{f : B(a) \to \natw}{g : B(a) \to \natw} \natw\]
which will represent the recurrence for the $\dbl$ function; for simplicity we denote the type family $\rec\bool(\bbU,\emptyt,\unit)$ by $B$.
Clearly, $e$ will be a function taking $a : \bool$ as its first argument. The next step is to perform case analysis on $a$ and proceed based on whether it is $\bfalse$ or $\btrue$. This suggests the following form
\[ e \defeq \lamu{a:\bool} \rec\bool(C,e_0,e_1,a) \]
where
\[ C \defeq \prd{f : B(a) \to \natw}{g : B(a) \to \natw} \natw. \]
If $a$ is $\bfalse$, the type $B(a)$ becomes $\emptyt$. Thus, given $f : \emptyt \to \natw$ and $g : \emptyt \to \natw$, we want to construct an element of $\natw$. Since the label $\bfalse$ represents $\emptyt$, it needs zero inductive arguments and the variables $f$ and $g$ are irrelevant. We return $\zerow$\index{zero} as a result:
\[ e_0 \defeq \lamu{f:\emptyt \to \natw}{g:\emptyt \to \natw} \zerow. \]
Analogously, if $a$ is $\btrue$, the type $B(a)$ becomes $\unit$.
Since the label $\btrue$ represents the successor\index{successor} operator, it needs one inductive argument --- the predecessor\index{predecessor} --- which is represented by the variable $f : \unit \to \natw$.
The value of the recursive call on the predecessor is represented by the variable $g : \unit \to \natw$.
Taking this value (namely $g(\ttt)$) and applying the successor function twice thus yields the desired result:
\begin{equation*}
e_1 \defeq \; \lamu{f:\unit \to \natw}{g:\unit \to \natw} \sucw(\sucw(g(\ttt))).
\end{equation*}
Putting this together, we thus have
\[ \dbl \defeq \rec\natw(\natw, e) \]
with $e$ as defined above.
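Continuing the Lean~4 sketch above, the same definition can be given by case analysis on the label:
\begin{verbatim}
def dblW : NatW -> NatW
  | .sup false _ => zeroW                      -- e_0: arguments irrelevant
  | .sup true  f => sucW (sucW (dblW (f ())))  -- e_1: f () is the
                                               -- predecessor
\end{verbatim}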
\symlabel{defn:recursor-wtype}
The associated computation rule for the function $\rec{\wtype{x:A} B(x)}(E,e) : \prd{w : \wtype{x:A} B(x)} E(w)$ is as follows.
\index{computation rule!for W-types@for $\w$-types}%
\begin{itemize}
\item
For any $a : A$ and $f : B(a) \to \wtype{x:A} B(x)$ we have
\begin{equation*}
\rec{\wtype{x:A} B(x)}(E,e,\supp(a,f)) \jdeq
e(a,f,\big(\lamu{b:B(a)} \rec{\wtype{x:A} B(x)}(E,e,f(b))\big)).
\end{equation*}
\end{itemize}
In other words, the function $\rec{\wtype{x:A} B(x)}(E,e)$ satisfies the recurrence $e$.
By the above computation rule, the function $\dbl$ behaves as expected:
\begin{align*}
\dbl(\zerow) & \jdeq \rec\natw(\natw, e, \supp(\bfalse, \; \lamu{x:\emptyt} \rec\emptyt(\natw,x))) \\
& \jdeq e(\bfalse, \big(\lamu{x:\emptyt} \rec\emptyt(\natw,x)\big),
\big(\lamu{x:\emptyt} \dbl(\rec\emptyt(\natw,x))\big)) \\
& \jdeq e_0(\big(\lamu{x:\emptyt} \rec\emptyt(\natw,x)\big), \big(\lamu{x:\emptyt} \dbl(\rec\emptyt(\natw,x))\big))\\
& \jdeq \zerow \\
\intertext{and}
\dbl(1^{\mathbf{w}}) & \jdeq \rec\natw(\natw, e, \supp(\btrue, \; \lamu{x:\unit} \zerow)) \\
& \jdeq e(\btrue, \big(\lamu{x:\unit} \zerow\big), \big(\lamu{x:\unit} \dbl(\zerow)\big)) \\
& \jdeq e_1(\big(\lamu{x:\unit} \zerow\big), \big(\lamu{x:\unit} \dbl(\zerow)\big)) \\
& \jdeq \sucw(\sucw\big((\lamu{x:\unit} \dbl(\zerow))(\ttt)\big))\\
& \jdeq \sucw(\sucw(\zerow))
\end{align*}
and so on.
Just as for natural numbers, we can prove a uniqueness theorem for
$\w$-types:
\begin{thm}\label{thm:w-uniq}
\index{uniqueness!principle, propositional!for functions on W-types@for functions on $\w$-types}%
Let $g,h : \prd{w:\wtype{x:A}B(x)} E(w)$ be two functions which satisfy the recurrence
%
\begin{equation*}
e : \prd{a,f} \Parens{\prd{b : B(a)} E(f(b))} \to E(\supp(a,f)),
\end{equation*}
%
propositionally, i.e., such that
%
\begin{gather*}
\prd{a,f} \id{g(\supp(a,f))} {e(a,f,\lamu{b:B(a)} g(f(b)))}, \\
\prd{a,f} \id{h(\supp(a,f))}{e(a,f,\lamu{b:B(a)} h(f(b)))}.
\end{gather*}
Then $g$ and $h$ are equal.
\end{thm}
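A Lean~4 sketch of this statement and its proof, by induction on $w$ followed by function extensionality on the inductive hypothesis (continuing the \texttt{W} declared above):
\begin{verbatim}
universe u

theorem w_uniq {A : Type} {B : A -> Type} {E : W A B -> Sort u}
    (e : (a : A) -> (f : B a -> W A B) ->
         ((b : B a) -> E (f b)) -> E (W.sup a f))
    (g h : (w : W A B) -> E w)
    (hg : (a : A) -> (f : B a -> W A B) ->
          g (W.sup a f) = e a f (fun b => g (f b)))
    (hh : (a : A) -> (f : B a -> W A B) ->
          h (W.sup a f) = e a f (fun b => h (f b))) :
    (w : W A B) -> g w = h w := by
  intro w
  induction w with
  | sup a f ih =>
      rw [hg, hh]
      exact congrArg (e a f) (funext ih)
\end{verbatim}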
\section{Inductive types are initial algebras}
\label{sec:initial-alg}
\indexsee{initial!algebra characterization of inductive types}{homotopy-initial}%
As suggested earlier, inductive types also have a category-theoretic universal property.
They are \emph{homotopy-initial algebras}: initial objects (up to coherent homotopy) in a category of ``algebras'' determined by the specified constructors.
As a simple example, consider the natural numbers.
The appropriate sort of ``algebra'' here is a type equipped with the same structure that the constructors of $\nat$ give to it.
\index{natural numbers!as homotopy-initial algebra}
\begin{defn}\label{defn:nalg}
A \define{$\nat$-algebra}
\indexdef{N-algebra@$\nat$-algebra}%
\indexdef{algebra!N-@$\nat$-}%
is a type $C$ with two elements $c_0 : C$, $c_s : C \to C$. The type of such algebras is
\begin{equation*}
\nalg \defeq \sm {C : \type} C \times (C \to C).
\end{equation*}
\end{defn}
\begin{defn}\label{defn:nhom}
A \define{$\nat$-homomorphism}
\indexdef{N-homomorphism@$\nat$-homomorphism}%
\indexdef{homomorphism!N-@$\nat$-}%
between $\nat$-algebras $(C,c_0,c_s)$ and $(D,d_0,d_s)$ is a function $h : C \to D$ such that $h(c_0) = d_0$ and $h(c_s(c)) = d_s(h(c))$ for all $c : C$. The type of such homomorphisms is
\begin{narrowmultline*}
\nhom((C,c_0,c_s),(D,d_0,d_s)) \defeq \narrowbreak
\dsm {h : C \to D} (\id{h(c_0)}{d_0}) \times \tprd{c:C} (\id{h(c_s(c))}{d_s(h(c))}).
\end{narrowmultline*}
\end{defn}
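As a Lean~4 sketch, these two definitions become a pair of structures (names ours):
\begin{verbatim}
structure NAlg where
  C  : Type
  c0 : C
  cs : C -> C

structure NHom (X Y : NAlg) where
  h  : X.C -> Y.C
  h0 : h X.c0 = Y.c0
  hs : (c : X.C) -> h (X.cs c) = Y.cs (h c)
\end{verbatim}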
We thus have a category of $\nat$-algebras and $\nat$-homomorphisms, and the claim is that $\nat$ is the initial object of this category.
A category theorist will immediately recognize this as the definition of a \emph{natural numbers object} in a category.
Of course, since our types behave like $\infty$-groupoids\index{.infinity-groupoid@$\infty$-groupoid}, we actually have an $(\infty,1)$-category\index{.infinity1-category@$(\infty,1)$-category} of $\nat$-algebras, and we should ask $\nat$ to be initial in the appropriate $(\infty,1)$-categorical sense.
Fortunately, we can formulate this without needing to define $(\infty,1)$-categories.
\begin{defn}
\index{universal!property!of natural numbers}%
A $\nat$-algebra $I$ is called \define{homotopy-initial},
\indexdef{homotopy-initial!N-algebra@$\nat$-algebra}%
\indexdef{N-algebra@$\nat$-algebra!homotopy-initial (h-initial)}%
or \define{h-initial}
\indexsee{h-initial}{homotopy-initial}%
for short, if for any other $\nat$-algebra $C$, the type of $\nat$-homomorphisms from $I$ to $C$ is contractible. Thus,
\begin{equation*}
\ishinitn(I) \defeq \prd{C : \nalg} \iscontr(\nhom(I,C)).
\end{equation*}
\end{defn}
When they exist, h-initial algebras are unique --- not just up to isomorphism, as usual in category theory, but up to equality, by the univalence axiom.
\begin{thm}
Any two h-initial $\nat$-algebras are equal.
Thus, the type of h-initial $\nat$-algebras is a mere proposition.
\end{thm}
\begin{proof}
Suppose $I$ and $J$ are h-initial $\nat$-algebras.
Then $\nhom(I,J)$ is contractible, hence inhabited by some $\nat$-homomorphism $f:I\to J$, and likewise we have an $\nat$-homomorphism $g:J\to I$.
Now the composite $g\circ f$ is a $\nat$-homomorphism from $I$ to $I$, as is $\idfunc[I]$; but $\nhom(I,I)$ is contractible, so $g\circ f = \idfunc[I]$.
Similarly, $f\circ g = \idfunc[J]$.
Hence $\eqv IJ$, and so $I=J$. Since being contractible is a mere proposition and dependent products preserve mere propositions, it follows that being h-initial is itself a mere proposition. Thus any two proofs that $I$ (or $J$) is h-initial are necessarily equal, which finishes the proof.
\end{proof}
We now have the following theorem.
\begin{thm}\label{thm:nat-hinitial}
The $\nat$-algebra $(\nat, 0, \suc)$ is homotopy-initial.
\end{thm}
\begin{proof}[Sketch of proof]
Fix an arbitrary $\nat$-algebra $(C,c_0,c_s)$.
The recursion principle of $\nat$ yields a function $f:\nat\to C$ defined by
\begin{align*}
f(0) &\defeq c_0\\
f(\suc(n)) &\defeq c_s(f(n)).
\end{align*}
These two equalities make $f$ an $\nat$-homomorphism, which we can take as the center of contraction for $\nhom(\nat,C)$.
The uniqueness theorem (\cref{thm:nat-uniq}) then implies that any other $\nat$-homomorphism is equal to $f$.
\end{proof}
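The center of contraction from this proof sketch can be written down directly, continuing the Lean~4 structures above; both homomorphism laws hold by \texttt{rfl}, reflecting the judgmental computation rules:
\begin{verbatim}
def natAlg : NAlg :=
  { C := Nat, c0 := 0, cs := Nat.succ }

def canonHom (Y : NAlg) : NHom natAlg Y where
  h  := fun n => Nat.rec (motive := fun _ => Y.C)
                   Y.c0 (fun _ c => Y.cs c) n
  h0 := rfl
  hs := fun _ => rfl
\end{verbatim}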
To place this in a more general context, it is useful to consider the notion of \emph{algebra for an endofunctor}. \index{algebra!for an endofunctor}
Note that to make a type $C$ into a $\nat$-algebra is the same as to give a function $c:C+\unit\to C$, and a function $f:C\to D$ is a $\nat$-homomorphism just when $f \circ c \htpy d \circ (f+\idfunc[\unit])$.
In categorical language, this means the \nat-algebras are the algebras for the endofunctor $F(X)\defeq X+1$ of the category of types.
\indexsee{functor!polynomial}{endofunctor, polynomial}%
\indexsee{polynomial functor}{endofunctor, polynomial}%
\indexdef{endofunctor!polynomial}%
\index{W-type@$\w$-type!as homotopy-initial algebra}
For a more generic case, consider the $\w$-type associated to $A : \type$ and $B : A \to \type$.
In this case we have an associated \define{polynomial functor}:
\begin{equation}
\label{eq:polyfunc}
P(X) = \sm{x : A} (B(x) \rightarrow X).
\end{equation}
Actually, this assignment is functorial only up to homotopy, but this makes no difference in what follows.
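A Lean~4 sketch of this functor and its action on maps (names ours; \texttt{Pmap} is the action $Pf$ used below):
\begin{verbatim}
def P (A : Type) (B : A -> Type) (X : Type) : Type :=
  Sigma (fun (a : A) => B a -> X)

def Pmap {A : Type} {B : A -> Type} {X Y : Type}
    (f : X -> Y) (p : P A B X) : P A B Y :=
  Sigma.mk p.1 (fun b => f (p.2 b))
\end{verbatim}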
By definition, a \define{$P$-algebra}
\indexdef{algebra!for a polynomial functor}%
\indexdef{algebra!W-@$\w$-}%
is then a type $C$ equipped with a function $s_C : PC \rightarrow C$.
By the universal property of $\Sigma$-types, this is equivalent to giving a function $\prd{a:A} (B(a) \to C) \to C$.
We will also call such objects \define{$\w$-algebras}
\indexdef{W-algebra@$\w$-algebra}%
for $A$ and $B$, and we write
\symlabel{walg}
\begin{equation*}
\walg(A,B) \defeq \sm {C : \type} \prd{a:A} (B(a) \to C) \to C.
\end{equation*}
Similarly, for $P$-algebras $(C,s_C)$ and $(D,s_D)$, a \define{homomorphism}
\indexdef{homomorphism!of algebras for a functor}%
between them $(f, s_f) : (C, s_C) \rightarrow (D, s_D)$ consists of a function $f : C \rightarrow D$ and a homotopy between maps $PC \rightarrow D$
\[
s_f : f \circ s_C \, = s_{D} \circ Pf,
\]
where $Pf : PC\rightarrow PD$ is the result of the easily-definable action of $P$ on $f: C \rightarrow D$. Such an algebra homomorphism can be represented suggestively in the form:
\[
\xymatrix{
PC \ar[d]_{s_C} \ar[r]^{Pf} \ar@{}[dr]|{s_f} & PD \ar[d]^{s_D}\\
C \ar[r]_{f} & D }
\]
In terms of elements, $f$ is a $P$-homomorphism (or \define{$\w$-homomorphism}) if
\indexdef{W-homomorphism@$\w$-homomorphism}%
\indexdef{homomorphism!W-@$\w$-}%
\[f(s_C(a,h)) = s_D(a,f \circ h).\]
We have the type of $\w$-homomorphisms:
\symlabel{whom}
\begin{equation*}
\whom_{A,B}((C, s_C),(D,s_D)) \defeq \sm{f : C \to D} \prd{a:A}{h:B(a)\to C} \id{f(s_C(a,h))}{s_D(a, f\circ h)}
\end{equation*}
\index{universal!property!of $\w$-type}%
Finally, a $P$-algebra $(C, s_C)$ is said to be \define{homotopy-initial}
\indexdef{homotopy-initial!algebra for a functor}%
\indexdef{homotopy-initial!W-algebra@$\w$-algebra}%
if for every $P$-algebra $(D, s_D)$, the type of all algebra homomorphisms $(C, s_C) \rightarrow (D, s_D)$ is contractible.
That is,
\begin{equation*}
\ishinitw(A,B,I) \defeq \prd{C : \walg(A,B)} \iscontr(\whom_{A,B}(I,C)).
\end{equation*}
%
Now the analogous theorem to \cref{thm:nat-hinitial} is:
\begin{thm}\label{thm:w-hinit}
For any type $A : \type$ and type family $B : A \to \type$, the $\w$-algebra $(\wtype{x:A}B(x), \supp)$ is h-initial.
\end{thm}
\begin{proof}[Sketch of proof]
Suppose we have $A : \type$ and $B : A \to \type$,
and consider the associated polynomial functor $P(X)\defeq\sm{x:A}(B(x)\to X)$.
Let $W \defeq \wtype{x:A}B(x)$. Then using
the $\w$-introduction rule from \cref{sec:w-types}, we have a structure map $s_W\defeq\supp: PW \rightarrow W$.
We want to show that the algebra $(W, s_W)$ is
h-initial. So, let us consider another algebra $(C,s_C)$ and show that the type $T\defeq \whom_{A,B}((W, s_W),(C,s_C)) $
of $\w$-homomorphisms from $(W, s_W)$ to $(C, s_C)$ is contractible. To do
so, observe that the $\w$-elimination rule and the $\w$-computation
rule allow us to define a $\w$-homomorphism $(f, s_f) : (W, s_W) \rightarrow (C, s_C)$,
thus showing that $T$ is inhabited. It is furthermore necessary to show that for every $\w$-ho\-mo\-mor\-phism $(g, s_g) : (W, s_W) \rightarrow (C, s_C)$, there is an identity proof
\begin{equation}
\label{equ:prequired}
p : (f,s_f) = (g,s_g).
\end{equation}
This uses the fact that, in general, a type of the form $(f,s_f) = (g,s_g) $
is equivalent to the type of what we call \define{algebra $2$-cells}
\indexdef{algebra!2-cell}%
from $f$ to $g$, whose canonical
elements are pairs of the form $(e, s_e)$, where $e : f=g$ and $s_e$ is a higher identity proof between the identity proofs represented by the following pasting diagrams:
\[
\xymatrix{
PW \ar@/^1pc/[r]^{Pg} \ar[d]_{s_W} \ar@/_1pc/[r]_{Pf} \ar@{}[r]|{Pe}
& PC \ar[d]^{s_C} \\
W \ar@/_1pc/[r]_f \ar@{}[r]^{s_f} & C } \qquad
\xymatrix{
PW \ar@/^1pc/[r]^{Pg} \ar[d]_{s_W} \ar@{}[r]_(.52){s_g} & PC \ar[d]^{s_C} \\
W \ar@/^1pc/[r]^g \ar@/_1pc/[r]_f \ar@{}[r]|{e} & C }
\]
In light of this fact, to prove that there exists an element as in~\eqref{equ:prequired}, it is
sufficient to show that there is an algebra 2-cell $(e,s_e)$ from $f$ to $g$.
The identity proof $e : f=g$ is now constructed by function extensionality and
$\w$-elimination so as to guarantee the existence of the required identity
proof $s_e$.
\end{proof}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Homotopy-inductive types}
\label{sec:htpy-inductive}
In \cref{sec:w-types} we showed how to encode natural numbers as $\w$-types, with
\begin{align*}
\natw & \defeq \wtype{b:\bool} \rec\bool(\bbU,\emptyt,\unit,b), \\
\zerow & \defeq \supp(\bfalse, (\lamu{x:\emptyt} \rec\emptyt(\natw,x))), \\
\sucw & \defeq \lamu{n:\natw} \supp(\btrue, (\lamu{x:\unit} n)).
\end{align*}
We also showed how one can define a $\dbl$ function on $\natw$ using the recursion principle.
When it comes to the induction principle, however, this encoding is no longer satisfactory: given $E : \natw \to \type$ and recurrences $e_z : E(\zerow)$ and $e_s : \prd{n : \natw} E(n) \to E(\sucw(n))$, we can only construct a dependent function $r(E,e_z,e_s) : \prd{n : \natw} E(n)$ satisfying the given recurrences \emph{propositionally}, i.e.\ up to a path.
This means that the computation rules for natural numbers, which give judgmental equalities, cannot be derived from the rules for $\w$-types in any obvious way.
\index{type!homotopy-inductive}%
\index{homotopy-inductive type}%
This problem goes away if instead of the conventional inductive types we consider \emph{homotopy-inductive types}, where all computation rules are stated up to a path, i.e.\ the symbol $\jdeq$ is replaced by $=$. For instance, the computation rule for the homotopy version of $\w$-types $\mathsf{W^h}$ becomes:
\index{computation rule!propositional}%
\begin{itemize}
\item For any $a : A$ and $f : B(a) \to \wtypeh{x:A} B(x)$ we have
\begin{equation*}
\rec{\wtypeh{x:A} B(x)}(E,e,\supp(a,f)) = e\Big(a,f,\big(\lamu{b:B(a)} \rec{\wtypeh{x:A} B(x)}(E,e,f(b))\big)\Big)
\end{equation*}
\end{itemize}
Homotopy-inductive types have an obvious disadvantage when it comes to computational properties --- the behavior of any function constructed using the induction principle can now only be characterized propositionally.
But numerous other considerations drive us to consider homotopy-inductive types as well.
For instance, while we showed in \cref{sec:initial-alg} that inductive types are homotopy-initial algebras, not every homotopy-initial algebra is an inductive type (i.e.\ satisfies the corresponding induction principle) --- but every homotopy-initial algebra \emph{is} a homotopy-inductive type.
Similarly, we might want to apply the uniqueness argument from \cref{sec:appetizer-univalence} when one (or both) of the types involved is only a homotopy-inductive type --- for instance, to show that the $\w$-type encoding of $\nat$ is equivalent to the usual $\nat$.
Additionally, the notion of a homotopy-inductive type is now internal to the type theory.
For example, this means we can form a type of all natural numbers objects and make assertions about it.
In the case of $\w$-types, we can characterize a homotopy $\w$-type $\wtype{x:A} B(x)$ as any type endowed with a supremum function and an induction principle satisfying the appropriate (propositional) computation rule:
\begin{multline*}
\w_d(A,B) \defeq \sm{W : \type}
\sm{\supp : \prd {a} (B(a) \to W) \to W}
\prd{E : W \to \type} \\
\prd{e : \prd{a,f} (\prd{b : B(a)} E(f(b))) \to E(\supp(a,f))}
\sm{\ind{} : \prd{w : W} E(w)}
\prd{a,f} \\
\ind{}(\supp(a,f)) = e(a,f,\lamu{b:B(a)} \ind{}(f(b))).
\end{multline*}
In \cref{cha:hits} we will see some other reasons why propositional computation rules are worth considering.
In this section, we will state some basic facts about homotopy-inductive types.
We omit most of the proofs, which are somewhat technical.
\begin{thm}
For any $A : \type$ and $B : A \to \type$, the type $\w_d(A,B)$ is a mere proposition.
\end{thm}
It turns out that there is an equivalent characterization of $\w$-types using a recursion principle, plus certain \emph{uniqueness} and \emph{coherence} laws. First we give the recursion principle:
%
\begin{itemize}
\item When constructing a function from the $\w$-type $\wtypeh{x:A} B(x)$ into the type $C$, it suffices to give its value for $\supp(a,f)$, assuming we are given the values of all $f(b)$ with $b : B(a)$.
In other words, it suffices to construct a function
\begin{equation*}
c : \prd{a:A} (B(a) \to C) \to C.
\end{equation*}
\end{itemize}
\index{computation rule!propositional}%
The associated computation rule for $\rec{\wtypeh{x:A} B(x)}(C,c) : (\wtypeh{x:A} B(x)) \to C$ is as follows:
\begin{itemize}
\item For any $a : A$ and $f : B(a) \to \wtypeh{x:A} B(x)$ we have
a witness $\beta(C,c,a,f)$ for the equality
\begin{equation*}
\rec{\wtypeh{x:A} B(x)}(C,c,\supp(a,f)) =
c(a,\lamu{b:B(a)} \rec{\wtypeh{x:A} B(x)}(C,c,f(b))).
\end{equation*}
\end{itemize}
Furthermore, we assert the following uniqueness principle, saying that any two functions defined by the same recurrence are equal:
\index{uniqueness!principle, propositional!for homotopy W-types@for homotopy $\w$-types}%
\begin{itemize}
\item Let $C : \type$ and $c : \prd{a:A} (B(a) \to C) \to C$ be given. Let $g,h : (\wtypeh{x:A} B(x)) \to C$ be two functions which satisfy the recurrence $c$ up to propositional equality, i.e., such that we have
\begin{align*}
\beta_g &: \prd{a,f} \id{g(\supp(a,f))}{c(a,\lamu{b: B(a)} g(f(b)))}, \\
\beta_h &: \prd{a,f} \id{h(\supp(a,f))}{c(a,\lamu{b: B(a)} h(f(b)))}.
\end{align*}
Then $g$ and $h$ are equal, i.e.\ there is $\alpha(C,c,g,h,\beta_g,\beta_h)$ of type $g = h$.
\end{itemize}
\index{coherence}%
Recall that when we have an induction principle rather than only a recursion principle, this propositional uniqueness principle is derivable (\cref{thm:w-uniq}).
But with only recursion, the uniqueness principle is no longer derivable --- and in fact, the statement is not even true (exercise). Hence, we postulate it as an axiom.
We also postulate the following coherence\index{coherence} law, which tells us how the proof of uniqueness behaves on canonical elements:
\begin{itemize}
\item
For any $a : A$ and $f : B(a) \to \wtypeh{x:A} B(x)$, the following diagram commutes propositionally:
\[\xymatrix{
g(\supp(a,f)) \ar_{\alpha(\supp(a,f))}[d] \ar^-{\beta_g}[r] & c(a,\lamu{b:B(a)} g(f(b)))
\ar^{c(a,\blank)(\funext (\lam{b} \alpha(f(b))))}[d] \\
h(\supp(a,f)) \ar_-{\beta_h}[r] & c(a,\lamu{b: B(a)} h(f(b))) \\
}\]
where $\alpha$ abbreviates the path $\alpha(C,c,g,h,\beta_g,\beta_h) : g = h$.
\end{itemize}
Putting all of this data together yields another characterization of $\wtype{x:A} B(x)$, as a type with a supremum function, satisfying simple elimination, computation, uniqueness, and coherence rules:
\begin{multline*}
\w_s(A,B) \defeq \sm{W : \type}
\sm{\supp : \prd {a} (B(a) \to W) \to W}
\prd{C : \type}
\prd{c : \prd{a} (B(a) \to C) \to C}\\
\sm{\rec{} : W \to C}
\sm{\beta : \prd{a,f} \rec{}(\supp(a,f)) = c(a,\lamu{b: B(a)} \rec{}(f(b)))} \narrowbreak
\prd{g : W \to C}
\prd{h : W \to C}
\prd{\beta_g : \prd{a,f} g(\supp(a,f)) = c(a,\lamu{b: B(a)} g(f(b)))} \\
\prd{\beta_h : \prd{a,f} h(\supp(a,f)) = c(a,\lamu{b: B(a)} h(f(b)))}
\sm{\alpha : \prd {w : W} g(w) = h(w)}
\prd{a,f} \\
\alpha(\supp(a,f)) \ct \beta_h = \beta_g \ct c(a,\blank)(\funext \; \lam{b} \alpha(f(b)))
\end{multline*}
\begin{thm}
For any $A : \type$ and $B : A \to \type$, the type $\w_s (A,B)$ is a mere proposition.
\end{thm}
Finally, we have a third, very concise characterization of $\wtype{x:A} B(x)$ as an h-initial $\w$-algebra:
\begin{equation*}
\w_h(A,B) \defeq \sm{I : \walg(A,B)} \ishinitw(A,B,I).
\end{equation*}
\begin{thm}
For any $A : \type$ and $B : A \to \type$, the type $\w_h (A,B)$ is a mere proposition.
\end{thm}
It turns out all three characterizations of $\w$-types are in fact equivalent:
\begin{lem}\label{lem:homotopy-induction-times-3}
For any $A : \type$ and $B : A \to \type$, we have
\[ \w_d(A,B) \eqvsym \w_s(A,B) \eqvsym \w_h(A,B) \]
\end{lem}
Indeed, we have the following theorem, which is an improvement over \cref{thm:w-hinit}:
\begin{thm}
The types satisfying the formation, introduction, elimination, and propositional computation rules for $\w$-types are precisely the homotopy-initial $\w$-algebras.
\end{thm}
%%%%%
\begin{proof}[Sketch of proof]
%%%%%
Inspecting the proof of \cref{thm:w-hinit}, we see that only the \emph{propositional} computation rule was required to establish the h-initiality of $\wtype{x:A}B(x)$.
For the converse implication, let us assume that the polynomial functor associated
to $A : \type$ and $B : A \to \type$ has an h-initial algebra $(W,s_W)$; we show that $W$ satisfies the propositional rules of $\w$-types.
The $\w$-introduction rule is simple; namely, for $a : A$ and $t : B(a) \rightarrow W$, we define $\supp(a,t) : W$ to be the
result of applying the structure map $s_W : PW \rightarrow W$ to $(a,t) : PW$.
For the $\w$-elimination rule, let us assume its premisses and in particular that $C' : W \to \type$.
Using the other premisses, one shows that the type $C \defeq \sm{ w : W} C'(w)$
can be equipped with a structure map $s_C : PC \rightarrow C$. By the h-initiality of $W$,
we obtain an algebra homomorphism $(f, s_f) : (W, s_W) \rightarrow (C, s_C)$. Furthermore,
the first projection $\proj1 : C \rightarrow W$ can be equipped with the structure of a
homomorphism, so that we obtain a diagram of the form
\[
\xymatrix{
PW \ar[r]^{Pf} \ar[d]_{s_W} & PC \ar[d]^{s_C} \ar[r]^{P \proj1} & PW \ar[d]^{s_W} \\
W \ar[r]_f & C \ar[r]_{\proj1} & W.}
\]
But the identity function $1_W : W \rightarrow W$ has a canonical structure of an
algebra homomorphism and so, by the contractibility of the type of homomorphisms
from $(W,s_W)$ to itself, there must be an identity proof between the composite
of $(f,s_f)$ with $(\proj1, s_{\proj1})$ and $(1_W, s_{1_W})$. This implies, in particular,
that there is an identity proof $p : \proj1 \circ f = 1_W$.
Since $(\proj2 \circ f) w : C'( (\proj1 \circ f) w)$, we can define
\[
\ind{}(w) \defeq
p_{\, * \,}( ( \proj2 \circ f) w ) : C'(w)
\]
where the transport $p_{\, * \,}$ is with respect to the family
\[
\lamu{u}C'\circ u : (W\to W)\to W\to \UU.
\]
The verification of the propositional $\w$-computation rule is a calculation,
involving the naturality properties of operations of the form $p_{\, * \,}$.
\end{proof}
%%%%%
\index{natural numbers!encoded as a W-type@encoded as a $\w$-type}%
Finally, as desired, we can encode homotopy-natural-numbers as homo\-topy-$\w$-types:
\begin{thm}
The rules for natural numbers with propositional computation rules can be derived from the rules for $\w$-types with propositional computation rules.
\end{thm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The general syntax of inductive definitions}
\label{sec:strictly-positive}
\index{type!inductive|(}%
\indexsee{inductive!type}{type, inductive}%
So far, we have been discussing only particular inductive types: $\emptyt$, $\unit$, $\bool$, $\nat$, coproducts, products, $\Sigma$-types, $\w$-types, etc.
However, an important aspect of type theory is the ability to define \emph{new} inductive types, rather than being restricted only to some particular fixed list of them.
In order to be able to do this, however, we need to know what sorts of ``inductive definitions'' are valid or reasonable.
To see that not everything which ``looks like an inductive definition'' makes sense, consider the following ``constructor'' of a type $C$:
\begin{itemize}
\item $g:(C\to \nat) \to C$.
\end{itemize}
The recursion principle for such a type $C$ ought to say that given a type $P$, in order to construct a function $f:C\to P$, it suffices to consider the case when the input $c:C$ is of the form $g(\alpha)$ for some $\alpha:C\to\nat$.
Moreover, we would expect to be able to use the ``recursive data'' of $f$ applied to $\alpha$ in some way.
However, it is not at all clear how to ``apply $f$ to $\alpha$'', since both are functions with domain $C$.
We could write down a ``recursion principle'' for $C$ by just supposing (unjustifiably) that there is some way to apply $f$ to $\alpha$ and obtain a function $P\to\nat$.
Then the input to the recursion rule would ask for a type $P$ together with a function
\begin{equation}
h:(C\to\nat) \to (P\to\nat) \to P\label{eq:fake-recursor}
\end{equation}
where the two arguments of $h$ are $\alpha$ and ``the result of applying $f$ to $\alpha$''.
However, what would the computation rule for the resulting function $f:C\to P$ be?
Looking at other computation rules, we would expect something like ``$f(g(\alpha)) \jdeq h(\alpha,f(\alpha))$'' for $\alpha:C\to\nat$, but as we have seen, ``$f(\alpha)$'' does not make sense.
The induction principle of $C$ is even more problematic; it's not even clear how to write down the hypotheses.
On the other hand, we could write down a different ``recursion principle'' for $C$ by ignoring the ``recursive'' presence of $C$ in the domain of $\alpha$, considering it as merely an indexing type for a family of natural numbers.
In this case the input would ask for a type $P$ together with a function
\begin{equation*}
h:(C\to \nat) \to P,
\end{equation*}
so the type of the recursion principle would be $\rec{C}:\prd{P:\UU} ((C\to \nat) \to P) \to C\to P$, and similarly for the induction principle.
Now it is possible to write down a computation rule, namely $\rec{C}(P,h,g(\alpha))\jdeq h(\alpha)$.
However, the existence of a type $C$ with this recursor and computation rule turns out to be inconsistent.
See \cref{ex:loop,ex:loop2,ex:inductive-lawvere,ex:ilunit} for proofs of this and other variations.
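Proof assistants reject such declarations outright. In Lean~4, for instance, the following is refused because $C$ occurs to the left of an arrow in the type of its own constructor (shown commented out, since it does not compile):
\begin{verbatim}
-- Rejected: C has a non-positive occurrence in the
-- argument type (C -> Nat) of its constructor g.
--
-- inductive C : Type where
--   | g : (C -> Nat) -> C
\end{verbatim}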
This example suggests one restriction on inductive definitions: the domains of all the constructors must be \emph{covariant functors}\index{functor!covariant}\index{covariant functor} of the type being defined, so that we can ``apply $f$ to them'' to get the result of the ``recursive call''.
In other words, if we replace all occurrences of the type being defined with a variable
\index{variable!type}%
$X:\type$, then each domain of a constructor
\index{domain!of a constructor}%
must be an expression that can be made into a covariant functor of $X$.
This is the case for all the examples we have considered so far.
For instance, with the constructor $\inl:A\to A+B$, the relevant functor is constant at $A$ (i.e.\ $X\mapsto A$), while for the constructor $\suc:\nat\to\nat$, the functor is the identity functor ($X\mapsto X$).
However, this necessary condition is also not sufficient.
Covariance prevents the inductive type from occurring on the left of a single function type, as in the argument $C\to\nat$ of the ``constructor'' $g$ considered above, since this yields a contravariant\index{functor!contravariant}\index{contravariant functor} functor rather than a covariant one.
However, since the composite of two contravariant functors is covariant, \emph{double} function types such as $((X\to \nat)\to \nat)$ are once again covariant.
This enables us to reproduce Cantorian-style paradoxes\index{paradox}.
For instance, consider an ``inductive type'' $D$ with the following constructor:
\begin{itemize}
\item $k:((D\to\prop)\to\prop)\to D$.
\end{itemize}
Assuming such a type exists, we define functions
\begin{align*}
r&:D\to (D\to\prop)\to\prop,\\
f&:(D\to\prop) \to D,\\
p&:(D\to \prop) \to (D\to\prop)\to \prop,\\
\intertext{by}
r(k(\theta)) &\defeq \theta,\\
f(\delta) &\defeq k(\lam{x} (x=\delta)),\\
p(\delta) &\defeq \lam{x} \delta(f(x)).
\end{align*}
Here $r$ is defined by the recursion principle of $D$, while $f$ and $p$ are defined explicitly.
Then for any $\delta:D\to\prop$, we have $r(f(\delta)) = \lam{x}(x=\delta)$.
In particular, therefore, if $f(\delta)=f(\delta')$, then we have a path $s:(\lam{x}(x=\delta)) = (\lam{x}(x=\delta'))$.
Thus, $\happly(s,\delta) : (\delta=\delta) = (\delta=\delta')$, and so in particular $\delta=\delta'$ holds.
Hence, $f$ is ``injective'' (although \emph{a priori} $D$ may not be a set).
This already sounds suspicious --- we have an ``injection'' of the ``power set''\index{power set} of $D$ into $D$ --- and with a little more work we can massage it into a contradiction.
Suppose given $\theta:(D\to\prop)\to\prop$, and define $\delta:D\to\prop$ by
\begin{equation}
\delta(d) \defeq \exis{\gamma:D\to\prop} (f(\gamma) = d) \times \theta(\gamma).\label{eq:Pinj}
\end{equation}
We claim that $p(\delta)=\theta$.
By function extensionality, it suffices to show $p(\delta)(\gamma) =_\prop \theta(\gamma)$ for any $\gamma:D\to\prop$.
And by univalence, for this it suffices to show that each implies the other.
Now by definition of $p$, we have
\begin{align*}
p(\delta)(\gamma) &\jdeq \delta(f(\gamma))\\
&\jdeq \exis{\gamma':D\to\prop} (f(\gamma') = f(\gamma)) \times \theta(\gamma').
\end{align*}
Clearly this holds if $\theta(\gamma)$, since we may take $\gamma'\defeq \gamma$.
On the other hand, if we have $\gamma'$ with $f(\gamma') = f(\gamma)$ and $\theta(\gamma')$, then $\gamma'=\gamma$ since $f$ is injective, hence also $\theta(\gamma)$.
This completes the proof that $p(\delta)=\theta$.
Thus, every element $\theta:(D\to\prop)\to\prop$ is the image under $p$ of some element $\delta:D\to\prop$.
However, if we define $\theta$ by a classic diagonalization:
\[ \theta(\gamma) \defeq \neg p(\gamma)(\gamma) \quad\text{for all $\gamma:D\to\prop$} \]
then from $\theta = p(\delta)$ we deduce $p(\delta)(\delta) = \neg p(\delta)(\delta)$.
This is a contradiction: no proposition can be equivalent to its negation.
(Supposing $P\Leftrightarrow \neg P$, if $P$, then $\neg P$, and so $\emptyt$; hence $\neg P$, but then $P$, and so $\emptyt$.)
\begin{rmk}
There is a question of universe size to be addressed.
In general, an inductive type must live in a universe that already contains all the types going into its definition.
Thus if in the definition of $D$, the ambiguous notation \prop means $\prop_{\UU}$, then we do not have $D:\UU$ but only $D:\UU'$ for some larger universe $\UU'$ with $\UU:\UU'$.
\index{mathematics!predicative}%
\indexsee{impredicativity}{mathematics, predicative}%
\indexsee{predicative mathematics}{mathematics, predicative}%
In a predicative theory, therefore, the right-hand side of~\eqref{eq:Pinj} lives in $\prop_{\UU'}$, not $\prop_\UU$.
So this contradiction does require the propositional resizing axiom
\index{propositional!resizing}%
mentioned in \cref{subsec:prop-subsets}.
\end{rmk}
\index{consistency}%
This counterexample suggests that we should ban an inductive type from ever appearing on the left of an arrow in the domain of its constructors, even if that appearance is nested in other arrows so as to eventually become covariant.
(Similarly, we also forbid it from appearing in the domain of a dependent function type.)
This restriction is called \define{strict positivity}
\indexdef{strict!positivity}%
\indexsee{positivity, strict}{strict positivity}%
(ordinary ``positivity'' being essentially covariance), and it turns out to suffice.
\index{constructor}%
In conclusion, therefore, a valid inductive definition of a type $W$ consists of a list of \emph{constructors}.
Each constructor is assigned a type that is a function type taking some number (possibly zero) of inputs (possibly dependent on one another) and returning an element of $W$.
Finally, we allow $W$ itself to occur in the input types of its constructors, but only strictly positively.
This essentially means that each argument of a constructor is either a type not involving $W$, or some iterated function type with codomain $W$.
For instance, the following is a valid constructor type:
\begin{equation}
c:(A\to W) \to (B\to C \to W) \to D \to W \to W.\label{eq:example-constructor}
\end{equation}
All of these function types can also be dependent functions ($\Pi$-types).%
\footnote{In the language of \cref{sec:initial-alg}, the condition of strict positivity ensures that the relevant endofunctor is polynomial.\indexfoot{endofunctor!polynomial}\indexfoot{algebra!initial}\indexsee{algebra!initial}{homotopy-initial} It is well-known in category theory that not \emph{all} endofunctors can have initial algebras; restricting to polynomial functors ensures consistency.
One can consider various relaxations of this condition, but in this book we will restrict ourselves to strict positivity as defined here.}
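In Lean~4 syntax, a constructor of the shape of~\eqref{eq:example-constructor} is accepted, since $W$ occurs only strictly positively (a sketch, with the parameter types taken as arbitrary \texttt{Type} arguments):
\begin{verbatim}
inductive Wex (A B C D : Type) : Type where
  | c : (A -> Wex A B C D) -> (B -> C -> Wex A B C D) ->
        D -> Wex A B C D -> Wex A B C D
\end{verbatim}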
Note that we require an inductive definition to be given by a \emph{finite} list of constructors.
This is simply because we have to write it down on the page.
If we want an inductive type which behaves as if it has an infinite number of constructors, we can simply parametrize one constructor by some infinite type.
For instance, a constructor such as $\nat \to W \to W$ can be thought of as equivalent to countably many constructors of the form $W\to W$.
(Of course, the infinity is now \emph{internal} to the type theory, but this is as it should be for any foundational system.)
Similarly, if we want a constructor that takes ``infinitely many arguments'', we can allow it to take a family of arguments parametrized by some infinite type, such as $(\nat\to W) \to W$ which takes an infinite sequence\index{sequence} of elements of $W$.
\index{recursion principle!for an inductive type}%
Now, once we have such an inductive definition, what can we do with it?
Firstly, there is a \define{recursion principle} stating that in order to define a function $f:W\to P$, it suffices to consider the case when the input $w:W$ arises from one of the constructors, allowing ourselves to recursively call $f$ on the inputs to that constructor.
For the example constructor~\eqref{eq:example-constructor}, we would require $P$ to be equipped with a function of type
\begin{narrowmultline}\label{eq:example-rechyp}
d : (A\to W) \to (A\to P) \to (B\to C\to W) \to
\narrowbreak
(B\to C \to P) \to D \to W \to P \to P.
\end{narrowmultline}
Under these hypotheses, the recursion principle yields $f:W\to P$, which moreover ``preserves the constructor data'' in the evident way --- this is the computation rule, where we use covariance of the inputs.
\index{computation rule!for inductive types}%
For instance, in the example~\eqref{eq:example-constructor}, the computation rule says that for any $\alpha:A\to W$, $\beta:B\to C\to W$, $\delta:D$, and $\omega:W$, we have
\begin{equation}
f(c(\alpha,\beta,\delta,\omega)) \jdeq d(\alpha,f\circ \alpha,\beta, f\circ \beta, \delta, \omega,f(\omega)).\label{eq:example-comp}
\end{equation}
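In the Lean-style sketch begun above, the recursion principle corresponds to a definition by the sort of pattern matching discussed below, whose defining equation is precisely the computation rule~\eqref{eq:example-comp}:
\begin{verbatim}
-- Recursion for W: from the hypothesis d displayed above we obtain
-- f : W -> P, recursively applied to the W-valued inputs of c.
def f {A B C D P : Type}
    (d : (A -> W A B C D) -> (A -> P) -> (B -> C -> W A B C D) ->
         (B -> C -> P) -> D -> W A B C D -> P -> P) :
    W A B C D -> P
  | .c alpha beta delta omega =>
      d alpha (fun a => f d (alpha a))
        beta (fun b c => f d (beta b c))
        delta omega (f d omega)
\end{verbatim}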
% As we have before in particular cases, when defining a particular function $f$, we may write these rules with $\defeq$ as a way of specifying the data $d$, and say that $f$ is defined by them.
\index{induction principle!for an inductive type}%
The \define{induction principle} for a general inductive type $W$ is only a little more complicated.
Of course, we start with a type family $P:W\to\type$, which we require to be equipped with constructor data ``lying over'' the constructor data of $W$.
That means the ``recursive call'' arguments such as $A\to P$ above must be replaced by dependent functions with types such as $\prd{a:A} P(\alpha(a))$.
In the full example of~\eqref{eq:example-constructor}, the corresponding hypothesis for the induction principle would require
\begin{multline}\label{eq:example-indhyp}
d : \prd{\alpha:A\to W}\Parens{\prd{a:A} P(\alpha(a))} \to \narrowbreak
\prd{\beta:B\to C\to W} \Parens{\prd{b:B}{c:C} P(\beta(b,c))} \to\\
\prd{\delta:D}
\prd{\omega:W} P(\omega) \to
P(c(\alpha,\beta,\delta,\omega)).
\end{multline}
The corresponding computation rule looks identical to~\eqref{eq:example-comp}.
Of course, the recursion principle is the special case of the induction principle where $P$ is a constant family.
As we have mentioned before, the induction principle is also called the \define{eliminator}, and the recursion principle the \define{non-dependent eliminator}.
As discussed in \cref{sec:pattern-matching}, we also allow ourselves to invoke the induction and recursion principles implicitly, writing a definitional equation with $\defeq$ for each expression that would be the hypotheses of the induction principle.
This is called giving a definition by (dependent) \define{pattern matching}.
\index{pattern matching}%
\index{definition!by pattern matching}%
In our running example, this means we could define $f:\prd{w:W} P(w) $ by
\[ f(c(\alpha,\beta,\delta,\omega)) \defeq \cdots \]
where $\alpha:A\to W$ and $\beta:B\to C\to W$ and $\delta:D$ and $\omega:W$ are variables
\index{variable}%
that are bound in the right-hand side.
Moreover, the right-hand side may involve recursive calls to $f$ of the form $f(\alpha(a))$, $f(\beta(b,c))$, and $f(\omega)$.
When this definition is repackaged in terms of the induction principle, we replace such recursive calls by $\bar\alpha(a)$, $\bar\beta(b,c)$, and $\bar\omega$, respectively, for new variables
\begin{align*}
\bar\alpha &: \prd{a:A} P(\alpha(a))\\
\bar\beta &: \prd{b:B}{c:C} P(\beta(b,c))\\
\bar\omega &: P(\omega).
\end{align*}
\symlabel{defn:induction-wtype}%
Then we could write
\[ f \defeq \ind{W}(P,\, \lam{\alpha}{\bar\alpha}{\beta}{\bar\beta}{\delta}{\omega}{\bar\omega} \cdots ) \]
where the second argument to $\ind{W}$ has the type of~\eqref{eq:example-indhyp}.
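In Lean-style syntax, this dependent pattern matching might be rendered as follows (again a sketch; the hypothesis \texttt{d} has the type of~\eqref{eq:example-indhyp}, with the motive \texttt{P} now varying over \texttt{W}):
\begin{verbatim}
def g {A B C D : Type} {P : W A B C D -> Type}
    (d : (alpha : A -> W A B C D) -> ((a : A) -> P (alpha a)) ->
         (beta : B -> C -> W A B C D) ->
         ((b : B) -> (c : C) -> P (beta b c)) ->
         (delta : D) -> (omega : W A B C D) -> P omega ->
         P (W.c alpha beta delta omega)) :
    (w : W A B C D) -> P w
  | .c alpha beta delta omega =>
      d alpha (fun a => g d (alpha a))
        beta (fun b c => g d (beta b c))
        delta omega (g d omega)
\end{verbatim}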
We will not attempt to give a formal presentation of the grammar of a valid inductive definition and its resulting induction and recursion principles and pattern matching rules.
This is possible to do (indeed, it is necessary to do if implementing a computer proof assistant), but provides no additional insight.
With practice, one learns to automatically deduce the induction and recursion principles for any inductive definition, and to use them without having to think twice.
\section{Generalizations of inductive types}
\label{sec:generalizations}
\index{type!inductive!generalizations}%
The notion of inductive type has been studied in type theory for many years, and admits many, many generalizations: inductive type families, mutual inductive types, inductive-inductive types, inductive-recur\-sive types, etc.
In this section we give an overview of some of these, a few of which will be used later in the book.
(In \cref{cha:hits} we will study in more depth a very different generalization of inductive types, which is particular to \emph{homotopy} type theory.)
Most of these generalizations involve allowing ourselves to define more than one type by induction at the same time.
One very simple example of this, which we have already seen, is the coproduct $A+B$.
It would be tedious indeed if we had to write down separate inductive definitions for $\nat+\nat$, for $\nat+\bool$, for $\bool+\bool$, and so on every time we wanted to consider the coproduct of two types.
Instead, we make one definition in which $A$ and $B$ are variables standing for types;
\index{variable!type}%
in type theory they are called \define{parameters}.%
\indexdef{parameter!of an inductive definition}
Thus technically speaking, what results from the definition is not a single type, but a family of types $+ : \type\to\type\to\type$, taking two types as input and producing their coproduct.
Similarly, the type $\lst A$ of lists\index{type!of lists} is a family $\lst{\blank}:\type\to\type$ in which the type $A$ is a parameter.
In mathematics, this sort of thing is so obvious as to not be worth mentioning, but we bring it up in order to contrast it with the next example.
Note that each type $A+B$ is \emph{independently} defined inductively, as is each type $\lst A$.
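In Lean-style syntax, for example, parameters are simply arguments of the declaration itself, so a single declaration yields the whole family at once (a sketch, with the hypothetical name \texttt{Coprod}):
\begin{verbatim}
-- Each instantiation Coprod A B is an independently defined
-- inductive type; A and B are parameters, fixed throughout.
inductive Coprod (A B : Type) : Type where
  | inl : A -> Coprod A B
  | inr : B -> Coprod A B
\end{verbatim}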
\index{type!family of!inductive}%
\index{inductive!type family}%
By contrast, we might also consider defining a whole type family $B:A\to\type$ by induction \emph{together}.
The difference is that now the constructors may change the index $a:A$, and as a consequence we cannot say that the individual types $B(a)$ are inductively defined, only that the entire family is inductively defined.
\index{type!of vectors}%
\index{vector}%
The standard example is the type of \emph{lists of specified length}, traditionally called \define{vectors}.
We fix a parameter type $A$, and define a type family $\vect n A$, for $n:\nat$, generated by the following constructors:
\begin{itemize}
\item a vector $\nil:\vect 0 A$ of length zero,
\item a function $\cons:\prd{n:\nat} A\to \vect n A \to \vect{\suc (n)} A$.
\end{itemize}
In contrast to lists, vectors (with elements from a fixed type $A$) form a family of types indexed by their length.
While $A$ is a parameter, we say that $n:\nat$ is an \define{index}
\indexdef{index of an inductive definition}%
of the inductive family.
An individual type such as $\vect3A$ is not inductively defined: the constructors which build elements of $\vect3A$ take input from a different type in the family, such as $\cons:A \to \vect2A \to \vect3A$.
\index{induction principle!for type of vectors}
\index{vector!induction principle for}
In particular, the induction principle must refer to the entire type family as well; thus the hypotheses and the conclusion must quantify over the indices appropriately.
In the case of vectors, the induction principle states that given a type family $C:\prd{n:\nat} \vect n A \to \type$, together with
\begin{itemize}
\item an element $c_\nil : C(0,\nil)$, and
\item a function \narrowequation{c_\cons : \prd{n:\nat}{a:A}{\ell:\vect n A} C(n,\ell) \to C(\suc(n),\cons(a,\ell))}
\end{itemize}
there exists a function $f:\prd{n:\nat}{\ell:\vect n A} C(n,\ell)$ such that
\begin{align*}
f(0,\nil) &\jdeq c_\nil\\
f(\suc(n),\cons(a,\ell)) &\jdeq c_\cons(n,a,\ell,f(n,\ell)).
\end{align*}
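A Lean-style sketch shows the difference at a glance: the parameter \texttt{A} stays fixed, while the natural-number index varies from the input of \texttt{cons} to its output, and the induction principle quantifies over the whole family (hypothetical names):
\begin{verbatim}
inductive Vec (A : Type) : Nat -> Type where
  | nil  : Vec A 0
  | cons : (n : Nat) -> A -> Vec A n -> Vec A (n + 1)

-- The induction principle, as dependent matching over the family:
def vecInd {A : Type} {C : (n : Nat) -> Vec A n -> Type}
    (cnil : C 0 .nil)
    (ccons : (n : Nat) -> (a : A) -> (l : Vec A n) -> C n l ->
             C (n + 1) (.cons n a l)) :
    (n : Nat) -> (l : Vec A n) -> C n l
  | _, .nil        => cnil
  | _, .cons n a l => ccons n a l (vecInd cnil ccons n l)
\end{verbatim}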
\index{predicate!inductive}%
\index{inductive!predicate}%
One use of inductive families is to define \emph{predicates} inductively.
For instance, we might define the predicate $\mathsf{iseven}:\nat\to\type$ as an inductive family indexed by $\nat$, with the following constructors:
\begin{itemize}
\item an element $\mathsf{even}_0 : \mathsf{iseven}(0)$,
\item a function $\mathsf{even}_{ss} : \prd{n:\nat} \mathsf{iseven}(n) \to \mathsf{iseven}(\suc(\suc(n)))$.
\end{itemize}
In other words, we stipulate that $0$ is even, and that if $n$ is even then so is $\suc(\suc(n))$.
These constructors ``obviously'' give no way to construct an element of, say, $\mathsf{iseven}(1)$, and since $\mathsf{iseven}$ is supposed to be freely generated by these constructors, there must be no such element.
(Actually proving $\neg \mathsf{iseven}(1)$ is not entirely trivial, however.)
The induction principle for $\mathsf{iseven}$ says that to prove something about all even natural numbers, it suffices to prove it for $0$ and verify that it is preserved by adding two.
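In Lean-style syntax (hypothetical names), the definition, together with the observation that $\mathsf{iseven}(1)$ is uninhabited, looks as follows; the empty pattern match is accepted exactly because no constructor can produce an element of \texttt{IsEven 1}:
\begin{verbatim}
inductive IsEven : Nat -> Type where
  | even0  : IsEven 0
  | evenSS : (n : Nat) -> IsEven n -> IsEven (n + 2)

-- Neither constructor unifies with the index 1, so there are no cases:
def notEven1 : IsEven 1 -> Empty := fun h => nomatch h
\end{verbatim}
(The underlying argument, which Lean's equation compiler carries out automatically, is the not-entirely-trivial proof alluded to above.)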
\index{mathematics!formalized}%
Inductively defined predicates are much used in computer formalization of mathematics and software verification.
But we will not have much use for them, with a couple of exceptions in \cref{sec:ordinals,sec:compactness-interval}.
\index{type!mutual inductive}%
\index{mutual inductive type}%
Another important special case is when the indexing type of an inductive family is finite.
In this case, we can equivalently express the inductive definition as a finite collection of types defined by \emph{mutual induction}.
For instance, we might define the types $\mathsf{even}$ and $\mathsf{odd}$ of even and odd natural numbers by mutual induction, where $\mathsf{even}$ is generated by constructors
\begin{itemize}
\item $0:\mathsf{even}$ and
\item $\mathsf{esucc} : \mathsf{odd}\to\mathsf{even}$,
\end{itemize}
while $\mathsf{odd}$ is generated by the one constructor
\begin{itemize}
\item $\mathsf{osucc} : \mathsf{even}\to \mathsf{odd}$.
\end{itemize}
Note that $\mathsf{even}$ and $\mathsf{odd}$ are simple types (not type families), but their constructors can refer to each other.
If we expressed this definition as an inductive type family $\mathsf{paritynat} : \bool \to \type$, with $\mathsf{paritynat}(\bfalse)$ and $\mathsf{paritynat}(\btrue)$ representing $\mathsf{even}$ and $\mathsf{odd}$ respectively, it would instead have constructors:
\begin{itemize}
\item $0 : \mathsf{paritynat}(\bfalse)$,
\item $\mathsf{esucc} : \mathsf{paritynat}(\btrue) \to \mathsf{paritynat}(\bfalse)$,
\item $\mathsf{osucc} : \mathsf{paritynat}(\bfalse) \to \mathsf{paritynat}(\btrue)$.
\end{itemize}
When expressed explicitly as a mutual inductive definition, the induction principle for $\mathsf{even}$ and $\mathsf{odd}$ says that given $C:\mathsf{even}\to\type$ and $D:\mathsf{odd}\to\type$, along with
\begin{itemize}
\item $c_0 : C(0)$,
\item $c_s : \prd{n:\mathsf{odd}} D(n) \to C(\mathsf{esucc}(n))$,
\item $d_s : \prd{n:\mathsf{even}} C(n) \to D(\mathsf{osucc}(n))$,
\end{itemize}
there exist $f:\prd{n:\mathsf{even}} C(n)$ and $g:\prd{n:\mathsf{odd}}D(n)$ such that
\begin{align*}
f(0) &\jdeq c_0\\
f(\mathsf{esucc}(n)) &\jdeq c_s(n,g(n))\\
g(\mathsf{osucc}(n)) &\jdeq d_s(n,f(n)).
\end{align*}
In particular, just as we can only induct over an inductive family ``all at once'', we have to induct on $\mathsf{even}$ and $\mathsf{odd}$ simultaneously.
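A Lean-style rendering makes this simultaneity visible: the two declarations, and likewise any pair of functions defined by recursion over them, must be given in a single mutual block (hypothetical names):
\begin{verbatim}
mutual
  inductive Even : Type where
    | zero  : Even
    | esucc : Odd -> Even
  inductive Odd : Type where
    | osucc : Even -> Odd
end

-- Recursion must also be performed on Even and Odd together:
mutual
  def evenToNat : Even -> Nat
    | .zero    => 0
    | .esucc o => oddToNat o + 1
  def oddToNat : Odd -> Nat
    | .osucc e => evenToNat e + 1
end
\end{verbatim}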
We will not have much use for mutual inductive definitions in this book either.
\index{type!inductive-inductive}%
\index{inductive-inductive type}%
A further, more radical, generalization is to allow definition of a type family $B:A\to \type$ in which not only the types $B(a)$, but the type $A$ itself, is defined as part of one big induction.
In other words, not only do we specify constructors for the $B(a)$s which can take inputs from other $B(a')$s, as with inductive families, we also at the same time specify constructors for $A$ itself, which can take inputs from the $B(a)$s.
This can be regarded as an inductive family in which the indices are inductively defined simultaneously with the indexed types, or as a mutual inductive definition in which one of the types can depend on the other.
More complicated dependency structures are also possible.
In general, these are called \define{inductive-inductive definitions}.
For the most part, we will not use them in this book, but their higher variant (see \cref{cha:hits}) will appear in a couple of experimental examples in \cref{cha:real-numbers}.
\index{type!inductive-recursive}%
\index{inductive-recursive type}%
The last generalization we wish to mention is \define{inductive-recursive definitions}, in which a type is defined inductively at the same time as a \emph{recursive} function on it.
That is, we fix a known type $P$, and give constructors for an inductive type $A$ and at the same time define a function $f:A\to P$ using the recursion principle for $A$ resulting from its constructors --- with the twist that the constructors of $A$ are allowed to refer also to the values of $f$.
We do not yet know how to justify such definitions from a homotopical perspective, and we will not use any of them in this book.
\index{type!inductive|)}%
\section{Identity types and identity systems}
\label{sec:identity-systems}
\index{type!identity!as inductive}%
We now wish to point out that the \emph{identity types}, which play so central a role in homotopy type theory, may also be considered to be defined inductively.
Specifically, they are an ``inductive family'' with indices, in the sense of \cref{sec:generalizations}.
In fact, there are \emph{two} ways to describe identity types as an
inductive family, resulting in the two induction principles described in
\cref{cha:typetheory}, path induction and based path induction.
In both definitions, the type $A$ is a parameter.
For the first definition, we inductively define a family $=_A : A\to A\to \type$, with two indices belonging to $A$, by the following constructor:
\begin{itemize}
\item for any $a:A$, an element $\refl a : a=_A a$.
\end{itemize}
By analogy with the other inductive families, we may extract the induction principle from this definition.
It states that given any \narrowequation{C:\prd{a,b:A} (a=_A b) \to \type,} along with $d:\prd{a:A} C(a,a,\refl{a})$, there exists \narrowequation{f:\prd{a,b:A}{p:a=_A b} C(a,b,p)} such that $f(a,a,\refl a)\jdeq d(a)$.
This is exactly the path induction principle for identity types.
For the second definition, we consider one element $a_0:A$ to be a parameter along with $A:\type$, and we inductively define a family $(a_0 =_A \blank):A\to \type$, with \emph{one} index belonging to $A$, by the following constructor:
\begin{itemize}
\item an element $\refl{a_0} : a_0 =_A a_0$.
\end{itemize}
Note that because $a_0:A$ was fixed as a parameter, the constructor $\refl{a_0}$ does not appear inside the inductive definition as a function, but only as an element.
The induction principle for this definition says that given $C:\prd{b:A} (a_0 =_A b) \to \type$ along with an element $d:C(a_0,\refl{a_0})$, there exists $f:\prd{b:A}{p:a_0 =_A b} C(b,p)$ with $f(a_0,\refl{a_0})\jdeq d$.
This is exactly the based path induction principle for identity types.
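Both definitions are directly expressible in Lean-style syntax (a sketch; in the first, both arguments are indices and the constructor quantifies over $a$, while in the second the basepoint is a parameter and only one index remains):
\begin{verbatim}
-- Two indices; the constructor quantifies over the common endpoint a.
inductive Id (A : Type) : A -> A -> Type where
  | refl (a : A) : Id A a a

-- The basepoint a0 is a parameter; a single index remains.
inductive IdFrom {A : Type} (a0 : A) : A -> Type where
  | refl : IdFrom a0 a0
\end{verbatim}
(Lean may internally promote the uniformly occurring first index of \texttt{Id} to a parameter, so that both declarations generate the based eliminator; Lean's built-in equality type is declared in essentially this way, though in a definitionally proof-irrelevant universe, so the homotopical phenomena discussed below are absent there.)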
The view of identity types as inductive types has historically caused some confusion, because of the intuition mentioned in \cref{sec:bool-nat} that all the elements of an inductive type should be obtained by repeatedly applying its constructors.
For ordinary inductive types such as \bool and \nat, this is the case: we saw in \cref{thm:allbool-trueorfalse} that indeed every element of \bool is either $\bfalse$ or $\btrue$, and similarly one can prove that every element of \nat is either $0$ or a successor.
However, this is \emph{not} true for identity types: there is only one constructor $\refl{}$, but not every path is equal to the constant path.
More precisely, we cannot prove, using only the induction principle for identity types (either one), that every inhabitant of $a=_A a$ is equal to $\refl a$.
In order to actually exhibit a counterexample, we need some additional principle such as the univalence axiom --- recall that in \cref{thm:type-is-not-a-set} we used univalence to exhibit a particular path $\bool=_\type\bool$ which is not equal to $\refl{\bool}$.
\index{free!generation of an inductive type}%
\index{generation!of a type, inductive|(}%
The point is that, as validated by the study of homotopy-initial algebras, an inductive definition should be regarded as \emph{freely generated} by its constructors.
Of course, a freely generated structure may contain elements other than its generators: for instance, the free group on two symbols $x$ and $y$ contains not only $x$ and $y$ but also words such as $xy$, $yx^{-1}y$, and $x^3y^2x^{-2}yx$.
In general, the elements of a free structure are obtained by applying not only the generators, but also the operations of the ambient structure, such as the group operations if we are talking about free groups.
In the case of inductive types, we are talking about freely generated \emph{types} --- so what are the ``operations'' of the structure of a type?
If types are viewed as like \emph{sets}, as was traditionally the case in type theory, then there are no such operations, and hence we expect there to be no elements in an inductive type other than those resulting from its constructors.
In homotopy type theory, we view types as like \emph{spaces} or $\infty$-groupoids,%
\index{.infinity-groupoid@$\infty$-groupoid}
in which case there are many operations on the \emph{paths} (concatenation, inversion, etc.) --- this will be important in \cref{cha:hits} --- but there are still no operations on the \emph{objects} (elements).
Thus, it is still true for us that, e.g., every element of \bool is either $\bfalse$ or $\btrue$, and every element of $\nat$ is either $0$ or a successor.
However, as we saw in \cref{cha:basics}, viewing types as $\infty$-groupoids entails also viewing functions as functors, and this includes type families $B:A\to\type$.
Thus, the identity type $(a_0 =_A \blank)$, viewed as an inductive type family, is actually a \emph{freely generated functor} $A\to\type$.
Specifically, it is the functor $F:A\to\type$ freely generated by one element $\refl{a_0}: F(a_0)$.
And a functor does have operations on objects, namely the action of the morphisms (paths) of $A$.
In category theory, the \emph{Yoneda lemma}\index{Yoneda!lemma} tells us that for any category $A$ and object $a_0$, the functor freely generated by an element of $F(a_0)$ is the representable functor $\hom_A(a_0,\blank)$.
Thus, we should expect the identity type $(a_0 =_A \blank)$ to be this representable functor, and this is indeed exactly how we view it: $(a_0 =_A b)$ is the space of morphisms (paths) in $A$ from $a_0$ to $b$.
\index{generation!of a type, inductive|)}
\mentalpause
One reason for viewing identity types as inductive families is to apply the uniqueness principles of \cref{sec:appetizer-univalence,sec:htpy-inductive}.
Specifically, we can characterize the family of identity types of a type $A$, up to equivalence, by giving another family of types over $A\times A$ satisfying the same induction principle.
This suggests the following definitions and theorem.
\indexsee{system, identity}{identity system}%
\index{identity!system!at a point|(defstyle}%
\begin{defn}\label{defn:identity-systems}
Let $A$ be a type and $a_0:A$ an element.
\begin{itemize}
\item A \define{pointed predicate}
\indexdef{predicate!pointed}%
\indexdef{pointed!predicate}%
over $(A,a_0)$ is a family $R:A\to\type$ equipped with an element $r_0:R(a_0)$.
\item For pointed predicates $(R,r_0)$ and $(S,s_0)$, a family of maps $g:\prd{b:A} R(b) \to S(b)$ is \define{pointed} if $g(a_0, r_0)=s_0$.
We have
\[ \mathsf{ppmap}(R,S) \defeq \sm{g:\prd{b:A} R(b) \to S(b)} (g(a_0, r_0)=s_0).\]
\item An \define{identity system at $a_0$}
is a pointed predicate $(R,r_0)$ such that for any type family $D:\prd{b:A} R(b) \to \type$ and $d:D(a_0,r_0)$, there exists a function $f:\prd{b:A}{r:R(b)} D(b,r)$ such that $f(a_0,r_0)=d$.
\end{itemize}
\end{defn}
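As a sketch of how these notions might be transcribed into Lean-style syntax (hypothetical names, with ``there exists'' rendered as the subtype of functions satisfying the required equality):
\begin{verbatim}
structure PointedPred (A : Type) (a0 : A) where
  R  : A -> Type
  r0 : R a0

-- (R, r0) is an identity system at a0: every dependent family over it
-- has a section agreeing with the given point d.
def IsIdentitySystem {A : Type} {a0 : A}
    (P : PointedPred A a0) : Type 1 :=
  (D : (b : A) -> P.R b -> Type) -> (d : D a0 P.r0) ->
    { f : (b : A) -> (r : P.R b) -> D b r // f a0 P.r0 = d }
\end{verbatim}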
\begin{thm}\label{thm:identity-systems}
For a pointed predicate $(R,r_0)$ over $(A,a_0)$, the following are logically equivalent.
\begin{enumerate}
\item $(R,r_0)$ is an identity system at $a_0$.\label{item:identity-systems1}
\item For any pointed predicate $(S,s_0)$, the type $\mathsf{ppmap}(R,S)$ is contractible.\label{item:identity-systems2}
\item For any $b:A$, the function $\transfib{R}{\blank}{r_0} : (a_0 =_A b) \to R(b)$ is an equivalence.\label{item:identity-systems3}
\item The type $\sm{b:A} R(b)$ is contractible.\label{item:identity-systems4}
\end{enumerate}
\end{thm}
Note that the equivalences~\ref{item:identity-systems1}$\Leftrightarrow$\ref{item:identity-systems2}$\Leftrightarrow$\ref{item:identity-systems3} are a version of \cref{lem:homotopy-induction-times-3} for identity types $a_0 =_A \blank$, regarded as inductive families varying over one element of $A$.
Of course,~\ref{item:identity-systems2}--\ref{item:identity-systems4} are mere propositions, so that logical equivalence implies actual equivalence.
(Condition~\ref{item:identity-systems1} is also a mere proposition, but we will not prove this.)
Note also that unlike~\ref{item:identity-systems1}--\ref{item:identity-systems3}, statement~\ref{item:identity-systems4} doesn't refer to $a_0$ or $r_0$.
\begin{proof}
First, assume~\ref{item:identity-systems1} and let $(S,s_0)$ be a pointed predicate.
Define $D(b,r) \defeq S(b)$ and $d\defeq s_0: S(a_0) \jdeq D(a_0,r_0)$.
Since $R$ is an identity system, we have $f:\prd{b:A} R(b) \to S(b)$ with $f(a_0,r_0) = s_0$; hence $\mathsf{ppmap}(R,S)$ is inhabited.
Now suppose $(f,f_r),(g,g_r) : \mathsf{ppmap}(R,S)$, and define $D(b,r) \defeq (f(b,r) = g(b,r))$, and let $d \defeq f_r \ct \opp{g_r} : f(a_0,r_0) = s_0 = g(a_0,r_0)$.
Then again since $R$ is an identity system, we have $h:\prd{b:A}{r:R(b)} D(b,r)$ such that $h(a_0,r_0) = f_r \ct \opp{g_r}$.
By the characterization of paths in $\Sigma$-types and path types, these data yield an equality $(f,f_r) = (g,g_r)$.
Hence $\mathsf{ppmap}(R,S)$ is an inhabited mere proposition, and thus contractible; so~\ref{item:identity-systems2} holds.
Now suppose~\ref{item:identity-systems2}, and define $S(b) \defeq (a_0=b)$ with $s_0 \defeq \refl{a_0}:S(a_0)$.
Then $(S,s_0)$ is a pointed predicate, and $\lamu{b:A}{p:a_0=b} \transfib{R}{p}{r_0} : \prd{b:A} S(b) \to R(b)$ is a pointed family of maps from $S$ to $R$.
By assumption, $\mathsf{ppmap}(R,S)$ is contractible, hence inhabited, so there also exists a pointed family of maps from $R$ to $S$.
And the composites in either direction are pointed families of maps from $R$ to $R$ and from $S$ to $S$, respectively, hence equal to identities since $\mathsf{ppmap}(R,R)$ and $\mathsf{ppmap}(S,S)$ are contractible.
Thus~\ref{item:identity-systems3} holds.
Now supposing~\ref{item:identity-systems3}, condition~\ref{item:identity-systems4} follows from \cref{thm:contr-paths}, using the fact that $\Sigma$-types respect equivalences (the ``if'' direction of \cref{thm:total-fiber-equiv}).
Finally, assume~\ref{item:identity-systems4}, and let $D:\prd{b:A} R(b)\to \type$ and $d:D(a_0,r_0)$.
We can equivalently express $D$ as a family $D':(\sm{b:A} R(b)) \to \type$.
Now since $\sm{b:A} R(b)$ is contractible, we have
\[p:\prd{u:\sm{b:A} R(b)} (a_0,r_0) = u. \]
Moreover, since the path types of a contractible type are again contractible, we have $p((a_0,r_0)) = \refl{(a_0,r_0)}$.
Define $f(u) \defeq \transfib{D'}{p(u)}{d}$, yielding $f:\prd{u:\sm{b:A} R(b)} D'(u)$, or equivalently $f:\prd{b:A}{r:R(b)} D(b,r)$.
Finally, we have
\[f(a_0,r_0) \jdeq \transfib{D'}{p((a_0,r_0))}{d} = \transfib{D'}{\refl{(a_0,r_0)}}{d} = d.\]
Thus,~\ref{item:identity-systems1} holds.
\end{proof}
\index{identity!system!at a point|)}%
We can deduce a similar result for identity types $=_A$, regarded as a family varying over two elements of $A$.
\index{identity!system|(defstyle}%
\begin{defn}
An \define{identity system}
over a type $A$ is a family $R:A\to A\to \type$ equipped with a function $r_0:\prd{a:A} R(a,a)$ such that for any type family $D:\prd{a,b:A} R(a,b) \to \type$ and $d:\prd{a:A} D(a,a,r_0(a))$, there exists a function $f:\prd{a,b:A}{r:R(a,b)} D(a,b,r)$ such that $f(a,a,r_0(a))=d(a)$ for all $a:A$.
\end{defn}
\begin{thm}\label{thm:ML-identity-systems}
For $R:A\to A\to\type$ equipped with $r_0:\prd{a:A} R(a,a)$, the following are logically equivalent.
\begin{enumerate}
\item $(R,r_0)$ is an identity system over $A$.\label{item:MLis1}
\item For all $a_0:A$, the pointed predicate $(R(a_0),r_0(a_0))$ is an identity system at $a_0$.\label{item:MLis2}
\item For any $S:A\to A\to\type$ and $s_0:\prd{a:A} S(a,a)$, the type
\[ \sm{g:\prd{a,b:A} R(a,b) \to S(a,b)} \prd{a:A} g(a,a,r_0(a)) = s_0(a) \]
is contractible.\label{item:MLis3}
\item For any $a,b:A$, the map $\transfib{R(a)}{\blank}{r_0(a)} : (a =_A b) \to R(a,b)$ is an equivalence.\label{item:MLis4}
\item For any $a:A$, the type $\sm{b:A} R(a,b)$ is contractible.\label{item:MLis5}
\end{enumerate}
\end{thm}
\begin{proof}
The equivalence~\ref{item:MLis1}$\Leftrightarrow$\ref{item:MLis2} follows exactly the proof of equivalence between the path induction and based path induction principles for identity types; see \cref{sec:identity-types}.
The equivalence with~\ref{item:MLis4} and~\ref{item:MLis5} then follows from \cref{thm:identity-systems}, while~\ref{item:MLis3} is straightforward.
\end{proof}
\index{identity!system|)}%
One reason this characterization is interesting is that it provides an alternative way to state univalence and function extensionality.
\index{univalence axiom}%
The univalence axiom for a universe \UU says exactly that the type family
\[ (\eqv{\blank}{\blank}) : \UU\to\UU\to\UU \]
together with $\idfunc : \prd{A:\UU} (\eqv AA)$ satisfies \cref{thm:ML-identity-systems}\ref{item:MLis4}.
Therefore, it is equivalent to the corresponding version of~\ref{item:MLis1}, which we can state as follows.
\begin{cor}[Equivalence induction]\label{thm:equiv-induction}
\index{induction principle!for equivalences}%
\index{equivalence!induction}%
Given any type family \narrowequation{D:\prd{A,B:\UU} (\eqv AB) \to \type} and function $d:\prd{A:\UU} D(A,A,\idfunc[A])$, there exists \narrowequation{f : \prd{A,B:\UU}{e:\eqv AB} D(A,B,e)} such that $f(A,A,\idfunc[A]) = d(A)$ for all $A:\UU$.
\end{cor}
In other words, to prove something about all equivalences, it suffices to prove it about identity maps.
We have already used this principle (without stating it in generality) in \cref{lem:qinv-autohtpy}.
Similarly, function extensionality says that for any $B:A\to\type$, the type family
\[ (\blank\htpy\blank) : \Parens{\prd{a:A} B(a)} \to \Parens{\prd{a:A} B(a)} \to \type
\]
together with $\lamu{f:\prd{a:A} B(a)}{a:A} \refl{f(a)}$ satisfies \cref{thm:ML-identity-systems}\ref{item:MLis4}.
Thus, it is also equivalent to the corresponding version of~\ref{item:MLis1}.
\begin{cor}[Homotopy induction]\label{thm:htpy-induction}
\index{induction principle!for homotopies}%
\index{homotopy!induction}%
Given any \narrowequation{D:\prd{f,g:\prd{a:A} B(a)} (f\htpy g) \to \type} and $d:\prd{f:\prd{a:A} B(a)} D(f,f,\lam{x}\refl{f(x)})$, there exists
%
\begin{equation*}
k:\prd{f,g:\prd{a:A} B(a)}{h:f\htpy g} D(f,g,h)
\end{equation*}
%
such that $k(f,f,\lam{x}\refl{f(x)}) = d(f)$ for all $f$.
\end{cor}
\sectionNotes
Inductive definitions have a long pedigree in mathematics, arguably going back at least to Frege and Peano's axioms for the natural numbers.\index{Frege}\index{Peano} %
More general ``inductive predicates'' are not uncommon, but in set theoretic foundations they are usually constructed explicitly, either as an intersection of an appropriate class of subsets or using transfinite iteration along the ordinals, rather than regarded as a basic notion.
In type theory, particular cases of inductive definitions date back to Martin-L\"of's original papers: \cite{martin-lof-hauptsatz} presents a general notion of inductively defined predicates and relations; the notion of inductive type was present (but only with instances, not as a general notion) in Martin-L\"of's first papers in type theory \cite{Martin-Lof-1973};
and then as a general notion with $\w$-types in \cite{Martin-Lof-1979}.\index{Martin-L\"of}%
A general notion of inductive type was introduced in 1985 by Constable and Mendler~\cite{DBLP:conf/lop/ConstableM85}. A general schema for inductive types in intensional type theory was suggested in
\cite{PfenningPaulinMohring}. Further developments included \cite{CoquandPaulin, Dybjer:1991}.
The notion of inductive-recursive definition appears in \cite{Dybjer:2000}. An important type-theoretic notion is the notion of tree types (a general expression of the notion of Post system in type theory) which appears in \cite{PeterssonSynek}.
The universal property of the natural numbers as an initial object of the category of $\nat$-algebras is due to Lawvere \cite{lawvere:adjinfound}\index{Lawvere}.
This was later generalized to a description of $\w$-types as initial algebras for polynomial endofunctors by~\cite{mp:wftrees}.\index{endofunctor!algebra for}
The coherently homotopy-theoretic equivalence between such universal properties and the corresponding induction principles (\cref{sec:initial-alg,sec:htpy-inductive}) is due to~\cite{ags:it-hott}.
For actual constructions of inductive types in homotopy-theoretic semantics of type theory, see~\cite{klv:ssetmodel,mvdb:wtypes,ls:hits}.
\sectionExercises
\begin{ex}\label{ex:ind-lst}
Derive the induction principle for the type $\lst{A}$ of lists from its definition as an inductive type in \cref{sec:bool-nat}.\index{type!of lists}
\end{ex}
\begin{ex}\label{ex:same-recurrence-not-defeq}
Construct two functions on natural numbers which satisfy the same recurrence\index{recurrence} $(e_z, e_s)$ judgmentally, but are not judgmentally equal.
\end{ex}
\begin{ex}\label{ex:one-function-two-recurrences}
Construct two different recurrences $(e_z,e_s)$ on the same type $E$ which are both satisfied judgmentally by the same function $f:\nat\to E$.
\end{ex}
\begin{ex}\label{ex:bool}
Show that for any type family $E : \bool \to \type$, the induction operator
\[ \ind{\bool}(E) : \big(E(\bfalse) \times E(\btrue)\big) \to \prd{b : \bool} E(b) \]
is an equivalence.
\end{ex}
\begin{ex}\label{ex:ind-nat-not-equiv}
Show that the analogous statement to \cref{ex:bool} for $\nat$ fails.
\end{ex}
\begin{ex}\label{ex:no-dep-uniqueness-failure}
Show that if we assume simple instead of dependent elimination for $\w$-types, the uniqueness property (analogue of \cref{thm:w-uniq}) fails to hold.
That is, exhibit a type satisfying the recursion principle of a $\w$-type, but for which functions are not determined uniquely by their recurrence\index{recurrence}.
\end{ex}
\begin{ex}\label{ex:loop}
Suppose that in the ``inductive definition'' of the type $C$ at the beginning of \cref{sec:strictly-positive}, we replace the type \nat by \emptyt.
Analogously to~\eqref{eq:fake-recursor}, we might consider a recursion principle for this type with hypothesis
\[ h:(C\to\emptyt) \to (P\to\emptyt) \to P. \]
Show that even without a computation rule, this recursion principle is inconsistent, i.e.\ it allows us to construct an element of \emptyt.
\end{ex}
\begin{ex}\label{ex:loop2}
Consider now an ``inductive type'' $D$ with one constructor $\mathsf{scott}:(D\to D) \to D$.
\index{Scott}%
The second recursor for $C$ suggested in \cref{sec:strictly-positive} leads to the following recursor for $D$:
\[ \rec{D} : \prd{P:\UU} ((D\to D) \to (D\to P)\to P) \to D \to P \]
with computation rule $\rec{D}(P,h,\mathsf{scott}(\alpha)) \jdeq h(\alpha,(\lam{d} \rec{D}(P,h,\alpha(d))))$.
Show that this also leads to a contradiction.
\end{ex}
\begin{ex}\label{ex:inductive-lawvere}
Let $A$ be an arbitrary type and consider generally an ``inductive definition'' of a type $L_A$ with constructor $\mathsf{lawvere}:(L_A\to A) \to L_A$.
The second recursor for $C$ suggested in \cref{sec:strictly-positive} leads to the following recursor for $L_A$:
\[ \rec{L_A} : \prd{P:\UU} ((L_A\to A) \to P) \to L_A\to P \]
with computation rule $\rec{L_A}(P,h,\mathsf{lawvere}(\alpha)) \jdeq h(\alpha)$.
Using this, show that $A$ has the \define{fixed-point property}, i.e.\ for every function $f:A\to A$ there exists an $a:A$ such that $f(a)=a$.
\index{Lawvere}%
\index{fixed-point property}%
In particular, $L_A$ is inconsistent if $A$ is a type without the fixed-point property, such as $\emptyt$, $\bool$, or $\nat$.
\end{ex}
\begin{ex}\label{ex:ilunit}
Continuing from \cref{ex:inductive-lawvere}, consider $L_\unit$, which is not obviously inconsistent since $\unit$ does have the fixed-point property.
Formulate an induction principle for $L_\unit$ and its computation rule, analogously to its recursor, and using this, prove that it is contractible.
\end{ex}
\begin{ex}\label{ex:empty-inductive-type}
In \cref{sec:bool-nat} we defined the type $\lst A$ of finite lists of elements of some type $A$.
Consider a similar inductive definition of a type $\lost A$ whose only constructor is
\[ \cons: A \to \lost A \to \lost A. \]
Show that $\lost A$ is equivalent to $\emptyt$.
\end{ex}
\begin{ex}\label{ex:Wprop}
Suppose $A$ is a mere proposition, and $B:A\to \UU$.
\begin{enumerate}
\item Show that $\wtype{a:A} B(a)$ is a mere proposition.
\item Show that $\wtype{a:A} B(a)$ is equivalent to $\sm{a:A} \neg B(a)$.
\item Without using $\wtype{a:A} B(a)$, show that $\sm{a:A} \neg B(a)$ is a homotopy $\w$-type $\wtypeh{a:A} B(a)$ in the sense of \cref{sec:htpy-inductive}.
\end{enumerate}
\end{ex}
\begin{ex}\label{ex:Wbounds}
Let $A:\UU$ and $B:A\to \UU$.
\begin{enumerate}
\item Show that $\Parens{\sm{a:A} \neg B(a)} \to \Parens{\wtype{a:A} B(a)}$.
\item Show that $\Parens{\wtype{a:A} B(a)} \to \Parens{\neg \prd{a:A} B(a)}$.
\end{enumerate}
\end{ex}
\begin{ex}\label{ex:Wdec}
Let $A:\UU$ and suppose that $B:A\to \UU$ is decidable, i.e.\ $\prd{a:A} (B(a)+\neg B(a))$ (see \cref{defn:decidable-equality}).
Show that $\Parens{\wtype{a:A} B(a)} \to \Parens{\sm{a:A} \neg B(a)}$.
\end{ex}
\begin{ex}\label{ex:Wbounds-loose}
Show that the following are logically equivalent.
\begin{enumerate}
\item $\Parens{\wtype{a:A} B(a)} \to \Brck{\sm{a:A} \neg B(a)}$ for any $A:\set$ and $B:A\to \prop$.\label{item:Wbounds-loose-Sigma}
\item $\Parens{\neg \prd{a:A} B(a)} \to \Brck{\wtype{a:A} B(a)}$ for any $A:\set$ and $B:A\to \prop$.\label{item:Wbounds-loose-Pi}
\item The law of excluded middle (as in \cref{sec:intuitionism}).
\end{enumerate}
Similarly, using \cref{thm:not-lem}, show that it is inconsistent to assume that either implication in~\ref{item:Wbounds-loose-Sigma} or~\ref{item:Wbounds-loose-Pi} holds for all $A:\UU$ and $B:A\to \UU$.
\end{ex}
\begin{ex}\label{ex:Wimpred}
For $A:\UU$ and $B:A\to \UU$, define
\[ W'_{A,B} \defeq \prd{R:\UU} \Parens{\prd{a:A} (B(a) \to R) \to R} \to R. \]
$W'_{A,B}$ is called the \define{impredicative encoding of $\wtype{a:A} B(a)$}.
\index{impredicative!encoding of a W-type@encoding of a $\w$-type}%
\index{W-type@$\w$-type!impredicative encoding of}%
Note that unlike $\wtype{a:A} B(a)$, it lives in a higher universe than $A$ and $B$.
\begin{enumerate}
\item Show that $W'_{A,B}$ is logically equivalent (as defined in \cref{sec:pat}) to $\wtype{a:A} B(a)$.
\item Show that $W'_{A,B}$ implies $\neg\neg \sm{a:A} \neg B(a)$.
\item Without using $\wtype{a:A} B(a)$, show that $W'_{A,B}$ satisfies the same \emph{recursion} principle as $\wtype{a:A} B(a)$ for defining functions into types in the universe $\UU$ (to which it itself does not belong).
\item Using \LEM, give an example of an $A:\UU$ and a $B:A\to \UU$ such that $W'_{A,B}$ is not equivalent to $\wtype{a:A} B(a)$.
\end{enumerate}
\end{ex}
\begin{ex}\label{ex:no-nullary-constructor}
Show that for any $A:\UU$ and $B:A\to\UU$, we have
\[ \eqv{\neg\Parens{\wtype{a:A} B(a)}}{\neg\Parens{\sm{a:A} \neg B(a)}}. \]
In other words, $\wtype{a:A} B(a)$ is empty if and only if it has no nullary constructor.
(Compare to \cref{ex:empty-inductive-type}.)
\end{ex}
% Local Variables:
% TeX-master: "hott-online"
% End: