diff --git "a/stack-exchange/math_stack_exchange/shard_112.txt" "b/stack-exchange/math_stack_exchange/shard_112.txt" deleted file mode 100644--- "a/stack-exchange/math_stack_exchange/shard_112.txt" +++ /dev/null @@ -1,5217 +0,0 @@ -TITLE: "Natural" example of cosets -QUESTION [57 upvotes]: Do you know natural/concrete/appealing examples of right/left cosets in group theory? -This notion is a powerful tool but also a very abstract one for beginners, which is why I'm looking for friendly examples. - -REPLY [2 votes]: The slide rule is an old analog computing device that can be considered as being based on the quotient group $(\mathbb{R}_+^*, \times)/\{10^k, \ k \in \mathbb{Z} \}$, which could just as well be called the "floating point universe". An example: -$$\cdots \ \equiv \ 7530 \ \equiv \ 753 \ \equiv \ 75.3 \ \equiv \ 7.53 \ \equiv \ 0.753 \ \equiv 0.0753 \equiv \cdots $$<|endoftext|> -TITLE: Type Theory for Beginners -QUESTION [11 upvotes]: One of the first things one learns at university is some foundations of mathematics. This covers topics such as sets, functions between sets, relations, logic & proof, ... -I learned this stuff too. But it does not satisfy me because some things seem too unnatural to me. In this blog post, Dr. Shulman explains that these problems with the "standard" foundations (ZFC, ETCS) are solved by type theory. -The standard reference to learn type theory is the Homotopy Type Theory book. But this book is too complicated for me, since I am a beginner. That is why I am searching for material from which I can learn type theory. This material should be written for the same audience for which the introductions to sets, functions, ... mentioned above are written. - -REPLY [2 votes]: I'd suggest "Type theory and functional programming" by Simon Thompson. It's a bit older, but it gives a much gentler overview of type theory than the HoTT book. In contrast to Pierce's TAPL, it handles the full version of dependent type theory rather than the more limited theories such as F_omega. It's freely available from https://www.cs.kent.ac.uk/people/staff/sjt/TTFP/. -Alternatively, if you want a more practical introduction to the subject then I recommend the recent book "Verified functional programming in Agda" by Aaron Stump. It is very accessible to newcomers yet it builds up to some impressive applications of type theory.<|endoftext|> -TITLE: How to find the zeros of this function? -QUESTION [9 upvotes]: There is a function, called $f(x)$, where: -$$ f(x) = 2(x-a) + 2\cos x (\sin x - b) $$ -$a$ and $b$ are constants. I would like to find all the possible values of $x$ where $ f(x) = 0 $ - -I've tried to solve it this way: -First I simplified the equation: -$$ 2x - 2a + 2\cos x\sin x - 2b\cos x = 0 $$ -Then I replaced $2\cos x\sin x$ with $\sin 2x$, and moved it to the other side: -$$ 2a - 2x + 2b\cos x = \sin 2x$$ -After that I used the arcsine function: -$$ x_1 = \frac{1}{2} \arcsin(2a - 2x + 2b\cos x) + 2n\pi$$ -$$ x_2 = \pi - \frac{1}{2} \arcsin(2a - 2x + 2b\cos x) + 2n\pi$$ -I don't know how to continue it. It is probably a dead end. Could you please give me hints about how I should solve it? -I would like to express $x$ without using $x$. - -REPLY [6 votes]: This is not an answer (as it does not really solve the stated question), but perhaps the different viewpoint is useful to someone. -The original equation -$$ 2 (x - a) + 2 \cos(x)(\sin(x) - b) = 0$$ -can also be simplified to -$$ x = a + b \cos(x) - \cos(x) \sin(x)$$ -and since $\cos(x) \sin(x) = \sin(2 x)/2$, to -$$ x = a + b \cos(x) - 1/2 \sin(2x)$$ -or -$$ x - a = b \cos(x) - 1/2 \sin(2x) \tag{1}\label{1}$$ -This also means that the range of possible solutions is quite limited, -$$ a - \lvert b \rvert - 1/2 \; \le \; x \; \le a + \lvert b \rvert + 1/2 $$ -i.e. to a $(2 \lvert b \rvert + 1)$-sized range around $a$: -$$ \lvert x - a \rvert \; \le \; \lvert b \rvert + 1/2 $$ -Note that the left side of equation $\eqref{1}$ is a straight line with slope $1$ ($y = x - a$). The right side is a $2 \pi$-periodic function with amplitude $\lvert b \rvert+1/2$ (unless $b = 0$, in which case the right side is a $\pi$-periodic sine wave with amplitude $1/2$). The solutions are their intersections. When finding numerical solutions for the general case (i.e., $a$ and $b$ are given numerically), this approach intuitively yields very good starting points, so that simple iterative methods can be used to find all solutions rapidly.
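-To make the last remark concrete, here is a minimal numerical sketch of that strategy (a sketch only, assuming NumPy and SciPy are available; the function name and sampling density are my own choices):
-
-    import numpy as np
-    from scipy.optimize import brentq
-
-    def all_roots(a, b, samples=2000):
-        # f(x) = 2(x - a) + 2 cos(x) (sin(x) - b)
-        f = lambda x: 2 * (x - a) + 2 * np.cos(x) * (np.sin(x) - b)
-        # by the bound above, every root lies in [a - |b| - 1/2, a + |b| + 1/2]
-        lo, hi = a - abs(b) - 0.5, a + abs(b) + 0.5
-        xs = np.linspace(lo, hi, samples)
-        roots = []
-        for x0, x1 in zip(xs[:-1], xs[1:]):
-            if f(x0) * f(x1) < 0:  # sign change: a root is bracketed here
-                roots.append(brentq(f, x0, x1))
-        return roots
-
-Increasing samples guards against pairs of closely spaced roots; tangential (double) roots can still be missed by a pure sign-change scan.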
<|endoftext|> -TITLE: Convex dense subset of $\Bbb{R}^n$ is the entire space -QUESTION [7 upvotes]: Say we have a convex dense set $X\subset\Bbb{R}^n$, does it follow that $X=\Bbb{R}^n$? -For $n=1$ it's true because convex sets of real numbers are intervals, and if such a set is dense then it's $\Bbb{R}$. -Anyway, it seems more difficult in the general case; perhaps it's false, I don't know. -Any ideas? - -REPLY [2 votes]: Using induction over the dimension $n$, it is easy to show that, given any set of $2^n$ points meeting every quadrant of $\Bbb R^n$, the zero vector is in their convex hull (a sketch of this induction is given at the end of this answer). -Let $Q_i$ denote the $i$-th quadrant for $i=1,\dots,2^n$. Assume $a$ is a point in $\Bbb R^n$. Since $a+Q_i$ is open, we can pick a point $x_i$ from $X\cap(a+Q_i)$. Now since $a$ is in the convex hull of these $x_i$ and $X$ is convex, $a$ must be in $X$.
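-One way to run that induction (my own write-up of the standard argument): for $n=1$ the two points lie in $(-\infty,0)$ and $(0,\infty)$, so the segment between them contains $0$. For the inductive step, pair the $2^n$ quadrants of $\Bbb R^n$ so that the two members of each pair differ exactly in the sign of the last coordinate. The segment joining the two chosen points of a pair meets the hyperplane $x_n=0$ in a point whose first $n-1$ coordinates retain the common sign pattern of the pair. These $2^{n-1}$ intersection points meet every quadrant of $\Bbb R^{n-1}\cong\{x_n=0\}$, so by induction $0$ lies in their convex hull, and hence in the convex hull of the original $2^n$ points.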
<|endoftext|> -TITLE: Relation between reflection group and coxeter group -QUESTION [7 upvotes]: Reflection groups are defined here: https://en.wikipedia.org/wiki/Reflection_group. -An abstract Coxeter group is defined to have generators $s_1$, $s_2$, ..., $s_n$ and relations $s^2_i=e$, $(s_is_j)^{m_{ij}}=e$ for some $2\leq m_{ij}\leq \infty$. -I don't see why, viewed as an abstract group, every reflection group is a Coxeter group. Can somebody give me an example to explain this? Thanks in advance. - -REPLY [7 votes]: We have $(s_i)^2 = e$ because if we repeat the same reflection twice in a row we end up back where we started. -Since $s_i s_j$ is an element of a group, it has an order (possibly infinity), which we denote by $m_{ij}$. -So in any reflection group, conditions of this form are at least satisfied. It remains to show that they're sufficient to completely define the group. As far as I know this is not so easy to explain, but Coxeter does this by characterizing the possible fundamental domains of a reflection group and then exploiting their polyhedral geometry. See: - -Coxeter, H. S. M. "Discrete Groups Generated by Reflections". Annals of Mathematics 35.3 (1934): 588–621. - -where this is Theorem 8.<|endoftext|> -TITLE: What does this (double absolute value like) notation mean? -QUESTION [18 upvotes]: Here, -$$\left\lVert\frac{\partial\bf x}{\partial s}\times\frac{\partial\bf x}{\partial t}\right\rVert$$ -the inside will in the end be a vector, and two absolute value signs enclose it. What does it mean? -Can someone explain it to me? -$||\vec a||$ - -REPLY [3 votes]: This notation represents the magnitude of whatever is inside, generally a vector. See this.<|endoftext|> -TITLE: Total space of a finite rank locally free sheaf, Vakil's 17.1.4 & 17.1.G -QUESTION [7 upvotes]: If $X$ is a scheme and $\mathcal{F}$ is a locally free sheaf of rank $n$, then in Vakil's book the total space of $\mathcal{F}$ is defined to be $Spec(\text{Sym}^\bullet \mathcal{F}^\vee)$, the relative spectrum of the sheaf of algebras. The total space is a rank $n$ vector bundle, which is easy to see: just take any open subset $U$ such that $\mathcal{F}|_U$ is trivial, then we have -\begin{equation} -Spec(\text{Sym}^\bullet \mathcal{F}^\vee|_U) \cong \mathbb{A}_U^n -\end{equation} -In Ex. 17.1.G, it claims that $\mathcal{F}$ is isomorphic to the sheaf of sections of the total space $Spec(\text{Sym}^\bullet \mathcal{F}^\vee)$. It is easy to see that the sections of $Spec(\text{Sym}^\bullet \mathcal{F}^\vee)$ on $U$ are isomorphic to $\mathcal{O}_U^n$, i.e. when $U$ is affine, say $U=\text{Spec}(A)$, then we have $\text{Sym}^\bullet \mathcal{F}^\vee|_U \cong A[x_1,\cdots,x_n]$ and the sections are in one-to-one bijection with the $A$-maps $A[x_1,\cdots,x_n] \rightarrow A$, each of which is determined by the values of the $x_i$; thus the set of sections is isomorphic to $A^n$. -However, I could not show the transition functions are actually the same. If the transition functions for $\mathcal{F}$ are $T_{ij}$ with respect to the affine open cover $U_i$, then the transition function for $\mathcal{F}^\vee$ is $(T_{ij}^t)^{-1}$, the inverse of the transpose. It seems to me that the transition function for the vector bundle $Spec(\text{Sym}^\bullet \mathcal{F}^\vee)$ is actually $(T_{ij}^t)^{-1}$. How could the transition function for the sheaf of its sections be the same as that of $\mathcal{F}$, i.e. $T_{ij}$? - -REPLY [4 votes]: First of all, pick a basis $\{v_k^i\}$ for $\mathcal{F}$ on each trivializing open set $U_i$, and let $\{x_k^i\}$ be its dual basis, which trivializes $\mathcal{F}^\vee$. Over some open affine subset of an intersection $U_{ij}$, the total space is $\DeclareMathOperator{spec}{Spec} \spec(A[x_1^i,\ldots ,x_n^i])\cong \spec(A[x_1^j,\ldots ,x_n^j])$, where the isomorphism is the transition function $\phi_{ij}$ we are looking for. It is induced by the map of rings in the opposite direction -$$ -\phi_{ij}^*:A[x_1^j,\ldots ,x_n^j]\to A[x_1^i,\ldots ,x_n^i] -$$ -which maps a generator $x_k^j$ to $T_{ij}^t(x_k^j)$. If we let $v$ be a point in the total space, then $\phi_{ij}(v)$ is such that $x_k^j(\phi_{ij}(v))=\phi_{ij}^*(x_k^j)(v)$, just by how a ring homomorphism induces a map of schemes. Now the fact that $x_k^j\circ \phi_{ij}=\phi_{ij}^*(x_{k}^j)$ for any basis element implies that $\phi_{ij}$ is the dual map to $\phi_{ij}^*$, that is, its matrix with respect to the bases $\{v_k^i\}$ and $\{v_k^j\}$ will be the transpose of $T_{ij}^t$, which gives us $T_{ij}$ back. -Summing up: the $x_i$'s form a basis for the dual of the fiber of the total space, and if the transition function acts like $T_{ji}^t$ on the dual, it must act on the original space as $T_{ij}$.<|endoftext|> -TITLE: Every finite connected space is also path-connected? -QUESTION [5 upvotes]: Let $X$ be a connected space; if $X$ is finite, is $X$ then path-connected? If so, how to prove it? If not, how to give a counterexample? -Thanks in advance. - -REPLY [4 votes]: For $x \in X$, let $U_x := \bigcap \{U \subseteq X : U \text{ open}, x\in U\}$ denote the smallest open set containing $x$. - -Lemma 1.
Let $x, y \in X$, then there is a sequence $x_0 = x, x_1, \ldots, x_n= y$ such that for each $i$, either $x_i \in U_{x_{i+1}}$ or $x_{i+1} \in U_{x_i}$. - -Proof. For $x \in X$, let $A \subseteq X$ denote the set of points $y\in X$ for which such a sequence exists. We have $x \in A$, and for any $y \in A$, we have $U_y \subseteq A$, hence $A$ is open. If $z \not\in A$, then $U_z \subseteq X \setminus A$, hence $A$ is closed. As $X$ is connected, $A=X$. - -Lemma 2. If $x \in U_y$, then there is a path connecting $x$ and $y$. - -Proof. Define $w \colon [0,1] \to X$ by $w(t) = x$ for $t < 1$ and $w(1) = y$. Let $U \subseteq X$ open, if $y \not\in U$, then $w^{-1}[U] \in \{\emptyset, [0,1)\}$, hence $w^{-1}[U]$ is open, if $y \in U$, then $U_y \subseteq U$, hence $x \in U$, therefore $w^{-1}[U] = [0,1]$. Therefore $w$ is continuous. - -Proposition. $X$ is path-connected. - -Proof. Let $x,y \in X$, by Lemma 1 there is a sequence $x_0, \ldots, x_n$ with $x_0 = x$, $x_n = y$, and $x_i \in U_{x_{i+1}}$ or $x_{i+1} \in U_{x_i}$ for all $i$. By Lemma 2, $x_i$ and $x_{i+1}$ are connectable by a path $w_i$. -As this is true for all $i$, $x_0$ and $x_n$ are connected by a path.<|endoftext|> -TITLE: What is the number of $n \times n$ binary matrices $A$ such that $\det(A) = \text{perm}(A)$? -QUESTION [18 upvotes]: Recall that the permanent is the 'positive analog' of the determinant whereby the signs in the cofactor expansion process are taken as positive. That is, the permanent is the immanant corresponding to the trivial character. -Many enumerative problems involving permutations and many enumerative problems involving graph theory may be reformulated using the permanents of binary matrices. -I have previously considered the natural combinatorial problem of determining the number A192892$(n)$ of $n \times n$ binary matrices $A$ such that $\det\left(A\right) = \text{perm}\left(A\right)$. Observe that A192892$(n)$ is also equal to the number of binary matrices $\left( a_{i, j} \right)_{n \times n}$ such that the product $$a_{1, \sigma(1)}a_{2, \sigma(2)}\cdot \cdots \cdot a_{n, \sigma(n)}$$ vanishes for all odd permutations $\sigma \in S_{n}$. -I have computed A192892$(n)$ for $n \leq 4$. Obviously, brute force algorithms for this enumerative problem are very inefficient. So it is natural to ask: -(1) Is there a simple combinatorial formula for A192892$(n)$? -(2) Is there a polynomial-time algorithm for computing A192892$(n)$? - -REPLY [7 votes]: To address questions (1) and (2), let's start with hardmath's comment. The set of matrices with zero permanent is a subset of the set we want to count, and it's easier to describe. Nonetheless, the number of such matrices is recorded in https://oeis.org/A088672 only up to $6\times6$, combining the efforts of three contributors. In the literature, we find this article: - -C. J. Everett and P. R. Stein, The asymptotic number of (0,1)-matrices with zero permanent, Disc. Math. 6 (1973), 29–34. - -The authors apply standard techniques such as Inclusion-Exclusion, but they can only get asymptotic bounds. All this suggests to me that there is no obvious, simple formula or algorithm. Maybe there exists a simple formula, but finding it will take some novel insight into these problems. -Now on to brute force. If we naively iterate through all $2^{n^2}$ matrices and check each one against all $n!/2$ odd permutations, then we can compute only a few terms of the sequence. 
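-For concreteness, here is a minimal Python sketch of that naive enumeration (function and variable names are my own; it is only practical for very small $n$):
-
-    from itertools import permutations, product
-
-    def parity(p):
-        # sign of a permutation via its inversion count
-        return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
-
-    def a192892_naive(n):
-        odd = [p for p in permutations(range(n)) if parity(p) == -1]
-        count = 0
-        for bits in product((0, 1), repeat=n * n):
-            A = [bits[r * n:(r + 1) * n] for r in range(n)]
-            # det(A) == perm(A) over {0,1} entries iff every odd-permutation product vanishes
-            if all(any(A[i][p[i]] == 0 for i in range(n)) for p in odd):
-                count += 1
-        return count
-
-    # a192892_naive(2) == 12 and a192892_naive(3) == 343, matching the values listed later in this answer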
Even if we perform 20 billion checks per second, computing $a(7)$ would take two years, and $a(8)$ would take 600,000 years. -Now, it's hard to erase that $2^{n^2}$ term. Still, there are several key speedups, each of which makes roughly another term feasible. They can be implemented in this order: - -Instead of iterating over possible values of the last row, compute how many spaces on the last row are "free", meaning that putting a $1$ there would not complete an odd permutation with the given values in the previous $n-1$ rows. If $k$ is the number of free spaces, then $2^k$ is the number of values of the last row that avoid all odd permutations, so we can immediately add that number to our running total. This increases our efficiency by a factor of $2^n/n$. -Dynamic programming. As we iterate through the rows of the matrix, filling in values, reduce the set of odd permutations to check by grouping them according to their values in the remaining rows. By the time we enumerate the next-to-last row, we have only $n^2$ permutations to check. This increases our efficiency by a factor of $n!/n^2$. -Column swaps. Modulo the sign of the permutations, we really only have to inspect representative matrices from orbits under the action of the symmetry group $(S_n\times S_n)\rtimes\mathbb Z/2$, which acts by permuting the rows and columns and transposing the matrix. In other words, we only need to check bipartite graphs up to graph isomorphism. Now, it's a little awkward to solve the graph isomorphism problem in the inner loop of the algorithm. A better tactic is to break up the orbits into more predictable chunks. First, we don't want to give up the big speedup that we got from projecting out the last row, so the symmetry group is reduced to $S_{n-1}\times S_n$. Let's take advantage of the bigger $S_n$ factor first, which swaps columns. We do this by only enumerating columns in increasing order, where the order is defined by taking the value of a column as a binary expansion, where the top row is the most significant bit. This can be computed row-by-row, and it dramatically reduces the number of matrices considered, by almost $n!$. I say "almost" for two reasons. One, some matrices will have large stabilizers under this action, equivalently small orbits. Second, we now have to keep track of even permutation matrices as well as odd permutation matrices, since some orbits will have to include odd permutations of the columns. And it takes a little effort to enforce the ordering and compute the stabilizers. But those problems aside, this increases our efficiency by $n!$. -Row swaps. We can't get a similar $(n-1)!$ speedup by enforcing the same ordering among rows, because the previous column permutations don't preserve the binary value of a row. We can get close, though, by ordering rows according to their Hamming weight, which is preserved by column permutations. There are only $n+1$ possible weights, so this scheme results in unfortunately large stabilizers, and therefore unfortunately small orbits. But even if the typical stabilizer order is $2^{n-1}$, that's still a speedup of $(n-1)!/2^{n-1}$, which is totally worthwhile. -Code optimization. A subset of the set of $2\times n$ partial permutation matrices can be represented as an $n^2$ bitfield. If $n\leq8$, this fits in a single $64$-bit machine register, and using $256$-bit SIMD instructions, we can process $4$ of those per cycle. 
Use a low-level language with support for generic programming, like C++, to eliminate dynamic allocation, minimize space of arrays, and optimize each row individually. Also, the problem is embarrassingly easy to parallelize to multiple CPU cores, or even multiple machines. - -Improvements 1-4 are implemented in https://github.com/Culter/PermDet. They result in the following values: -$a(0)=1$ -$a(1)=2$ -$a(2)=12$ -$a(3)=343$ -$a(4)=34997$ -$a(5)=12515441$ -$a(6)=15749457081$ -$a(7)=72424550598849$ -$a(8)=1282759836215548737$ -In fact, $a(9)$ is well in reach using this algorithm, but it would take greater-than-$64$-bit arithmetic to implement, which I haven't done.<|endoftext|> -TITLE: Compute the limit of a recursively defined sequence in terms of its initial values -QUESTION [5 upvotes]: Consider the sequence $\{ a_n \}$ defined recursively in terms of $a_1$ and $a_2$ by -$$ a_{n+1} = \frac{a_n + a_{n-1}}{2} $$ -for $n \geq 2$. Assuming this sequence converges, find the limit in terms of $a_1$ and $a_2$. -The book provides the answer as $\frac{1}{3} a_1 + \frac{2}{3} a_2$, but I don't see how to arrive at this. Usually, for recursive sequences, if we can assume the limit exists, then we say it is $L$ and then use the recursion to solve for $L$. I don't see how to accomplish that in this case. - -REPLY [4 votes]: Jack D'Aurizio's solution is nice and short, but the idea for it may seem to come out of thin air. Here's another way to think about it that might occur to you fairly naturally if you recognize that by averaging at each step you're moving halfway from $a_n$ to $a_{n-1}$ in order to reach $a_{n+1}$. Each step is half the size of the previous one, so the sequence should certainly converge, and the cumulative effect of the steps ought to be fairly easy to sort out. -To do this a bit more formally, let $d=a_2-a_1$, so that $a_1=a_2-d$. You start at $a_1$ and visit $a_2,a_3$, and so on in turn. Your first step is to add $d$ to reach $a_2$. Your second step takes you to -$$a_3=\frac{a_1+a_2}2=\frac{(a_2-d)+a_2}2=a_2-\frac{d}2\;.$$ -Note that $a_2=a_3+\frac{d}2$. Thus, your third step takes you to -$$a_4=\frac{a_2+a_3}2=\frac{\left(a_3+\frac{d}2\right)+a_3}2=a_3+\frac{d}4\;.$$ -Note that $a_3=a_4-\frac{d}4$. Thus, your fourth step takes you to -$$a_5=\frac{a_3+a_4}2=\frac{\left(a_4-\frac{d}4\right)+a_4}2=a_4-\frac{d}8\;.$$ -In general it appears that the $n$-th step is of length $\dfrac{d}{2^{n-1}}$ but with an alternating sign: -$$a_{n+1}=a_n+(-1)^{n+1}\frac{d}{2^{n-1}}\;.$$ -This is easily verified by induction, and the path is therefore -$$a_1+d-\frac{d}2+\frac{d}{2^2}-\frac{d}{2^3}+\ldots\;,$$ -where the partial sum after $n$ terms is simply $a_n$. The limit of the sequence $\langle a_n:n\in\Bbb Z^+\rangle$ must then be -$$a_1+d\sum_{n\ge 0}\frac{(-1)^n}{2^n}=a_1+d\sum_{n\ge 0}\left(-\frac12\right)^n\;,$$ -and you can easily evaluate the geometric series to express this in terms of $a_1$ and $a_2$.
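-A third route, for readers who know linear recurrences (the standard characteristic-equation technique, not taken from the book): the recursion $2a_{n+1}=a_n+a_{n-1}$ has characteristic polynomial
-$$2x^2-x-1=(2x+1)(x-1),$$
-with roots $1$ and $-\tfrac12$, so $a_n=A+B\left(-\tfrac12\right)^n$ for constants $A,B$ determined by the initial values. Since $\left(-\tfrac12\right)^n\to0$, the limit is $A$; solving $a_1=A-\tfrac{B}{2}$ and $a_2=A+\tfrac{B}{4}$ gives $A=\tfrac13 a_1+\tfrac23 a_2$, in agreement with the book's answer.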
<|endoftext|> -TITLE: How can the inverse map be a morphism of algebraic varieties? -QUESTION [5 votes]: This is a very basic question about algebraic groups, which I'm just starting to learn a little bit about: -For an algebraic variety to be an algebraic group, the inverse map needs to be a morphism of algebraic varieties, but I don't see how this can be true. My understanding is that morphisms are locally polynomials. -Specifically, I'm working on a problem involving the multiplicative group of the affine line, but I don't see how the inverse map is a morphism of the variety given that it's not a polynomial. - -REPLY [5 votes]: Let $k$ be algebraically closed, and let $G$ be the closed set $Z = V(XY-1)$. This can be identified with $k^{\ast}$, via $t \mapsto (t, \frac{1}{t})$. We want to show that the "inverse map" $\phi: Z \rightarrow Z$ given by $\phi(t,\frac{1}{t}) = (\frac{1}{t},t)$ is a morphism of varieties. - -Fact: let $Z_1 \subseteq k^n$ and $Z_2 \subseteq k^m$ be Zariski closed sets, and $\phi: Z_1 \rightarrow Z_2$ a function. Then $\phi$ is a morphism if and only if there exist polynomials $f_i(X_1, ... , X_n) \in k[X_1, ... , X_n], 1 \leq i \leq m$ such that $$\phi(x_1, ... , x_n) = (f_1(x_1, ... , x_n), ... , f_m(x_1, ... , x_n))$$ for every $(x_1, ... , x_n)$. - -Now take $Z_1 = Z_2 = Z$, and $n = m = 2$. Let $f_1(X,Y) = Y$, and $f_2(X,Y) = X$. Then clearly for $(x,y) \in Z$ (that is, for $(x,y) \in k^2$ satisfying $x= \frac{1}{y}$) $$\phi(x,y) = (\frac{1}{x},\frac{1}{y}) = (y,x) = (f_1(x,y),f_2(x,y)) $$<|endoftext|> -TITLE: Proving that the eigenfunctions of the Laplacian form a basis of $L^2(\Omega)$ (and of $H_0^1(\Omega)$) -QUESTION [9 upvotes]: I am studying the eigenfunctions and eigenvalues of the Laplacian on an open, bounded domain $\Omega \subset \mathbb{R}^n$ with homogeneous Dirichlet boundary conditions. I have read about the weak and variational formulation of the problem. I understand the result that the first eigenvalue is given by: $$ \lambda_1 = \inf_{H_0^1(\Omega)} R = \inf_{v \in H_0^1(\Omega)} \frac{\int_\Omega \| \nabla v\|^2 \, \mathrm{d}x}{\int_\Omega v^2 \, \mathrm{d}x}$$ -and the associated eigenfunction is $u_1$ such that $R(u_1) = \lambda_1$, as well as the characterization of the nth eigenvalue/eigenfunction. I have proved some of the basic results, such as the fact that the sequence of eigenvalues is unbounded and eigenfunctions associated to different eigenvalues are orthogonal (both in $L^2(\Omega)$ and $H_0^1(\Omega)$). -Now I am trying to prove that the eigenfunctions $u_1,u_2,\ldots$ form a basis of $L^2(\Omega)$. I have seen some proofs of this fact (e.g. Jost's book on PDEs and Mihai Nica's article), but I am trying to use a different approach. I have a sketch of this proof but I need to fill in some details. -The proof argues by contradiction. We define $V$ to be the closure in $H_0^1(\Omega)$ of the set: -$$\left\{ u \in H_0^1(\Omega) : \exists \, N \in \mathbb{N}: u = \sum_{n=1}^N\alpha_nu_n\right\},$$ -where $\alpha_n \in \mathbb{R}$ and the $u_n$'s are the eigenfunctions. Then $V$ is a closed subspace of $H_0^1(\Omega)$. We assume, for the sake of contradiction, that $V \neq H_0^1(\Omega)$. -1) This should then imply that $V^\perp \neq \{0\}$, but I am not sure why. I know that it is not necessary for a subset of a Hilbert space $H$ to be equal to the whole space $H$ for its orthogonal complement to be trivial. However here the subspace is closed, so it is possible that this may force the orthogonal complement to be non-trivial, but I am not sure if that is enough. -Then $V^\perp$ is a non-trivial closed subspace of $H_0^1(\Omega)$ and hence we can apply the same methods used to determine the existence of eigenvalues and eigenfunctions to deduce the existence of an eigenfunction $u$ in $V^\perp$. 
It can then be shown that the eigenvalue associated to this eigenfunction must equal one of the previously determined eigenvalues (I understand how to do this). -2) Then I am not sure why this leads us to a contradiction. My guess is that then this $u\in V^\perp$ should equal some $u_n \in V$ (it is obvious that $V$ contains all the $u_n$'s) and this would imply that $V\cap V^\perp \neq \{0\}$, which is impossible. -3) This would then prove that $V = H_0^1(\Omega)$ but I have some doubts/confusion as to why this shows that the $u_n$'s form a basis of $H_0^1(\Omega)$, if they do. -4) Then I am not sure how to extend this to $L^2(\Omega)$. I suspect that the fact that $H_0^1(\Omega)$ is dense in $L^2(\Omega)$ should play a role. -I would be very grateful for any help (and/or references) on the numbered points. - -REPLY [4 votes]: I know a different approach, and I think it is more "standard." Basically it is sufficient to prove that if $\Omega \subset \mathbb{R}^n$ is open and bounded, then $(-\Delta)^{-1}$ is a compact and injective self-adjoint operator on $L^2(\Omega)$ and on $H^1_{0}(\Omega)$, and then to apply the Hilbert-Schmidt spectral theorem. Recalling a little the proof of the Hilbert-Schmidt spectral theorem, I think that what you ask lies precisely in the proof of this theorem. More precisely there is the following theorem -"If $H$ is a real (it is true also in the complex case) Hilbert space and $K:H \longrightarrow H$ is a compact self-adjoint operator, then there exists an orthonormal basis of eigenvectors $\lbrace u_n \rbrace$ of $K$ with eigenvalues $\lbrace \lambda_n \rbrace$ and it has the representation -$\displaystyle Kx = \sum_{n \geq 1} \lambda_n (x, u_n)_H u_n$ $(x \in H)$" -Now, the facts you say apply generally to elliptic operators in divergence form, i.e. of the type -$\displaystyle Lu:= - \sum_{i,j=1}^n (a_{ij} u_{x_i})_{x_j} + \sum_{i=1}^n (b_i u)_{x_i} +c u$ -where $a_{ij}, b_i, c \in L^\infty(\Omega)$, and one assumes that $L$ is uniformly elliptic. Basically the case of the Laplace operator is a particular case of $L$. After we introduce weak solutions and assume $b_i=c=0$, there is the following theorem -"If $a_{ij}=a_{ji} \in L^\infty(\Omega)$, and considering $L^{-1} : L^2(\Omega) \longrightarrow H^1_{0}(\Omega) \subset L^2(\Omega)$, then $L^{-1}$ is a compact and injective self-adjoint operator. In addition there is an orthonormal basis $\lbrace \phi_k : k \in \mathbb{N} \rbrace$ of $L^2(\Omega)$ of eigenfunctions associated to the eigenvalues of $L^{-1}$ and -$\displaystyle L^{-1}f =\sum_{k=1}^\infty \lambda_k (f, \phi_k)_{L^2} \phi_k$ -where $Lu=f$ with $f \in L^2(\Omega)$. In particular $\lim_{k \rightarrow \infty} \lambda_k =0$". -This whole theory is very well explained in the book "Lecture Notes on Functional Analysis: With Applications to Linear Partial Differential Equations" by A. Bressan.<|endoftext|> -TITLE: Direct sum $\mathbb{R} \oplus \mathbb{R}$ - isn't intersection non-zero? -QUESTION [7 upvotes]: I am just starting to learn about direct sums and every definition I have read about direct sums says that the intersection of subspaces must be zero. So, how can $$\mathbb{R} \oplus \mathbb{R}$$ be a direct sum when they are identical? Thanks in advance! - -REPLY [10 votes]: The direct sum depends on context. There are two notions of direct sum: the inner direct sum and the outer direct sum. -With the inner direct sum, we have some large vector space and two subspaces. 
We must assume that the intersection of the two subspaces is zero in order to form the inner direct sum. -With the outer direct sum, we take two vector spaces that have nothing to do with each other and slam them together. I.e. you get a vector in the outer direct sum by appending vectors from your two vector spaces. -It turns out that these two notions are isomorphic. That is, if you take the inner direct sum of two subspaces in a big vector space, that is isomorphic to the vector space obtained by taking the outer direct sum of those subspaces as vector spaces in their own right. -In your example, we are viewing the two copies of $\mathbb{R}$ as distinct vector spaces that have nothing to do with each other and then taking the outer direct sum of them. -To think about it in terms of the inner direct sum, think of one of the copies of $\mathbb{R}$ as the x-axis and the other copy of $\mathbb{R}$ as the y-axis. Then their intersection is zero, and taking the inner direct sum yields $\mathbb{R}^2$.<|endoftext|> -TITLE: Axiom of choice is equivalent to every relation includes a function with the same domain -QUESTION [7 upvotes]: The axiom of choice asserts that for any set $X$ there exists a function -$f : (2^X − \{\emptyset\}) \rightarrow X$ such that for any nonempty $A \subseteq X$, $f(A) \in A$. Show that this is equivalent to the assertion that every relation includes a function with the same domain. -I am given this problem in my assignment. My approach is the following. -I have a set $X$. Then it has a choice function $g : \mathcal{P}(X) \setminus \left\{\emptyset\right\} \rightarrow X$. Suppose $X$ has a relation $R \subseteq X \times X$, with $dom(R) \subseteq X$. Then the required function $f : dom(R) \rightarrow X$ is -$$f(x) = g(\{y\ |\ (x, y) \in R\})$$ -Conversely, suppose $X \supseteq Y$ is a non-empty subset. Let $$R_Y = \{(x, y)\ |\ y \in Y, x \in Y, (p, q) \in R_Y \Rightarrow p = x\}$$ -Then there is a function $f_Y : dom(R_Y) \rightarrow X$. But $dom(R_Y)$ is a singleton from the definition of $R_Y$. Then $im(f_Y)$ is also a singleton. We define our choice function as $f = \{(Y, y)\ |\ y \in im(f_Y)\}$ -Are my arguments correct? - -Edit.1: What I tried to do for the converse is: for any subset $Y$, I construct a relation $Y_1 \times Y$ where $Y \supseteq Y_1 = \{y\}$. Then there is a function $f_Y : Y_1 \rightarrow Y$. Then if $f$ is my choice function I define $f(Y) = f_Y(y)$. -Maybe I should define $R_Y$ as -$$R_Y = \{(x, y)\ |\ y \in Y; x \in Y; (p, q), (p', q) \in R_Y \Rightarrow p = p'\}$$ -I understand there may be multiple $R_Y$'s. But can't I define a singleton set as $\{s\in S\ |\ a, b \in S \Rightarrow a = b\}$? This can be $\{s\}$ $\forall s \in S$. But isn't it a valid specification? - -Edit.2: This is the final solution, with help from @Asaf. - -If we assume the axiom of choice then for any $X$ there is a choice function $f : \mathcal{P}(X) \setminus \left\{\emptyset\right\} \rightarrow X$ such that $f(A) \in A$. -Suppose $R$ is a relation between $P$ and $Q$. Then we have a choice function $f_Q$ for $Q$. -Let $f_R(p) = f_Q(\{q\ |\ pRq\})$. The function $f_R : P \rightarrow Q$ and $dom(f_R) = dom(R)$. So this is our required function. - -Conversely, if every relation includes a function with the same domain: let $R$ be a relation between $\mathcal{P}(X) \setminus \left\{\emptyset\right\}$ and $X$ such that $aRb \Rightarrow b \in a$. Now $R$ includes a function $f : \mathcal{P}(X) \setminus \left\{\emptyset\right\} \rightarrow X$. Then $xRf(x) \Rightarrow f(x) \in x$. 
So $f$ is the choice function for $X$. - -REPLY [6 votes]: This depends on your formulation of the axiom of choice, since many different places formulate the axiom of choice differently. But it seems that you're using the formulation: "For every $X$, there is a choice function from $\mathcal P(X)\setminus\{\varnothing\}$." -Your first proof is correct. The second is not quite correct. First of all the definition of $R_Y$ is entirely unclear. But if I did understand it correctly, then you had already chosen $x\in X$ for defining $R_Y$. But which one? Presumably, there are many. -Instead you want to use the assumption. The assumption is that a relation can be reduced to a function with the same domain. Why not make that function your choice function? So what would the domain be? $\mathcal P(X)\setminus\{\varnothing\}$. Now come up with a relation with that domain such that any function you can extract from it is a choice function.<|endoftext|> -TITLE: Rudin's RCA Q3.4 -QUESTION [5 upvotes]: I'm trying to solve the following question from Rudin's Real & Complex Analysis (Chapter 3, question 4): - -Suppose $f$ is a complex measurable function on $X$, $\mu$ is a positive measure on $X$, and - $$\varphi(p) ~=~ \int_X |f|^p \; d\mu ~=~ \|f\|_p^p,~~~~~~~~~~(0 < p < \infty).$$ - Let $E := \big\{ p :~ \varphi(p) < \infty\big\}$. Assume $\|f\|_\infty > 0$. -(a) If $r < p < s$, $r \in E$, and $s \in E$, prove that $p \in E$. -(b) Prove that $\log(\varphi)$ is convex in the interior of $E$ and that $\varphi$ is continuous on $E$. -(c) By (a), $E$ is connected. Is $E$ necessarily open? Closed? Can $E$ consist of a single point? Can $E$ be any connected subset of - $(0,\infty)$? -(d) If $r < p < s$, prove that $\|f\|_p \leq \max\big( \|f\|_r, \|f\|_s\big)$. Show that this implies the inclusion - $$ \mathcal{L}_r(\mu) \cap \mathcal{L}_s(\mu) ~\subseteq~\mathcal{L}_p(\mu).$$ -(e) Assume that $\|f\|_r < \infty$ for some $r < \infty$ and prove that $$ \|f\|_p \xrightarrow[p \rightarrow \infty]{}\|f\|_\infty.$$ - -I got a solution to (a), (d) and (e). While typing my question, MSE suggested that I look at $a\mapsto \log\left(\lVert f\lVert_{1/a}\right)$ is a convex map, which seems to be related to (b). However I'm clueless about (c). Where should I start? -Edit -I believe the idea from $a\mapsto \log\left(\lVert f\lVert_{1/a}\right)$ is a convex map can be applied to get a proof that -$p \mapsto \log\|f\|_{\frac{1}{p}}$ is continuous on the interior of $E$. But this is not quite what we are asked to demonstrate... I'm puzzled. -Second Edit -Based on zhw's answer, it appears that $E$ is not necessarily open, nor necessarily closed, and that it can be a singleton. The question whether or not $E$ can be any connected subset of $(0,\infty)$ remains. But I think I can come up with a proof that it can. - -REPLY [2 votes]: This is an answer for the continuity of $\varphi.$ -WLOG $f\ge 0.$ Let $A=\{f\le 1\},$ $B= \{f> 1\}.$ Suppose $[p,q]\subset E.$ Let $p_n\to p$ within $[p,q].$ Clearly $f^{p_n}\to f^p$ pointwise everywhere. 
-Now -$$\tag 1\int_X f^{p_n} = \int_X f^{p_n}\chi_{A} + \int_X f^{p_n}\chi_{B}.$$ -Observe $f^{p_n}\chi_{A} \le f^{p}\chi_{A}$ and $f^{p_n}\chi_{B} \le f^{q}\chi_{B}.$ Since both $f^{p}\chi_{A}, f^{q}\chi_{B} \in L^1(X),$ the DCT implies the right side of $(1)$ converges to -$$\int_X f^{p}\chi_{A} + \int_X f^{p}\chi_{B} = \int_X f^{p}.$$ -This implies $\varphi$ is continuous from the right at $p.$ -Similarly, $\varphi$ is continuous from the left at $q.$ It follows that $\varphi$ is continuous on $E.$<|endoftext|> -TITLE: How does the determinant link to the cross product -QUESTION [9 upvotes]: For a $2\times 2$ matrix $$\begin{pmatrix}a&b\\c&d \end{pmatrix} -$$ -The determinant is given by $ad-bc$. And the cross product of $$\begin{pmatrix} a\\b\\0\end{pmatrix}\times \begin{pmatrix} c\\d\\0\end{pmatrix} =\begin{pmatrix} 0\\0\\ad-bc\end{pmatrix}$$ -We can also note that $|a\times b|=|a||b|\sin\theta$ where $\theta$ is the angle between $a$ and $b$. Hence is there any way to relate the determinant to the equation with sine? I recently saw that $$\text{Re}(a)\text{Im}(b)-\text{Im}(a)\text{Re}(b)=|ab|\sin{\text{arg}(a/b)}$$ -How would one verify/prove this? - -REPLY [10 votes]: You can actually define the cross product of two vectors $\mathbf{a}, \mathbf{b} \in \mathbb{R}^3$ to be the unique vector $\mathbf{a} \times \mathbf{b} \in \mathbb{R}^3$ such that -$$ -\forall \mathbf{c} \in \mathbb{R}^3, \quad (\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c} = \det(\mathbf{a}\,\vert\,\mathbf{b}\,\vert\,\mathbf{c}), -$$ -where $(\mathbf{a}\,\vert\,\mathbf{b}\,\vert\,\mathbf{c})$ denotes the $3 \times 3$ matrix whose columns are $\mathbf{a},\mathbf{b},\mathbf{c}$ in that order. In particular, you can recover $\mathbf{a} \times \mathbf{b}$ as -$$ - \mathbf{a} \times \mathbf{b} = \det(\mathbf{a}\,\vert\,\mathbf{b}\,\vert\,\mathbf{i})\mathbf{i} + \det(\mathbf{a}\,\vert\,\mathbf{b}\,\vert\,\mathbf{j})\mathbf{j} + \det(\mathbf{a}\,\vert\,\mathbf{b}\,\vert\,\mathbf{k})\mathbf{k}, -$$ -which can be massaged using determinant identities to give you the usual ghastly explicit formula; in the special case that $\mathbf{a}$ and $\mathbf{b}$ lie in the $xy$-plane, you immediately recover your observation above. Moreover, it immediately follows that $\mathbf{a} \times \mathbf{b}$ is perpendicular to $\mathbf{a}$, $\mathbf{b}$, and any linear combination of $\mathbf{a}$ and $\mathbf{b}$, since by basic determinant identities, including the fact that a square matrix with repeated columns has a vanishing determinant, -$$ - (\mathbf{a} \times \mathbf{b}) \cdot (s\mathbf{a}+t\mathbf{b}) = \det(\mathbf{a}\,\vert\,\mathbf{b}\,\vert\,s\mathbf{a}+t\mathbf{b}) = s\det(\mathbf{a}\,\vert\,\mathbf{b}\,\vert\,\mathbf{a}) + t\det(\mathbf{a}\,\vert\,\mathbf{b}\,\vert\,\mathbf{b}) = 0. -$$ -Anyhow, the point of all this is that if you're comfortable with basic linear algebra, especially with how the determinant behaves under elementary row and column operations, then you can derive the identity -$$ - \|\mathbf{a} \times \mathbf{b}\| = \|\mathbf{a}\|\,\|\mathbf{b}\|\sin\theta_{\mathbf{a},\mathbf{b}} -$$ -from the identity -$$ -\forall \mathbf{c} \in \mathbb{R}^3, \quad (\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c} = \det(\mathbf{a}\,\vert\,\mathbf{b}\,\vert\,\mathbf{c}), -$$ -without too much trouble. -For simplicity, let's assume that $\mathbf{a} \neq \mathbf{0}$ and $\mathbf{b} \neq \mathbf{0}$; otherwise the claim is trivial. 
Actually, let's show that for any $\mathbf{c} \in \operatorname{Span}\{\mathbf{a},\mathbf{b}\}^\perp$, i.e., for any $\mathbf{c}$ perpendicular to both $\mathbf{a}$ and $\mathbf{b}$, that -$$ - \left\lvert(\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c}\right\rvert = \left(\|\mathbf{a}\|\,\|\mathbf{b}\|\sin(\theta_{\mathbf{a},\mathbf{b}})\right)\|\mathbf{c}\|. -$$ -If $\mathbf{a} \times \mathbf{b} \neq \mathbf{0}$, then we can plug in $\mathbf{c} = \mathbf{a} \times \mathbf{b}$ to get -$$ - \|\mathbf{a} \times \mathbf{b}\|^2 = \left\lvert(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{a} \times \mathbf{b})\right\rvert = \left(\|\mathbf{a}\|\,\|\mathbf{b}\|\sin(\theta_{\mathbf{a},\mathbf{b}})\right)\|\mathbf{a}\times\mathbf{b}\|, -$$ -and hence -$$ - \|\mathbf{a} \times \mathbf{b}\| = \|\mathbf{a}\|\,\|\mathbf{b}\|\sin\theta_{\mathbf{a},\mathbf{b}} -$$ -If $\mathbf{a} \times \mathbf{b} = \mathbf{0}$, then since $\operatorname{Span}\{\mathbf{a},\mathbf{b}\}^\perp$ is at least $1$-dimensional, take any non-zero vector $\mathbf{c} \in \operatorname{Span}\{\mathbf{a},\mathbf{b}\}^\perp$ to get -$$ - 0 = \left\lvert(\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c}\right\rvert = \left(\|\mathbf{a}\|\,\|\mathbf{b}\|\sin(\theta_{\mathbf{a},\mathbf{b}})\right)\|\mathbf{c}\|, -$$ -which yields $\sin(\theta_{\mathbf{a},\mathbf{b}}) = 0$ and hence -$$ - \|\mathbf{a} \times \mathbf{b}\| = 0 = \|\mathbf{a}\|\,\|\mathbf{b}\|\sin\theta_{\mathbf{a},\mathbf{b}}. -$$ -It therefore remains to prove the displayed identity. First, by the defining identity for cross products, -$$ - (\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c} = \det(\mathbf{a}\,\vert\,\mathbf{b}\,\vert\,\mathbf{c}). -$$ -Next, since determinants are preserved under column additions (e.g., $\det(\mathbf{a}\,\vert\,\mathbf{b}\,\vert\,\mathbf{c}) = \det(\mathbf{a}\,\vert\,\mathbf{b}+s\mathbf{a}\,\vert\,\mathbf{c})$), we have that -$$ - \det(\mathbf{a}\,\vert\,\mathbf{b}\,\vert\,\mathbf{c}) = \det(\mathbf{a}\,\vert\,\mathbf{b}^\prime\,\vert\,\mathbf{c}), -$$ -where -$$ - \mathbf{b}^\prime := \mathbf{b} - \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\|^2}\mathbf{a} -$$ -is the orthogonal projection of $\mathbf{b}$ onto $\operatorname{Span}\{\mathbf{a}\}^\perp$, i.e., onto the plane through the origin with normal vector $\mathbf{a}$; geometrically, if you believe that $\det(\mathbf{a}\,\vert\,\mathbf{b}\,\vert\,\mathbf{c})$ is the signed volume of the parallelepiped spanned by $\mathbf{a},\mathbf{b},\mathbf{c}$, then we're essentially saying that the parallelepiped spanned by $\mathbf{a},\mathbf{b},\mathbf{c}$ has the same volume as the parallelepiped spanned by $\mathbf{a},\mathbf{b}^\prime,\mathbf{c}$ by Cavalieri's principle. Observe, in particular, that $\mathbf{a}$, $\mathbf{b}^\prime$, and $\mathbf{c}$ are pairwise orthogonal by construction. -Next, since $\mathbf{a}$, $\mathbf{b}^\prime$, and $\mathbf{c}$ are pairwise orthogonal, -$$ - \lvert\det(\mathbf{a}\,\vert\,\mathbf{b}^\prime\,\vert\,\mathbf{c})\rvert = \sqrt{\det(\mathbf{a}\,\vert\,\mathbf{b}^\prime\,\vert\,\mathbf{c})^2}\\ = \sqrt{\det\left((\mathbf{a}\,\vert\,\mathbf{b}^\prime\,\vert\,\mathbf{c})^T\right) \det(\mathbf{a}\,\vert\,\mathbf{b}^\prime\,\vert\,\mathbf{c})}\\ -= \sqrt{\det\left((\mathbf{a}\,\vert\,\mathbf{b}^\prime\,\vert\,\mathbf{c})^T(\mathbf{a}\,\vert\,\mathbf{b}^\prime\,\vert\,\mathbf{c}) \right)}\\ -= \begin{vmatrix}\|\mathbf{a}\|^2&0&0\\0&\|\mathbf{b}^\prime\|^2&0\\0&0&\|\mathbf{c}\|^2\end{vmatrix}^{1/2}\\ -= \|\mathbf{a}\|\,\|\mathbf{b}^\prime\|\,\|\mathbf{c}\|. -$$ -At last, since the angle $\theta_{\mathbf{a},\mathbf{b}} \in [0,\pi]$ between the non-zero vectors $\mathbf{a},\mathbf{b}$ is given by the formula -$$ - \cos \theta_{\mathbf{a},\mathbf{b}} = \frac{\mathbf{a}\cdot\mathbf{b}}{\|\mathbf{a}\|\,\|\mathbf{b}\|} -$$ -it follows that -$$ -\|\mathbf{b}^\prime\|^2 = \left(\mathbf{b} - \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\|^2}\mathbf{a}\right) \cdot \left(\mathbf{b} - \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\|^2}\mathbf{a}\right)\\ -=\|\mathbf{b}\|^2 - 2 \mathbf{b} \cdot \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\|^2}\mathbf{a} + \left\|\frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\|^2}\mathbf{a}\right\|^2\\ -=\|\mathbf{b}\|^2 - \frac{(\mathbf{a} \cdot \mathbf{b})^2}{\|\mathbf{a}\|^2}\\ -=\|\mathbf{b}\|^2\left(1 - \left(\frac{\mathbf{a}\cdot\mathbf{b}}{\|\mathbf{a}\|\,\|\mathbf{b}\|}\right)^2\right)\\ -=\|\mathbf{b}\|^2(1-\cos^2\theta_{\mathbf{a},\mathbf{b}})\\ -=\|\mathbf{b}\|^2\sin^2\theta_{\mathbf{a},\mathbf{b}}, -$$ -and hence that -$$ - \left\lvert(\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c}\right\rvert = \|\mathbf{a}\|\,\|\mathbf{b}^\prime\|\,\|\mathbf{c}\| = \|\mathbf{a}\|\,\|\mathbf{b}\|\sin\left(\theta_{\mathbf{a},\mathbf{b}}\right)\|\mathbf{c}\|, -$$ -as was claimed.
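-As for the complex-number identity quoted in the question, it can be checked directly (my own short computation; note that it comes out with $\arg(b/a)$, so the quoted form with $\arg(a/b)$ appears to differ by a sign). Writing $a=|a|e^{i\alpha}$ and $b=|b|e^{i\beta}$,
-$$\operatorname{Re}(a)\operatorname{Im}(b)-\operatorname{Im}(a)\operatorname{Re}(b)=|a||b|(\cos\alpha\sin\beta-\sin\alpha\cos\beta)=|ab|\sin(\beta-\alpha)=|ab|\sin\arg(b/a),$$
-and the left side is precisely the $2\times2$ determinant of the matrix with columns $(\operatorname{Re}a,\operatorname{Im}a)$ and $(\operatorname{Re}b,\operatorname{Im}b)$, tying the complex identity back to the determinant picture above.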
<|endoftext|> -TITLE: Does an eigenspace of a matrix depend continuously on its components? -QUESTION [8 upvotes]: Let $M(x)$ be a diagonalisable $n \times n$ complex matrix whose components are continuous functions of $x$ and suppose that, for all $x$, $M$ has eigenvalue $0$ with multiplicity $m < n$ (independent of $x$). Is it possible to choose a basis for the $0$-eigenspace of $M$ whose components are continuous functions of $x$? - -REPLY [5 votes]: This is even true without the assumption that M is diagonalizable, the only important point is that $M(x)$ has constant rank. Your claim is a simple instance of a standard result on vector bundles, but I'll describe a direct argument, assuming that $x\in\mathbb R$. -First you have to observe that it suffices to do this locally. Suppose you have given two continuous families of vectors $v_1(x),\dots,v_m(x)$ for $x\in (a,b)$ and $w_1(x),\dots,w_m(x)$ for $x\in (c,d)$ with $a<c<b<d$, each family forming a basis of the kernel of $M(x)$ on its interval. On the overlap $(c,b)$ each $w_i(x)$ can be written as a linear combination of the $v_j(x)$ with continuous coefficients, and using this one can glue the two families into a single continuous family on $(a,d)$. So it suffices to construct such a family locally around each point $x_0$. Since $M(x_0)$ has rank $n-m$, after permuting coordinates we may assume that the matrix $\tilde M(x)$ obtained from $M(x)$ has block form $\begin{pmatrix} A(x) & B(x) \\ C(x) & D(x)\end{pmatrix}$, where $D(x_0)$ is an invertible $(n-m)\times(n-m)$ matrix. Then there is $\epsilon>0$ such that $\det(D(x))\neq 0$ for $|x-x_0|<\epsilon$. Moreover, by Cramer's rule, the matrix entries of $D(x)^{-1}$ depend continuously on $x$. Denote by $I$ the identity matrix and compute the product of $\tilde M(x)$ with the invertible block matrix $\begin{pmatrix} I & 0 \\ -D(x)^{-1}C(x) & D(x)^{-1}\end{pmatrix}$, whose entries depend continuously on $x$. The result is $\begin{pmatrix} A(x)-B(x)D(x)^{-1}C(x) & B(x)D(x)^{-1} \\ 0 & I \end{pmatrix}$. By assumption, $\tilde M(x)$ has rank $n-m$ for all $x$, so the same must be true for the latter matrix. Since its last $n-m$ rows are evidently linearly independent, this is only possible if all rows are linear combinations of the last $n-m$ rows. But this shows that $ A(x)-B(x)D(x)^{-1}C(x)=0$ for all $x$, and hence the first $m$ columns of $\begin{pmatrix} I & 0 \\ -D(x)^{-1}C(x) & D(x)^{-1}\end{pmatrix}$ form the required basis for the kernel of $\tilde M(x)$.
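-To see the construction at a single parameter value in code, here is a minimal NumPy sketch (function name mine; it assumes the coordinates have already been permuted so that the trailing $(n-m)\times(n-m)$ block is invertible):
-
-    import numpy as np
-
-    def kernel_basis(M, m):
-        # columns span ker(M) when rank(M) = n - m and the lower-right block D is invertible
-        C, D = M[m:, :m], M[m:, m:]
-        X = -np.linalg.solve(D, C)        # equals -D^{-1} C, continuous in the entries of M
-        return np.vstack([np.eye(m), X])  # the columns are (e_i, -D^{-1} C e_i)
-
-Since $A-BD^{-1}C=0$ under the rank assumption, M @ kernel_basis(M, m) vanishes, and the basis varies continuously because matrix inversion does.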
<|endoftext|> -TITLE: Combinations of 6 students among 20, at least one male -QUESTION [5 votes]: I have the following problem: A class has 20 students, 16 females, 4 males. Find all possibilities of choosing 6 students so that at least one is male. -I did the following: pick one male from the 4, you have 4 possibilities, then you are left with 19 students, pick 5, so I get $4 \cdot {19 \choose 5}$ possibilities. -However, one can notice that if you want no one to be male, then the possibilities are ${16 \choose 6}$, so one could argue that the solution is instead ${20 \choose 6} - {16 \choose 6}$. -So, which solution is correct, and why? -$$4 \cdot {{19}\choose{5}}$$ -or -$${{20}\choose{6}} - {{16}\choose{6}}$$ - -REPLY [3 votes]: It's funny, I made the exact same mistake recently. The problem with the first approach is that it contains double counting. -Say you start out by picking male $A$ among the four males $A, B, C, D$, and then you choose $5$ people among the remaining $19$. Maybe one of these five people is male $B$, and the rest are $x, y, z, w$. -But you could also originally have picked male $B$, and then male $A$ could have been among the $5$ people that you then choose out of $19$, where the rest are also $x, y, z, w$. It's the same choice as before. -The second solution is correct.
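-A quick brute-force check of the two counts (labels are my own; math.comb needs Python 3.8+):
-
-    from itertools import combinations
-    from math import comb
-
-    students = ['M1', 'M2', 'M3', 'M4'] + [f'F{i}' for i in range(16)]
-    brute = sum(1 for g in combinations(students, 6)
-                if any(s.startswith('M') for s in g))
-    print(brute)                      # 30752
-    print(comb(20, 6) - comb(16, 6))  # 30752: the complement count agrees
-    print(4 * comb(19, 5))            # 46512: overcounts groups containing several males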
<|endoftext|> -TITLE: Elementary proof for: If x is a quadratic residue mod p, then it is a quadratic residue mod p^k -QUESTION [7 upvotes]: In an article on solving quadratic congruences, it is shown how to use Hensel's lemma to iteratively construct solutions to $x^2 \equiv a \pmod{p^k}$ from the solutions to $x^2 \equiv a \pmod{p}$. The case where $p=2$ is treated separately. -While the construction is elegant, it is a tad lengthy and its reliance on Hensel's lemma makes it a bit far from elementary number theory. -If we are only concerned with an existential proof (rather than a constructive one), can we simplify the proof? That is, is it possible to succinctly prove the following theorem without resort to Hensel's lemma? - -For any prime $p$ and any $k \in \mathbb{N}$, if $x$ is a quadratic residue mod $p$, then it is a quadratic residue mod $p^k$. - -REPLY [2 votes]: I have long appreciated this result in LeVeque's Fundamentals of Number Theory, Theorem 4.4: -Suppose $p$ is a prime and $a$ is relatively prime to $p$. Let $t_n$ be the order of $a$ modulo $p^n$ and assume that $p^z$ exactly divides $a^{t_1} - 1$. Then if $p > 2$ or $z > 1$, -$$ t_n = \begin{cases} t_1, \quad &\text{for $n \leq z$}\\\\ t_1 p^{n-z}, \quad &\text{for $n > z$.}\end{cases}$$ -This result can be used to prove several congruences mod a prime power that are otherwise messy and annoying to prove (for instance, see Other ways to deduce Cyclicity of the Units of certain groups?). -In your question, if $a$ is a quadratic residue modulo $p$, then $t_1$ divides $\frac{p-1}{2}$. Thus in both of the above cases, $t_n$ is a divisor of $p^{n-1} \cdot\frac{p-1}{2} = \phi(p^n)/2$, so $a$ is a quadratic residue modulo $p^n$.<|endoftext|> -TITLE: Number system with $e^x = 0$ for some $x$ -QUESTION [11 upvotes]: It is well known that $e^x \ne 0$ for all $x \in \mathbb{R}$ as well as $x \in \mathbb{C}$. Upon reading this article and doing a bit of research I have found that this also applies to the quaternions $\mathbb{H}$, the octonions $\mathbb{O}$ as well as the space of $m$ by $n$ matrices with real or complex entries. -My question is whether there is ANY number system at all for which $e^x = 0$ for some $x$, that is, $\log 0$ is defined and has a finite value. Preferably the example should be finite-dimensional and should not be constructed by arbitrarily assigning a value to $\log 0$, such as $\log 0 := 42.$ -Additionally, for the purposes of this question, none of the usual properties of arithmetic or the exponential function are assumed true, though I suppose this makes my question somewhat meaningless. -Edit: I am intrigued by Yuriy S's idea of defining $e^x = 0$ for all $x$. My question now is what is the most "well behaved" algebra we can come up with if $e^x$ is required to be identically zero? - -REPLY [5 votes]: The floating-point number system (for a given number of bits) is a finite subset of the extended real number line. It relaxes various algebraic identities so that it can remain closed under as many operations and inputs as possible. As you put it, the "usual properties of arithmetic or the exponential function" are not true in this system, although they are approximately true for the most part. -In this system, $-\infty$ is a number and $e^{-\infty}=0$. -References: -http://pubs.opengroup.org/onlinepubs/9699919799/functions/exp.html -http://en.cppreference.com/w/c/numeric/math/exp<|endoftext|> -TITLE: The centre of the earth -QUESTION [5 upvotes]: I'm a real beginner here (first post and first foray into math since high-school, trying to catch up), so I'm going to try my best to explain my problem in mathematical terms then follow up with an intuitive explanation. Thanks in advance! - -Maths: - -Given two points $A$ and $B$ on a sphere, where the coordinates, and the normals to the sphere at those points, are known, what is the centre of that sphere? - -I'm not sure how best to represent such an angle, it seems it would vary based on the problem/context; perhaps somebody could advise? Let's say I start off with a spherical coordinate without the radius. -Intuitive: - -If I am a person standing on a perfectly spherical planet, I can feel the direction of gravity. If I walk a known distance in a known direction, I can feel the pull of gravity pulling from a different absolute direction (assuming I possess an absolute orientation reference!). How can I calculate the centre of that planet and my position relative to it? - -Thanks very much. Please excuse my weak expression of my problem. - -REPLY [4 votes]: Your problem is: - -Let $S\subset\mathbb{R}^3$ be a sphere (remark: this will in fact work for an $(n-1)$-sphere in $\mathbb{R}^n$), and assume we are given two distinct points $x,y\in S$ not antipodal to each other and the normal vectors to the sphere at these points, $n_x,n_y$. What is the center (and radius) of $S$? - -The solution, as mentioned in the comments, is to take the straight lines defined by the normal vectors and intersect them. The lines can be parametrized by -$$\{x + tn_x\mid t\in\mathbb{R}\},\qquad\{y + sn_y\mid s\in\mathbb{R}\}.$$ -Their intersection is the solution of -$$x + tn_x = y + sn_y.$$ -This is an equation in the $3$ coordinates. Write it down and find $t$ (and $s$), then insert in the formula for the line through $x$ to find the center. -Remark: If the points are antipodal, then the lines coincide. In that case, the center is the point given by $c=\frac{x+y}{2}$.
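-Numerically, that intersection is one least-squares call; a minimal NumPy sketch (function name mine; inputs are the two surface points and their normals as length-3 arrays):
-
-    import numpy as np
-
-    def sphere_center(x, nx, y, ny):
-        # solve x + t*nx = y + s*ny: three equations in the two unknowns t, s
-        A = np.column_stack([nx, -ny])
-        (t, s), *_ = np.linalg.lstsq(A, y - x, rcond=None)
-        return x + t * nx
-
-With noisy gravity measurements the system is only approximately consistent, which is exactly why least squares rather than an exact solve is the natural tool; the radius is then np.linalg.norm(x - sphere_center(x, nx, y, ny)).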
<|endoftext|> -TITLE: Help with proving a statement based on Riemann sums? -QUESTION [14 upvotes]: Suppose we have the original Riemann sum with no removed partitions, where $f(x)$ is continuous and Riemann integrable on the closed interval $[a,b]$. -$$\lim_{n\to\infty}\sum_{i=1}^{n}f\left(a+\left(\frac{b-a}{n}\right)i\right)\left(\frac{b-a}{n}\right)$$ -If we remove $s$ partitions for every $d$ partitions in the interval $[a,b]$ and add the remaining partitions as $n\to\infty$, the resulting sum is -$$\lim_{n\to\infty}\sum_{i=1}^{{\left(d-s\right)}\lfloor\frac{n}{d}\rfloor+\left(n\text{mod}{d}\right)}f\left(a+\left(\frac{b-a}{n}\right)s(i-g_1)+g_2\right)\left(\frac{b-a}{n}\right)$$ -where $s(i)$ is a piece-wise linear vector that skips $s$ out of every $d$ partitions. For example, if we skip one partition out of every four partitions, then instead of the vector $i$ whose outputs are ($1$,$2$,$3$,$4$,$5$...), we have $s(1)=1$, $s(2)=3$, $s(3)=4$, $s(4)=5$, $s(5)=7$, $s(6)=8$, ... - -So in my theorem I'm trying to show that -$$\lim_{n\to\infty}\sum_{i=1}^{{\left(d-s\right)}\lfloor\frac{n}{d}\rfloor+\left(n\text{mod}{d}\right)}f\left(a+\left(\frac{b-a}{n}\right)s(i-g_1)+g_2\right)\left(\frac{b-a}{n}\right)=$$ -$$\frac{d-s}{d}\lim_{n\to\infty}\sum_{i=1}^{n}f\left(a+\left(\frac{b-a}{n}\right)i\right)\left(\frac{b-a}{n}\right)=\frac{d-s}{d}\int_{a}^{b}f(x)\,dx$$ - -I know that as all the partitions of the original sum ($\lim_{n\to\infty}\sum_{i=1}^{n}f\left(a+\left(\frac{b-a}{n}\right)i\right)\left(\frac{b-a}{n}\right)$) come closer to being equal, the sum over the fraction of remaining partitions will be the same as that fraction of the original Riemann sum. -To prove that the partitions of the original Riemann sum come closer to being equal, I found the following. -$$\lim_{n\to\infty}f\left(a+\left(\frac{b-a}{n}\right)\right)\left(\frac{b-a}{n}\right)<\frac{\lim_{n\to\infty}\sum_{i=1}^{n}f\left(a+\left(\frac{b-a}{n}\right)i\right)\left(\frac{b-a}{n}\right)}{n}<\lim_{n\to\infty}f(b)\left(\frac{b-a}{n}\right)$$ -And -$$\lim_{n\to\infty}f(b)\left(\frac{b-a}{n}\right)-\lim_{n\to\infty}f\left(a+\left(\frac{b-a}{n}\right)\right)\left(\frac{b-a}{n}\right)=0$$ -Am I on the right track with proving this? If not, can you expand on a better way of proving it? - -EDIT: -I did post my incomplete answer, but it's cluttered. Is there a simpler (and more rigorous) proof that can be done? -SECOND EDIT: -The person who answered my question deleted his post for unknown reasons. He has sent no reply as to why he did so. I posted my version of his answer below my incomplete answer. I am waiting for another answer that expands on it or gives a better proof. -Third edit: I deleted my original proof. Christian Blatter's answer remains and there is a new answer from another user, but I'm not sure if it's correct. - -REPLY [2 votes]: Let's simplify and assume $f$ is Riemann integrable on $[0,1].$ Fix $d,s \in \mathbb N$, $0 < s < d$.<|endoftext|> -TITLE: Counterexample Question -QUESTION [5 votes]: Let $f:X\rightarrow Y$ be a morphism of varieties. If $f(X)$ is dense in $Y$, then $\tilde{f}:\Gamma(Y)\rightarrow \Gamma(X)$ is injective, where $\tilde{f}$ is the homomorphism induced by $f$. In fact, if $X$ and $Y$ are affine, then we have an if and only if. Can we relax the prerequisites a bit and have $\tilde{f}$ injective $\Rightarrow$ $f(X)$ dense be true even if $Y$ is not affine? I'm inclined to say no, since I could only prove it using the fact that $Y$ is affine. But this is not a proof that it's impossible. Are there any nice counterexamples out there? - -REPLY [5 votes]: Take $X$ to be an embedding of a closed point into $Y=\mathbb P^1$. Then $\Gamma(Y)\to\Gamma(X)$ is an isomorphism.
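-In case the example is too terse, here is the point spelled out (my own gloss): the only globally regular functions on $\mathbb P^1$ are the constants, so $\Gamma(\mathbb P^1)=k$, and restricting constants to a $k$-rational closed point is the identity map $k\to k$, which is certainly injective, while the image of $X$ is a single closed point, which is as far from dense in $\mathbb P^1$ as possible.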
<|endoftext|> -TITLE: Why square a constant when determining variance of a random variable? -QUESTION [12 upvotes]: If I want to calculate the sample variance such as below: -$$\operatorname{Var}(\bar{X}) = \operatorname{Var}\left(\frac{1}{n}\sum_{i=1}^{n}X_i\right)$$ -Which becomes: $\left(\frac{1}{n}\right)^2 \cdot n(\sigma^2)= \frac{\sigma^2}{n} $... -My question is WHY does it become $$\left(\frac{1}{n}\right)^2?$$ -In other words, why does the $(1/n)$ inside the variance become $(1/n)^2$? -I've read that this is because: - -When a random variable is multiplied by a constant, its variance gets multiplied by the square of the constant. - -Again, though, I want to know why? -I've looked in multiple sources but they all seem to gloss over this point. I want to visually see why this is done. -Could someone please demonstrate why the $1/n$ is squared using my example? - -Update: -As @symplectomorphic points out in a comment under their answer, my confusion was the result of not realizing there was a difference between the variance of a set of data and the variance of a random variable. - -See @symplectomorphic's other comment for an explanation of the difference. - -@symplectomorphic's answer provides a good conceptual walkthrough, while user @Tryss's answer provides the correct mathematical explanation. Thanks to both of you! - -REPLY [12 votes]: Tryss's answer is correct. But you seem to need a more elementary illustration. Here it is, at least for the variance of sample data. (Your question is really about the variance of a random variable, but the point is the same.) -Take the two numbers $1$ and $3$. The mean of this set of data is 2. The variance is the average squared deviation from the mean. The deviations from the mean are $-1$ and $1$, so the squared deviations are $1$ and $1$, so the average squared deviation is $1$. Hence the variance of this set of data is 1. -Now look what happens when we multiply the dataset by 4. Our two numbers become 4 and 12. The mean is now 8. (This illustrates that when you multiply by a constant, the mean gets multiplied by that constant.) The deviations from the mean are $-4$ and $4$ (the deviations also get multiplied by the constant). Therefore the squared deviations are 16 and 16, so the averaged squared deviation is $16$. Hence the variance of this new set of data is 16. -Moral: when we multiplied our data by 4, the variance got multiplied by 16. This is totally unsurprising, because the variance is the average squared deviation. When you multiply your data by a constant, the deviations also get multiplied by that constant, so the squared deviations get multiplied by the square of that constant.
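-For the record, the one-line derivation behind the quoted rule (for a random variable $X$ with mean $\mu$ and any constant $c$):
-$$\operatorname{Var}(cX)=E\big[(cX-c\mu)^2\big]=E\big[c^2(X-\mu)^2\big]=c^2E\big[(X-\mu)^2\big]=c^2\operatorname{Var}(X),$$
-and for independent $X_1,\dots,X_n$ each with variance $\sigma^2$,
-$$\operatorname{Var}\!\left(\frac1n\sum_{i=1}^nX_i\right)=\frac1{n^2}\sum_{i=1}^n\operatorname{Var}(X_i)=\frac{n\sigma^2}{n^2}=\frac{\sigma^2}{n}.$$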
-Claim (2) holds because $\infty=\sum_{n=1}^{\infty}\frac{1}{n}=\prod_p \frac{1}{1-p^{-1}}=\frac{1}{\frac{(p_1-1)(p_2-1)\cdots}{p_1p_2\cdots}}$;
-see Divergence of the sum of the reciprocals of the primes.
-(3) If $a\mid b$ then $2^a-1\mid 2^b-1$.
-(4) By Fermat's little theorem, $p_i\mid 2^{p_i-1}-1$ for all $1\leq i \leq k$.
-
-If we set $n=(p_1-1)(p_2-1)\cdots(p_k-1)$ then by (3) and (4) we can conclude $p_i\mid 2^n-1$.
-Thus by (1) $$ \frac{\varphi(2^n-1)}{2^n-1}<\varepsilon$$
-and by (2) $$\liminf_n \frac{ \varphi(2^n-1)}{2^n}=0$$<|endoftext|>
-TITLE: Is there a name for those elements $x$ of a commutative ring $R$ such that $Rx$ is maximal among all proper ideals?
-QUESTION [5 votes]: Ever since learning basic ring theory, I've always felt kind of confused about the fact that:
-
-maximal ideals are prime (because every field is an integral domain), but
-irreducible elements needn't be prime.
-
-Recently, the reason for this suddenly snapped into focus: the problem, in short, is that irreducibility can only "see" principal ideals, and therefore isn't usually strong enough to imply full-blown primeness, except in a principal ideal ring. Indeed:
-
-Proposition. Let $R$ denote a commutative ring. Then for all $x \in R,$ the following are equivalent.
-
-$x$ is irreducible.
-$Rx$ is maximal among all proper principal ideals.
-
-Corollary. Let $R$ denote a principal ideal commutative ring. Then for all $x \in R,$ the following are equivalent.
-
-$x$ is irreducible.
-$Rx$ is maximal among all proper ideals.
-
-
-Motivated by this realization, I was just wondering:
-
-Question. Is there a name for those elements $x$ of a commutative ring $R$ such that $Rx$ is maximal among all proper ideals?
-
-REPLY [6 votes]: These elements are called m-irreducible or i-atomic sometimes. In rings with zero-divisors, these are actually distinct from irreducible elements. I recommend the paper by D.D. Anderson and Valdez Leon called Factorization in Commutative Rings with Zero-divisors if you want to read more. In a domain which is not a field, $(0)$ is not maximal among principal ideals (obviously) despite being irreducible, so the version you state is not quite true even in domains, although it is for non-zero elements.
-The paper I suggest gives you some non-trivial examples of all the various relations between the notions of irreducible, as well as examples which show the reverse inclusions are not true.
-Edit: I figured I would add the example just in case people cannot access that paper. In $\mathbb{Z} \times \mathbb{Z}$, the element $(0,1)$ is prime and therefore irreducible, but not maximal among principal ideals since it is properly contained in $(2,1)$.<|endoftext|>
-TITLE: Let $x,y,z>0$ and $x+y+z=1$, then find the least value of ${{x}\over {2-x}}+{{y}\over {2-y}}+{{z}\over {2-z}}$
-QUESTION [6 votes]: Let $x,y,z>0$ and $x+y+z=1$, then find the least value of
-$${{x}\over {2-x}}+{{y}\over {2-y}}+{{z}\over {2-z}}$$
-I tried various ways of rearranging and using the AM–GM inequality, but I couldn't get it. I am not good at inequalities. Please help me.
-I wrote $x$ as $1-(y+z)$, and I took $x+y$ as $a$ and the others as $b$ and $c$. And I am trying.
-
-REPLY [3 votes]: For the sake of alternatives:
-It is easy to check that $$\frac{x}{2-x} \ge \frac{18x-1}{25} \quad\forall x < 2$$ (equivalent to $(3x-1)^2\ge 0$). Applying the above inequality for $y$ and $z$ also, then taking the sum of the three inequalities, we are done.<|endoftext|>
-TITLE: Inequality $abcd \leq 3$
-QUESTION [11 votes]: $a+b+c+d=6$
-and
-$a^2+b^2+c^2+d^2=12$.
-and $a,b,c,d$ are reals.
-Prove that $abcd \leq 3$ without Lagrange multipliers, complex numbers, or convexity help.
-Using the Cauchy–Schwarz inequality I found: $a,b,c,d \in [0,3]$.
-How can I solve this inequality?
-
-REPLY [2 votes]: Another way, given you have already discovered $a, b, c, d \in [0, 3]$. Note that the objective function $abcd$ and the constraints are symmetric, and hence we may assume WLOG $0 \le a \le b \le c \le d \le 3$, or equivalently, $a \in [0, b], \; b \in [a, c], \; c \in [b, d], \; d \in [c, 3]$. Further, the objective is linear in each variable and the domain is closed and convex. This means extrema can be attained only when each variable is at the boundary of its allowable interval.
-If $a=0$, we clearly have a minimum for $abcd$, so $a=b$ at the maximum. Similarly we must have $c\in \{b, d\}$ and $d \in \{c, 3\}$. Thus we have only two possibilities for the maximum, $(a, b, c, d) \in \{(p, p, q, q), (p, p, p, 3)\}$ for some $0< p \le q \le 3$.
-In the first case we have $p+q = 3, p^2+q^2=6$ and need to show $pq \le \sqrt3$, which follows from $2pq = (p+q)^2-(p^2+q^2) = 3 \le 2\sqrt3$, though the maximum is never reached in this case.
-In the second case we have $3p+3 = 6, 3p^2+9=12$ and need to show $p^3 \le 1$, which is easy as the system has only one solution, $p=1$, with equality/maximum attained.<|endoftext|>
-TITLE: Why study dimensions?
-QUESTION [5 votes]: I am quite new to the forum so please feel free to correct/give pointers if I am posting something in the wrong place.
-I have been perusing "Dimension Theory" by Witold Hurewicz & Henry Wallman and while doing so noticed I couldn't quite see the reason for 'studying/researching' dimension theory. As an undergraduate, when I was taught the concept of dimension in terms of 'linear independence' and 'spanning' of vectors, I never paid much heed to alternate/analytical formulations of dimension. Now, having come across, say, the Hausdorff, fractal, and Assouad dimensions, I am quite curious as to why mathematicians have focussed so much on this avenue.
-I would be very, very grateful for any pointers or links which can give me a little bit of 'general' motivation for studying dimensions, i.e. why study dimension theory. Specific examples from dynamical systems, metric geometry, number theory, set theory et cetera are very welcome too.
-Thank you, thanks a lot!
-P.S. The tags might be a little all over the place - couldn't quite create new ones for metric geometry and Assouad dimension.
-
-REPLY [3 votes]: "Topological dimension" as defined by Hurewicz and Wallman is a topological invariant which lets you prove that many pairs of spaces are not homeomorphic to each other.
-For example, why are $\mathbb{R}$ and $\mathbb{R}^2$ not homeomorphic?
-The pedestrian answer is: $\mathbb{R}$ is separated by removal of any point, whereas $\mathbb{R}^2$ is not.
-But you might want to generalize: Why are $\mathbb{R}^m$ and $\mathbb{R}^n$ not homeomorphic when $m \ne n$?
-To obtain a general answer, return to the special case and recast it in terms of dimension theory:
-
-$\mathbb{R}$ has dimension $1$ (because a point separates it, in fact every point locally separates, which is how dimension 1 is defined in some treatments; I believe the Hurewicz and Wallman definition is close to this but perhaps not exactly the same)
-but $\mathbb{R}^2$ does not have dimension $1$.
-and "dimension 1" is a topological invariant.
-
-In general, topological dimension is an invariant assigned to topological spaces whose value is either a non-negative integer or $\infty$. Homeomorphic spaces must have equal topological dimension. Using it, the proof that $\mathbb{R}^n$ is not homeomorphic to $\mathbb{R}^m$ when $n \ne m$ is that the dimension of $\mathbb{R}^n$ equals $n$ whereas the dimension of $\mathbb{R}^m$ equals $m$ (these must be verified, of course).
-You've mentioned a few other types of dimension in your question, and each of them is an important invariant in a different context. For example, fractal dimension, or "Hausdorff dimension" which I presume you mean, is a real-valued bi-Lipschitz invariant of metric spaces. Hausdorff dimension is what I would use to prove that the middle-thirds Cantor set is not bi-Lipschitz equivalent to the every-other-fifths Cantor set, because their Hausdorff dimensions are $\log(2)/\log(3)$ and $\log(3)/\log(5)$ respectively, and these numbers are not equal.<|endoftext|>
-TITLE: Infinitely nested radical expansion of functions
-QUESTION [6 votes]: Is there a 'best' way to make a nested radical expansion for an analytic function? This way seems convenient:
-$$f(x)=a_0+a_1x+a_2x^2+a_3x^3+\dots=\sqrt{a_0^2+2a_0a_1x+(a_1^2+2a_0a_2)x^2+\cdots}=$$
-$$=\sqrt{a_0^2+2a_0a_1x+(a_1^2+2a_0a_2)x^2 \sqrt{1+\cdots}}$$
-
-I conjecture that this infinitely nested radical expansion has the same interval of convergence as the original Taylor series. Is this correct?
-
-Also, taking $k$ 'roots' into account gives us $2k+1$ correct terms of the Taylor series.
-For example:
-$$e^x=\sqrt{1+2x+2x^2\sqrt{1+\frac{4}{3}x+\frac{10}{9}x^2\sqrt{1+\frac{32}{25}x+\frac{681}{625}x^2\sqrt{1+\dots}}}}=$$
-$$=1+x+\frac{x^2}{2}+\frac{x^3}{6}+\frac{x^4}{24}+\frac{x^5}{120}+\frac{x^6}{720}+\dots$$
-This expression converges for any $x$.
-$$\frac{1}{1-x}=\sqrt{1+2x+3x^2\sqrt{1+\frac{8}{3}x+\frac{46}{9}x^2\sqrt{1+\frac{76}{23}x+\frac{4089}{529}x^2\sqrt{1+\dots}}}}=$$
-$$=1+x+x^2+x^3+x^4+x^5+x^6+\dots$$
-This expression converges for $|x|<1$.
-This idea may seem pointless, since the coefficients in the radical expansion are very hard to calculate even for functions with 'simple' Taylor series.
-
-But is it possible that some function with a 'complicated' Taylor series has a simple nested radical expansion of this kind?
-
-For example, consider the easiest nested radical of this kind:
-$$f(x)=\sqrt{1+x+x^2\sqrt{1+x+x^2\sqrt{1+x+x^2\sqrt{1+\dots}}}}=$$
-$$=1+\frac{x}{2}+\frac{3x^2}{8}+\frac{x^3}{16}+\frac{11x^4}{128}-\frac{9x^5}{256}+\frac{27x^6}{1024}+\dots$$
-Functions of this last kind can always be found in closed form, if we assume the infinite nested radical converges.
-$$f(x)=\sqrt{1+a_1x+a_2x^2\sqrt{1+a_1x+a_2x^2\sqrt{1+a_1x+a_2x^2\sqrt{1+\dots}}}}=$$
-$$=\frac{a_2}{2}x^2+\sqrt{1+a_1x+\frac{a_2^2}{4}x^4}$$
-
-I've never seen this topic discussed anywhere, so a reference would be nice. The only thing I've seen is Ramanujan's nested radical, and it's usually presented as a funny trick, nothing more.
-
-REPLY [2 votes]: This is a really interesting question!
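-As a quick numerical sanity check, here is a minimal Python sketch (the function name nested_exp is mine, and the innermost tail $\sqrt{1+\dots}$ is crudely approximated by $1$) that evaluates the truncated nested radical for $e^x$ displayed in the question:
-    import math
-
-    # the three displayed levels of the nested radical for exp(x), innermost first
-    LEVELS = [(32 / 25, 681 / 625), (4 / 3, 10 / 9), (2.0, 2.0)]
-
-    def nested_exp(x):
-        r = 1.0  # stand-in for the untruncated tail sqrt(1 + ...)
-        for c1, c2 in LEVELS:
-            r = math.sqrt(1 + c1 * x + c2 * x * x * r)
-        return r
-
-    print(nested_exp(0.5), math.exp(0.5))  # about 1.6480 vs 1.6487
-With only three 'roots' the two values already agree to roughly three decimal places at $x=0.5$, consistent with the "$2k+1$ correct Taylor terms" count above.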
-I can't find any textbooks, but the following links to papers look helpful:
-http://s000.tinyupload.com/index.php?file_id=46943254406811078849
-http://s000.tinyupload.com/index.php?file_id=52136112093260715560
-http://www.fq.math.ca/Papers1/45-3/osler.pdf
-http://s000.tinyupload.com/index.php?file_id=68073100179415439358
-http://s000.tinyupload.com/index.php?file_id=43652915445104392351
-It seems like it is an area with room for much more research.
-Also see the references from this webpage:
-http://mathworld.wolfram.com/NestedRadical.html
-Finally, maybe it is possible to establish some conclusions using the theory of continued fractions, say by a logarithm-like transformation?
-https://en.wikipedia.org/wiki/Continued_fraction
-EDIT: Try these links instead for the files I had to upload.
-http://www.filehosting.org/file/details/560287/1.pdf
-http://www.filehosting.org/file/details/560288/2.pdf
-http://www.filehosting.org/file/details/560289/allen1985.pdf
-http://www.filehosting.org/file/details/560290/levin2005.pdf
-1. Nested Square Roots of 2
-Author(s): L. D. Servi
-Source: The American Mathematical Monthly, Vol. 110, No. 4 (Apr., 2003), pp. 326-330. Published by: Mathematical Association of America
-Stable URL: http://www.jstor.org/stable/3647881
-2. On Viète-like formulas
-Author(s): Samuel G. Moreno, Esther M. García-Caballero (Departamento de Matemáticas, Universidad de Jaén, 23071 Jaén, Spain)
-Source: Journal of Approximation Theory 174 (2013), 90-112. Received 7 November 2012; received in revised form 6 June 2013; accepted 27 June 2013; available online 11 July 2013
-3. Continued Radicals
-Author(s): Edward J. Allen
-Source: The Mathematical Gazette, Vol. 69, No. 450 (Dec., 1985), pp. 261-263. Published by: Mathematical Association
-Stable URL: http://www.jstor.org/stable/3617569
-4. A New Class of Infinite Products Generalizing Viète's Product Formula for π
-Author(s): Aaron Levin, adlevin@math.brown.edu (Department of Mathematics, Brown University, Box 1917, Providence, Rhode Island 02912)
-Source: The Ramanujan Journal, 10, 305-324, 2005. Springer Science + Business Media, Inc. Received November 11, 2002; accepted May 21, 2004
-Interesting related post: Can the general septic be solved by infinitely nested radicals?<|endoftext|>
-TITLE: The definition of orientation of a manifold from Spivak, Calculus on Manifolds
-QUESTION [5 votes]: In Spivak's Calculus on Manifolds the author uses a definition of orientation of a manifold which I do not understand, and which I have not found elsewhere. I cite:
-
-It is often necessary to choose an orientation $\mu_x$ for each tangent space $M_x$ of a manifold $M$. Such choices are called consistent provided that for every coordinate system $f : W \to \mathbb R^n$ and $a, b \in W$ the relation
-$$[f_{\ast}((e_1)_a), \ldots, f_{\ast}((e_k)_a)] = \mu_{f(a)}$$
-holds if and only if
-$$[f_{\ast}((e_1)_b), \ldots, f_{\ast}((e_k)_b)] = \mu_{f(b)}.$$
-
-For the notation: with $v_p$ he denotes the tangent vector $v$ at the point $p$; for a function $f : A \to \mathbb R^m$ with $A \subseteq \mathbb R^n$ differentiable on $A$ we have $f_{\ast}(v_p) := (Df(p)(v))_{f(p)}$; and for a basis $v_1, \ldots, v_n$ he denotes by $[v_1, \ldots, v_n]$ the orientation of $v_1, \ldots, v_n$.
Also a coordinate system is defined as:
-
-A subset $M$ of $\mathbb R^n$ is a $k$-dimensional manifold if and only if for each point $x \in M$ the following ''coordinate condition'' is satisfied:
-(C) There is an open set $U$ containing $x$, an open set $W \subseteq \mathbb R^k$, and a $1-1$ differentiable function $f : W \to \mathbb R^n$ such that
-(1) $f(W) = M \cap U$,
-(2) $f'(y)$ has rank $k$ for each $y \in W$,
-(3) $f^{-1} : f(W) \to W$ is continuous.
-Such a function $f$ is called a coordinate system around $x$.
-
-Now to my question: I totally do not understand his definition of orientability of a manifold. If I got him right, he requires that we find a number $\mu_x$ for each point $x$ on the manifold which fulfills a compatibility condition for every coordinate system. As $\det Df(p) \ne 0$ for each $p \in W$, I see that this determinant could not switch sign, but otherwise it could vary continuously. So it does not make sense to have numbers $\mu_x$ and require the above condition for all coordinate systems, as by composition with diffeomorphisms we can change the coordinate system and arrange that the determinant takes different values (despite still having everywhere the same sign). So this might be an error, and instead of ''for all coordinate systems'' he means ''for each $x$ there exists a coordinate system such that'' (i.e. we have an atlas whose transition maps have determinants with the same sign, which is closer to other definitions I found). Another possibility is how $[\cdot,\cdots,\cdot]$ is precisely defined, as he just wrote that it denotes the orientation (or equivalence class); but such a class is not a number, it's a set of ordered bases. It could, however, be represented by a number, so $[\cdot,\cdots,\cdot] \in \mathbb R$, but maybe we have $[\cdot,\cdots,\cdot]\in\{-1,1\}$, which would make some more sense.
-Other definitions I found assert the existence of an $n$-form $\omega$ on $M$ such that $\omega(p)$ is strictly positive on the tangent space at $p$, or that we can find an atlas such that the transition maps have positive determinant, or the definitions from here, which I hardly understand.
-
-REPLY [2 votes]: One of the things that makes this an awful textbook is that Spivak often defines a notion somewhere much earlier in the text, as a seemingly irrelevant comment, and fails to cross-reference it when he actually uses it. Check p. 82-83 for the definition of the $[f_*((e_1)_a),\ldots,f_*((e_k)_a)]$ notation.
-In short, your equivalence class idea is right. Generally, a form $\omega\in\mathrm{Alt}^k(V)$ divides the bases of $V$ into two groups: if $(v_1,\ldots, v_k)$ and $(w_1,\ldots, w_k)$ are bases, then the signs of $\omega(v_1,\ldots, v_k)$ and $\omega(w_1,\ldots, w_k)$ partition them into two equivalence classes, in a manner independent of $\omega$ and depending only on the determinant of the change-of-basis matrix between $v_i$ and $w_i$. The expression $[v_1,\ldots,v_k]$ is simply the equivalence class. Since Spivak defines the usual orientation to be that of the standard basis $(e_i)_{i=1}^{k}$, we can basically assign $[e_1,\ldots, e_k]=1$, while, for instance, the permuted basis $(e_1,\ldots,e_k,e_{k-1})$ would be assigned an orientation of $[e_1,\ldots, e_k, e_{k-1}]=-1$.
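-For a concrete check: relative to the standard basis, the class of a basis is just the sign of the determinant of the matrix whose columns are the basis vectors. A minimal NumPy sketch (the helper name orientation is mine):
-    import numpy as np
-
-    def orientation(basis):
-        # sign of det of the matrix with the basis vectors as columns:
-        # +1 for the class of the standard basis, -1 for the other class
-        return int(np.sign(np.linalg.det(np.column_stack(basis))))
-
-    e1, e2, e3 = np.eye(3)
-    print(orientation([e1, e2, e3]))  # 1: the standard orientation
-    print(orientation([e1, e3, e2]))  # -1: one transposition flips the class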
-Basically, all Spivak is saying is that orientations of the pushforward vectors $f_*((e_i)_p)\in\mathbb{R}^n_p$ need to agree in the above sense for every coordinate system $f:W\to \mathbb{R}^n$ and point $p\in W$.<|endoftext|>
-TITLE: Proof by induction, utilizing inductive assumption
-QUESTION [7 votes]: Show that for every natural number $n$ there exist integers $x,y$ such that
-$$4x^2 + 9y^2\equiv 1\pmod{n} $$
-The base case is trivial, since $1$ divides anything. Assume the claim holds for some $k\in\mathbb{N}$; we must show that the claim holds for $k+1$. We must find some $x', y'$ such that $4x'^2 + 9y'^2\equiv 1\pmod{k+1}$. My idea was to utilize the property:
-$$\forall t\neq 0 \;(a\equiv b\pmod{n}\Longleftrightarrow ta\equiv tb\pmod{tn}) $$
-Since for $k$ the claim holds, we have
-$$4x^2+9y^2\equiv 1\pmod{k}\Longleftrightarrow (k+1)(4x^2+9y^2)\equiv k+1\pmod{k(k+1)} $$
-but I don't know how to proceed (nor if this even leads anywhere), or how to better apply the inductive assumption.
-To elaborate, I can do the following:
-$$(4x^2 + 9y^2)k + (4x^2 + 9y^2-1) \equiv k\pmod{k(k+1)}\Longrightarrow 4x^2 + 9y^2 + z\equiv 1\pmod{k+1} $$
-The inductive assumption provides that $4x^2+9y^2-1$ is divisible by $k$, but the result is not in a convincing form.
-Alternative ideas for constructing the proof in question are also welcome, of course.
-
-REPLY [2 votes]: (This is not a complete answer; merely some initial observations.)
-You'll probably struggle to induct on $n$ like that. After all, knowing that $n \vert m$ (some integer $m$) tells you almost nothing about what $n+1$ divides.
-Notice that $4x^2 + 9y^2 = ||2x+3yi||^2$, so it is necessary and sufficient to be able to find $x, y$ such that $$||2x+3yi||^2 \equiv 1 \pmod{n}$$
-Consider $2x+3iy$ and $2\alpha + 3i\beta$.
-If we multiply those together, we get $$4x\alpha -9y \beta + 6 (\alpha y+\beta x)i$$
-This is of the form $2r + 3is$ if and only if $y$ or $\beta$ is even; and in that case, $s$ is also even.
-Therefore if we can find $2x+3iy, 2\alpha + 3 i \beta$ such that $4x^2+9y^2 \equiv a \pmod{n}$ and $4\alpha^2+9\beta^2 \equiv b \pmod{n}$, with either $y$ or $\beta$ even, then we can find $4r^2+9s^2 \equiv ab \pmod{n}$ with $s$ even.
-This is a kind of closure property which might be helpful, although it is no use in the case that $n$ is even, because then the left-hand side would always be even and the right-hand side always odd, so the "closed set" is in fact empty.
-It solves the question in the case that $n$ is odd, though.
-Indeed, if $n$ is odd, then for some $c \in \mathbb{N}^{>0}$ we have $4^c \equiv 1 \pmod{n}$.
-Letting $x=1, y=0$ yields $4x^2+9y^2 = 4$, and we can just use the closure result above $c$ times.
-Although after all this work, this is equivalent simply to putting $$x = 2^{c-1}, y=0$$
-Similarly, if there is $d$ such that $9^d \equiv 1 \pmod{n}$, then we can set $$x=0, y=3^{d-1}$$
-That holds if and only if $n \not \equiv 0 \pmod{3}$.
-Therefore the only remaining case is when $n$ is divisible by $6$.
-Letting $x=y=1$, we get $13$, so if there is $c$ such that $13^c \equiv 1 \pmod{n}$ then we are likewise done.
-But this happens iff $n \not \equiv 0 \pmod{13}$.
-Therefore the only remaining case is when $n$ is divisible by $6 \times 13 = 78$.<|endoftext|>
-TITLE: Root test is stronger than ratio test?
-QUESTION [5 votes]: I am a little bit confused regarding the meaning of the phrase "Root test is stronger than ratio test", and was hoping you will be able to help me figure it out.
As far as I can see here: https://www.maa.org/sites/default/files/0025570x33450.di021200.02p0190s.pdf
-the limit from the ratio test is greater than or equal to the limit from the root test. So my first question is: is there any example of a series $\sum a_n$ such that the limit from the ratio test is exactly $1$ (i.e., inconclusive), but the limit from the root test is less than $1$? (I.e., convergence can be proved by using the root test but not by using the ratio test.)
-If not, then is it correct that the meaning of "stronger" in this phrase refers to the case when the limit from the ratio test does not exist? (As in the classic example of a rearranged geometric series.)
-Hope you will be able to help.
-Thanks!
-related posts:
-Show root test is stronger than ratio test
-Inequality involving $\limsup$ and $\liminf$: $ \liminf(a_{n+1}/a_n) \le \liminf((a_n)^{(1/n)}) \le \limsup((a_n)^{(1/n)}) \le \limsup(a_{n+1}/a_n)$
-Do the sequences from the ratio and root tests converge to the same limit?
-
-REPLY [6 votes]: Consider the example of the series
-$$\sum 3^{-n-(-1)^n}$$
-The root test establishes the convergence, but the ratio test fails.
-Another example is the series with $n$th term
-$a_n=2^{-n}$ if $n$ is odd,
-$a_n=2^{-n+2}$ if $n$ is even.
-For the second series, whether $n$ runs through odd or even values as it tends to $\infty$,
-${a_n}^{\frac{1}{n}}\to\frac{1}{2}$.
-Hence by Cauchy's root test the series converges,
-but the ratio test gives $\frac{a_n}{a_{n+1}}=\frac{1}{2}$ when $n$ is odd and tends to $\infty$, and
-$\frac{a_n}{a_{n+1}}=8$ when $n$ is even and approaches $\infty$.
-Hence the ratio test fails.
-Sorry, I don't know MathJax; that is why I was a bit late...<|endoftext|>
-TITLE: Use this sequence to prove that there are infinitely many prime numbers.
-QUESTION [15 votes]: Problem:
-By considering this sequence of numbers
-$$2^1 + 1,\:\: 2^2 + 1,\:\: 2^4 + 1,\:\: 2^8 +1,\:\: 2^{16} +1,\:\: 2^{32}+1,\ldots$$
-prove that there are infinitely many prime numbers.
-
-
-I am thinking that if I can show that every pair of numbers in the sequence is relatively prime, then since each has at least one prime factor this would prove the existence of infinitely many primes.
-But I am new to discrete mathematics and number theory so I am not sure how to proceed.
-
-REPLY [9 votes]: If $2^{2^n}\equiv -1\pmod p$, then show that $2^{2^{m}}\not\equiv-1\pmod p$ for any $m>n$.<|endoftext|>
-TITLE: A spiralling sequence based on integer divisors. Has anyone noticed this before?
-QUESTION [29 votes]: Firstly, please excuse the informal style of my explanation, as I am not a mathematician, although I am aware that this can be explained in more formal terms.
-I have mapped integers to points on a circle on the complex plane in the following way:
-$$a_n=\prod _{j=1}^n (-1)^{2 (n \bmod j)/j}$$
-I then took a sequence of partial sums of $a_n$:
-$$b_n=\sum _{j=1}^n a_j$$
-I think of it as a path made of vectors of length 1 on the complex plane. I then plotted $b_n$, and I saw this beautiful vine-like shape:
-
-(for $n \le 45000$)
-
-(for $n \le 1000$)
-The path goes clockwise, then the spin accelerates until it turns anticlockwise and moves somewhere else.
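-For anyone who wants to experiment, here is a minimal Python sketch (assuming numpy and matplotlib are available) that recomputes $a_n$ from the definition above and plots the path of partial sums $b_n$:
-    import numpy as np
-    import matplotlib.pyplot as plt
-
-    N = 1000
-    # a_n = prod_{j=1}^n exp(2*pi*i*(n mod j)/j); note 2j below is the literal 2i
-    a = [np.prod([np.exp(2j * np.pi * (n % k) / k) for k in range(1, n + 1)])
-         for n in range(1, N + 1)]
-    b = np.cumsum(a)  # the path b_n in the complex plane
-    plt.plot(b.real, b.imag, linewidth=0.5)
-    plt.gca().set_aspect("equal")
-    plt.show()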
In order to find out more about the "whirlpools" and "peak flows", I checked the differences between consecutive terms of $a_n$, and obtained the following plot:
-
-The minima/"peak flows" fall at $n = 1, 4, 11, 30, 83, 226, ...$, which appears to correspond to http://oeis.org/A078141, and to be given by $$\left\lfloor e^{n-\gamma }\right\rfloor$$ I have checked that the "whirlpools" appear to correspond to $$\left\lfloor e^{n - \gamma + 1/2}\right\rfloor$$ I think this would mean that each branch of the vine is $e$ times larger than the previous one in some way...
-As for the positions of the "peak flows" on the complex plane, they are:
-$$1.0,\quad 0.5 +0.866025 i,\quad 0.695702 +1.84669 i,\quad 1.28152 +1.03625 i,\quad 1.75407 +1.91755 i$$
-for $n = 1, 4, 11, 30, 83$.
-Here is a plot of the absolute values of $b_n$:
-
-Here is an array plot of distances between terms of $a_n$, for $n \le 100$, just for fun, really:
-
-I have a bunch of questions:
-
-As per the title: is this something that has been noticed before?
-Why do the "whirlpools" fall where they do on the complex plane? What is special about those values?
-Is every term of $a_n$ unique? If so, is it possible that there exists a way of learning something about the divisors of $n$ from $a_n$ or $b_n$?
-
-REPLY [7 votes]: For the first question: You may be able to find some similar results on exponential sums in the book "Ten Lectures on the Interface Between Analytic Number Theory and Harmonic Analysis," by Montgomery. Also, Noam Elkies has a similar picture in part of his analytic number theory course.
-For the second question, "Why do the whirlpools occur where they do?": Let us first simplify the problem a little. To avoid confusion regarding which root to use in the definition of $a_n$, take $a_n=\prod_{j=1}^n e^{2\pi i(n\bmod j)/j}$. Now, observe that: $a_{n+1}=a_n\cdot e^{\frac{2\pi i n}{n+1}}\cdot\prod_{j=1}^{n+1}e^{2\pi i/j}=a_n\cdot e^{\frac{-2\pi i }{n+1}}\cdot\prod_{j=1}^{n+1}e^{2\pi i/j}$ (if you would like me to write out a proof of this 'observation,' leave a comment, and I'll add one). Then, by induction and the fact that $a_1=1$, we obtain the explicit formula $a_n=\prod_{m=0}^{n-2}\prod_{j=1}^{m+1}e^{\frac{2\pi i}{j}}$, which we can simplify down to: $$a_n=\exp(2\pi i nH_n)$$ where $H_n=\sum_{j=1}^n\frac{1}{j}$. Then we may compute $a_{n+1}-a_n=\exp(2\pi i(n+1)(\tfrac{1}{n+1}+H_n))-\exp(2\pi inH_n)$ $=\exp(2\pi inH_n)\exp(2\pi iH_n)-\exp(2\pi inH_n)=a_n(\exp(2\pi i H_n)-1)$
-Note that a whirlpool will correspond to the smallest possible change $|a_{n+1}-a_n|$: we are looking for places where adding consecutive terms does not change things by very much. So, using the approximation $H_n\approx \log n+\gamma$ helpfully pointed out by Marc Paul in the comments above, we get that the local minima of $|a_{n+1}-a_n|$ should occur near $\exp(2\pi i (\log(x)+\gamma))=1$, i.e. when $\log(x)+\gamma=n$ for some integer $n$, and this occurs exactly when $x=e^{n-\gamma}$. This is because then $|a_{n+1}-a_n|\approx 0$ will be as close to $0$ as possible. Similarly, the "maximum flows" should be near where $\exp(2\pi i(\log(x)+\gamma))=-1$, i.e. near $x=e^{n-\gamma+.5}$, because this is where $|a_{n+1}-a_n|\approx 2$, which is the maximum possible value.
-Warning: there is a little unfinished business ahead in the answer to your third question, "Is every term of $a_n$ unique?" In short, no. $a_1=1=a_2$.
However, in general if $a_n=a_m$ with $n>m$, then using our formula for $a_n$, we have $\exp(2\pi inH_n)=\exp(2\pi imH_m)$, which implies $2\pi inH_n-2\pi imH_m=2\pi i k$ for some integer $k$. Then we have: $$\sum\limits_{j=1}^m\frac{n-m}{j}+\sum\limits_{j=m+1}^n\frac{n}{j}=k$$ which I suspect has no solutions other than $n=2$, $m=1$, $k=2$, but which remains to be proved.<|endoftext|>
-TITLE: How is this fractal produced?
-QUESTION [7 votes]: It is stated here:
-
-Iterating the above optimized map $$f(z)=\frac{1}{4}(1 + 4z - (1 + 2z)\cos(\pi z))$$ in the complex plane produces the Collatz fractal.
-The point of view of iteration on the real line was investigated by Chamberland (1996),[23] and on the complex plane by Letherman, Schleicher, and Wood (1999).
-
-However, in the two mentioned publications I did not find this image. I would like to know which start value $x_0$ created this image.
-Am I correct that this image is simply a visualization of the sequence $(f^n(x_0))_{n \in \mathbb{N}}$, where the black parts show where the sequence remained for a long time? Does this sequence also end in a finite orbit?
-
-REPLY [7 votes]: The start value $x_0$ is determined by the coordinates of each pixel. Pixels are coloured according to how quickly the orbit for that pixel diverges (escape time colouring). Black pixels remained bounded within the iteration limit. I wrote a small GLSL implementation as a demonstration: https://www.shadertoy.com/view/Ms3XDn (it could be improved with smooth colouring and a user interface for moving around/zooming).
-Here is a screenshot of the shadertoy with center (2.66, 0) and size 0.5 units:
-
-And here's one with center (0, 0) and size 4 units:<|endoftext|>
-TITLE: Matrix inequality after taking inverse
-QUESTION [9 votes]: Let $A$ and $B$ be positive definite matrices with $ A\leq B$ in the sense that $B-A$ is positive definite. Is it true that $A^{-1} \geq B^{-1} $?
-
-REPLY [5 votes]: Consider $B-A\geq 0$. Using the Schur complement, this is equivalent to
-$$\begin{bmatrix}B&I\\I&A^{-1}\end{bmatrix}\geq 0, \quad B>0$$
-Since $A^{-1}>0$, now apply the Schur complement one more time to obtain
-$$A^{-1}-I(B)^{-1}I=A^{-1}-B^{-1}\geq 0$$
-Therefore we have $A^{-1}\geq B^{-1}$.<|endoftext|>
-TITLE: Stuck on basic limit problem: $\lim_{x \to 0} \frac{\sin(\tan x)}{\sin x}$
-QUESTION [7 votes]: Consider $\lim_{x \to 0} \frac{\sin(\tan x)}{\sin x}$. The answer is $1$. This is clear intuitively since $\tan x ≈ x$ for small $x$. How do you show this rigorously? In general, it does not hold that $\lim_{x \to p} \frac{f(g(x))}{f(x)} = 1$ if $g(x) - x \to 0$ as $x \to p$.
-No advanced techniques like series or L'Hôpital. This is an exercise from a section of a textbook which only presumes basic limit laws and continuity of composite continuous functions.
-This should be a simple problem but I seem to be stuck. I've tried various methods, including $\epsilon$-$\delta$, but I'm not getting anywhere. The composition, it seems to me, precludes algebraic simplification.
-
-REPLY [2 votes]: Here we present a solution that relies on only (i) elementary inequalities from geometry and (ii) the squeeze theorem.
-
-
-NOTE:
-We first note that $\frac{\sin(\tan (x))}{\sin(x)}$ is an even function of $x$ and hence, if the right-side limit $\lim_{x\to 0^+}\frac{\sin(\tan (x))}{\sin(x)}$ exists, then the limit $\lim_{x\to 0}\frac{\sin(\tan (x))}{\sin(x)}$ exists. The ensuing analysis focuses, therefore, on establishing the right-side limit.
-
-First, recall from elementary geometry that the sine function satisfies the inequalities
-$$x\cos(x)\le \sin(x)\le x \tag 1$$
-for $0\le x\le \pi/2$. From $(1)$ it is easy to see that
-$$x\le \tan(x)\le \frac{x}{\cos(x)} \tag 2$$
-for $0\le x<\pi/2$. Using $(1)$ and $(2)$, we can write for $0<x<\pi/2$ (with $x$ small enough that $\tan(x)\le \pi/2$, so that $(1)$ may be applied with $x$ replaced by $\tan(x)$)
-$$\cos(\tan(x))\,\frac{\tan(x)}{\sin(x)}\le \frac{\sin(\tan(x))}{\sin(x)}\le \frac{\tan(x)}{\sin(x)}=\frac{1}{\cos(x)}$$
-Since $\cos(\tan(x))/\cos(x)\to 1$ and $1/\cos(x)\to 1$ as $x\to 0^+$, the squeeze theorem yields
-$$\lim_{x\to 0^+}\frac{\sin(\tan(x))}{\sin(x)}=1$$
-as announced.<|endoftext|>
-TITLE: Are there any objects which aren't sets?
-QUESTION [53 votes]: What is an example of a mathematical object which isn't a set?
-The only object which is composed of zero objects is the empty set, which is a set by the ZFC axioms. Therefore all such objects are sets.
-Objects composed of many objects are obviously sets.
-What about objects composed of exactly one object? Are there any which aren't sets?
-
-REPLY [15 votes]: In my answer I'll list three things that are worth thinking about, that most people wouldn't intuitively consider as sets.
-Symbols
-To expand a bit on Henning's answer, I'll give another example. No symbol is a set. This includes the symbol "2", which is why in a strict sense "2" can never be a set, although "2" can be interpreted as a set in some models of some formal systems such as ZFC.
-Each symbol is designed and described in a meta-language to convey an intended meaning, but the symbol itself has no intrinsic structure. It is only the interpretation of the symbol that can be said to have any structure at all, and that of course depends on the interpretation. In ZFC the intended interpretation is that every object in the set-theoretic universe is a set, but what about the symbols used in the language of ZFC itself? You can encode each symbol as some set in ZFC, exactly like you can encode the concepts of natural numbers as sets, but that is still merely a representation and not the real thing, as Henning's answer explains.
-Similarly consider the fact that any proof in ZFC is a string of symbols. Again you can encode any finite string of symbols as a set in ZFC (or even as a natural number in PA) and be able to perform the usual operations on strings using suitable first-order formulae. But again the encoding is not the real thing. And this time it is even more obvious that it cannot be the real thing. For it is actually a theorem of Gödel that any sufficiently strong formal system does not fully capture everything that is true about itself. In particular there is a first-order statement Con(ZFC) over ZFC that states "There does not exist an encoding of a proof of a contradiction within ZFC.". According to the intended interpretation of the encoding, one would think that Con(ZFC) means the same thing as "ZFC is consistent" in the meta-system, but it does not, since if ZFC has a model whose encodings of strings are isomorphic to the strings in the meta-system, then Con(ZFC) is independent over ZFC. Furthermore, it is possible that ZFC is consistent but disproves Con(ZFC). The whole problem lies in the fact that no sufficiently strong formal system can pin down its intended interpretation, at least in classical first-order logic. So it is not just that strings are not sets, but even more so that it is impossible to fully define them in any formal system (not just ZFC).
-Urelements
-Unrelated to the above is the notion in some formal systems that not everything is a set. NFU is one such formal system invented by Quine, where there are urelements that are not sets, and it is meaningless to ask whether something is a member of an urelement.
The concept of urelements can be said to be motivated by the philosophical position of not assuming a particular kind of structure when it might be absent. In formal systems we can therefore handle real-world objects without any philosophical concern as to whether they are sets, since they could be urelements. One does not have to assume that urelements are totally atomic or indivisible in some sense; rather it is just that the formal system does not know about their internal structure.
-Functions and algorithms
-Lastly, we have functions. As you probably know, in ZFC a function can be encoded as a set of ordered pairs from its domain and codomain containing exactly one pair with first item $x$ for each $x$ in the domain. As before, this encoding is not the only possible way, so what really is a function? Moreover, we write things like "$f(g(x) \cup y) \in z$" where $f,g$ are functions with appropriate domains and codomains, which is technically impossible in pure ZFC without a syntactic transformation. This is because our intuitive notion of functions is not the encoding, even though it is more or less captured by the encoding. It is not completely captured because we can trivially conceive of the identity function on the entire universe, but that cannot be encoded in ZFC without the pain of contradiction. Nor can it be done in any extension of ZFC. Incidentally it can be done in NFU, but some would argue that NFU is about as unintuitive as ZFC, just in different aspects.
-Also, algorithms are the natural extension of functions. They still start with the intuitive notion of doing something based on the input and producing some output, but usually involve iterations of some sort. Again, we can encode them using unions of chains of the encodings of functions constructed by induction, but it's arguable whether that is natural. For this reason there are other notations devised in history, such as [typed] lambda calculus and μ-recursion and, most intuitively, programming languages. No programmer conceives of the algorithm embodied by his program as a set under normal circumstances.<|endoftext|>
-TITLE: What exactly is the 'induction trap'
-QUESTION [6 votes]: I've looked everywhere, and I've looked at a lot of examples. I don't quite understand what is so wrong about the induction trap. The most common example is the graph theory tree example (page 5 here: https://classes.soe.ucsc.edu/cmps102/Spring15/lect/1/ind-tantalo.pdf). Can anyone explain it?
-
-REPLY [4 votes]: I've never heard the term "induction trap", but from what I can figure from the link, it is when, instead of performing induction on the $n$ case directly, you step down to an $n-1$ case and then go back up to the $n$ case and show the $n+1$ case.
-The problem is that the $n-1$ case may not have been valid.
-The best example is the proof that in any group of horses, all horses in the group are the same color.
-If $n=1$ then all horses in any group of $1$ are the same color. True.
-Induction step: Assume there is an $n$ for which all $n$ horses in any group of $n$ are the same color. Remove a horse so that you have $n-1$ horses. PROBLEM! We never verified the $n = 0$ case. All zero horses are the same color, but only vacuously. There is no single color for them all to be. From here on our proof is doomed.
-OUR BIG INVALID STEP: All the remaining $n-1$ horses are the same color as the removed horse. NOT VERIFIED FOR $n-1 = 0$. It's only vacuously true.
-From here on our reasoning is sound but our premise is bad. Add a new horse; we have $n$ horses.
Any group of $n$ horses must be the same color. So the new horse must be the same color as the $n-1$ horses. Add the original horse. It's the same color as the $n-1$ horses and therefore as the new horse itself. So all $n+1$ horses are the same color. So in any group of $n+1$ horses, all are the same color.
-Had we started at $n = 0$ and initially stated "All $0$ horses are the same color", we'd recognize that as true, but see that no induction is possible from it (as they are the same color vacuously, but it is not the case that they are some specific color).<|endoftext|>
-TITLE: Why does $29^2 : 31^2 : 41^2$ have a close integer approximation with small numbers?
-QUESTION [11 votes]: "Everybody knows" that such coincidences as
-$$2\times2\times\overbrace{41\times41} = 6724 \approx 6728 = 2\times2\times2\times\overbrace{29\times29}$$
-(And why did I bother with the first two factors of $2$ on each side? Be patient.)
-are "explained" by the fact that $\dfrac{41}{29}$ is a convergent in the simple continued fraction expansion of $\sqrt 2$, and maybe
-$$2\times2\times2\times\overbrace{29\times29} = 6728 \approx 6727 = 7\times\overbrace{31\times31}$$
-has a similar "explanation", as presumably would the fact that
-$$2\times2\times\overbrace{41\times41} = 6724 \approx 6727 = 7\times\overbrace{31\times31}.$$
-Is there some such "explanation" of the simultaneous proximity of all three of these numbers to each other?
-
-REPLY [2 votes]: Because
- $$\sqrt{2}=\sqrt{\frac{8}{7}}\sqrt{\frac{7}{4}}\approx\frac{31}{29}\times\frac{41}{31}=\frac{41}{29}$$
-
-We may group the three numbers into one single expression
-$$4+6724\times6728=(6727-1)^2$$
-that can be written as
-$$\left(\frac{6727-1}{2}\right)^2-\left(\frac{6724}{2}\times\frac{6728}{2}\right)=1 $$
-or
-$$\left(\frac{7\times31^2-1}{2}\right)^2-2\left(2\times 29 \times 41\right)^2=1 $$
-This is Pell's equation
-$$X^2-dY^2=1$$
-with $X=\frac{7\times31^2-1}{2}$, $Y={2\times29\times41}$ and $d=2$,
-so the corresponding approximation to $\sqrt{2}$ is given by
-$$\sqrt{2}\approx\frac{7\times31^2-1}{4\times29\times41}=\frac{(31\sqrt{7}+1)(31\sqrt{7}-1)}{4\times29\times41}=\frac{3363}{2378}$$
-which is the tenth convergent of the continued fraction expansion.
-Factoring the numerator shows that the square in 6727 is related to $\sqrt{7}$, as in the answer by wythagoras.
-A simpler example is given by
-$$\begin{align}98&=2\times7^2\\
-99&=11\times3^2\\
-100&=1\times10^2
-\end{align}$$
-with
-$$99^2-98\times100=1$$
-and $$\sqrt{2}\approx\frac{11\times3^2}{7\times10}=\frac{99}{70}$$
-Approximating $\sqrt{2}$ with the sixth convergent explains the squares of $7$ and $10$, but we also need
-$$\sqrt{\frac{11}{1}}\approx\frac{10}{3}$$
-and / or
-$$\sqrt{\frac{11}{2}}\approx\frac{7}{3}$$
-to justify the square of $3$.
-In this example, the approximation for $\sqrt{2}$ can be obtained by direct multiplication of the approximations for $\sqrt{\frac{2}{11}}$ and $\sqrt{\frac{1}{11}}$, but this is not the case in the example from the question.
-$$\sqrt{2}=11\sqrt{\frac{2}{11}}\sqrt{\frac{1}{11}}\approx11\times\frac{3}{7}\times\frac{3}{10}=\frac{99}{70}$$
-However, dividing the approximations implied by the equations involving the number $7$, the convergent $\sqrt{2} \approx \frac{41}{29}$ is obtained.
-$$\sqrt{2}=\sqrt{\frac{8}{7}}\sqrt{\frac{7}{4}}\approx\frac{31}{29}\times\frac{41}{31}=\frac{41}{29}$$<|endoftext|>
-TITLE: Is the word "any" a $\forall$ or an $\exists$?
-QUESTION [18 votes]: I was wondering how the word "any" should be used in a mathematical context.
Is it a "for all" or an "it exists"? -For example, assume I stated something like - -A set $X$ is called nice if $P(x)$ holds for any $x\in X$. - -Would that mean that $X$ is nice only if all of its elements satisfy $P$, or that $X$ is nice as long as one of its elements satisfies $P$? -Personally, I always assumed the second case, but English is not my mother tongue, and I have seen the word being used both ways. - -REPLY [3 votes]: $\newcommand{\eps}{\varepsilon}$Good writing facilitates understanding. -In my experience, the greatest risk of confusion comes from a predicate of the form "if for any...": - -A function $f$ is continuous at $a$ if for any $\eps > 0$, there is a $\delta > 0$ such that if $|x - a| < \delta$, then $|f(x) - f(a)| < \eps$. (Here, "any" means "every".) -A function $f$ is discontinuous on a set $A$ if for any $a$ in $A$, $f$ is discontinuous at $a$. (Here, "any" means "some".) - -In each case, the intended meaning is far from obvious until the definition of continuity has been absorbed. Even then, "if for any" makes even a fluent reader stop and re-read, breaks the train of thought. -In other words, the phrase obstructs learning and hampers communication. It belongs only in manuals of expository sabotage.<|endoftext|> -TITLE: Can we find a general $\delta$ to prove the continuity of polynomials? -QUESTION [6 upvotes]: Polynomials are continuous functions. In other words, for all $\epsilon > 0$ and all $a$, there is some $\delta > 0$ such that if $|x-a|<\delta$, $|P(x)-P(a)|<\epsilon$ where $P(x)$ is a function of the form -$$P(x) = \sum_{i=0}^n a_i x^i$$ -The proof of this is usually done via limit theorems and mathematical induction, which circumvents the difficulty of finding $\delta$. -My question is, is it possible to prove the continuity of polynomials by explicitly finding a $\delta$ (a closed form), in terms of $n$, $a$, $a_i$ and $\epsilon$ where we have $\delta = \min(....)$. -If this is too ambitious, are there special cases (with, say, quadratics)? - -REPLY [5 votes]: Assume $|x - a| \leq \delta \leq 1$. We have -$$ -\begin{align*} -|P(x) - P(a)| &= \left|\sum_{i = 1}^n a_i(x - a)(x^{i-1} + x^{i-2}a + \dots + a^{i-1})\right| \\ -&\leq \sum_{i = 1}^n i|a_i||x - a| \max(|x|^{i-1}, |a|^{i-1}) \\ -&\leq |x-a|\max(|x|^{n-1},|a|^{n-1},1)\sum_{i=1}^n i|a_i| \\ -&\leq \delta (|a| + 1)^{n-1}n^2 \max_{i \geq 1}(|a_i|). -\end{align*}$$ -So (assuming $P(x)$ is non-constant) you can choose $\delta = \min(1,\epsilon/M)$, where $M = (|a|+1)^{n-1}n^2 \max_{i \geq 1}(|a_i|)$. -What I wrote works, but it might be possible to do this a bit more cleanly by using $|P(x) - P(a)| \leq |x-a| \max_{y \in I_{a,x}} |P'(y)|$, where $I_{a,x}$ is either $[a,x]$ or $[x,a]$.<|endoftext|> -TITLE: Probability of rolling dice twice -QUESTION [5 upvotes]: Did I calculate the correct probability for these simple scenarios: -1: What is the probability of rolling 3 and 4 with two dice in two rolls? -If you roll either 3 or 4 in the first roll, you put that die aside and roll the second die again. -I would break the probability down like this: - -1/18 chance of getting both 3 and 4 (or 4 and 3) in the first roll. -1/4 chance of rolling 3 (31, 32, 33, 35, 36, 13, 23, 53 and 63), and then 1/6 chance of rolling the 4 in the second roll. -1/4 chance of rolling 4 (41, 42, 44, 45, 46, 14, 24, 54 and 64), and then 1/6 chance of rolling the 3 in the second roll. -1/18 of rolling both 3 and 4 on the second roll (having rolled neither in the first). 
-
-This adds up to:
-$$\frac{1}{18} + (\frac{1}{4} \times \frac{1}{6}) + (\frac{1}{4} \times \frac{1}{6}) + \frac{1}{18} = \frac{7}{36}$$
-Is that correct?
-2: What is the probability of rolling double 6 in two rolls?
-Slightly different breakdown:
-
-1/36 chance of getting 66 in the first roll.
-10/36 chance of getting a single 6 (all combinations of 6, except 66), then 1/6 of getting the second 6 in the second roll.
-1/36 chance of getting 66 in the second roll.
-
-$$\frac{1}{36} + (\frac{5}{18} \times \frac{1}{6}) + \frac{1}{36} = \frac{11}{108}$$
-Correct?
-I've watched several videos on probability calculations, the usual "pick a coin in a bag with some unfair coins", and this is the first time I'm actually trying to apply what I've learned.
-Update: I was wrong
-Correct answers:
-$$\frac{1}{18} + (\frac{1}{4} \times \frac{1}{6}) + (\frac{1}{4} \times \frac{1}{6}) + (\frac{4}{9} \times \frac{1}{18}) = \frac{53}{324} = 16.4\%$$
-and
-$$\frac{1}{36} + (\frac{10}{36} \times \frac{1}{6}) + (\frac{25}{36} \times \frac{1}{36}) = \frac{121}{1296} = 9.3\% $$
-
-REPLY [2 votes]: They are not correct: when computing the probability of getting the exact needed roll on the second roll alone, you do not include the probability of getting to the second roll without having made progress.
-Which is to say: The probability of making no progress on the first roll and then getting 34 or 43 on the second is $\frac{4}{9} \times \frac{1}{18}$, because you only have a 4-in-9 chance of making no progress.
-Similarly, since there's a $\frac{25}{36}$ chance of making no progress on the first roll when the goal is to get two sixes, the probability of getting double six on the second roll after making no progress on the first is $\frac{25}{36}\times\frac{1}{36}$<|endoftext|>
-TITLE: Is there a notion of a complex derivative or complex integral?
-QUESTION [8 votes]: While reading about fractional calculus in http://arxiv.org/pdf/math/0110241.pdf , I came across the following quote:
-
-Fractional integration and fractional differentiation are generalisations of notions of integer-order integration and differentiation, and include n-th derivatives and n-folded integrals (n denotes an integer number) as particular cases.
-
-Let $D = \frac{d}{dx}$.
-We have found meaningful notions of $D^2$ and $D^{-1}$ (derivative and antiderivative, respectively, of integer order) and $D^{\frac{1}{2}}$ (derivative of fractional order), and we can say that integer differentiation (where we are given $D^n , n \in \mathbb{Z}$) is a special case of more general fractional differentiation (where we are given $D^n , n \in \mathbb{R}$).
-I'm wondering if there's some meaningful notion of "complex differentiation", say something like $D^i$, where fractional differentiation is a special case (note that antidifferentiation is a special case of differentiation; namely, given $D^n$, antidifferentiation occurs when $n$ is real and negative). Is this conceivable? If so, are there any apparent applications of this?
-Sorry if this is a dumb question (by dumb, I mean something that I could've found elsewhere online). I've searched around and haven't found anything on this yet.
-
-REPLY [2 votes]: I can present an example problem with a complex fractional derivative. Consider the Cauchy pulse (Ref: S.L.
Hahn, Hilbert Transforms in Signal Processing, Artech House, 1996)
-$$\psi=\frac{1}{1-i\tau}$$
-Differentiating $n$ times we obtain
-$$\psi(\tau,n)=\psi^{(n)}(\tau)=\frac{d^n\psi}{d\tau^n}=\frac{i^nn!}{(1-i\tau)^{n+1}}=\frac{i^n\Gamma(n+1)}{(1-i\tau)^{n+1}}$$
-And, of course, the Gamma function is our pathway to any fractional derivative. Now, insofar as $\tau \in (-\infty,\infty)$, we can make a change of variables so that $\tau=\tan\theta,\ \theta\in (-\pi/2,\pi/2)$. Then
-$$\frac{1}{(1-i\tau)^{n+1}}\to \frac{1}{(1-i\tan\theta)^{n+1}}=\cos^{n+1}\theta \ e^{i(n+1)\theta}$$
-As an example, we plot the $i^{3/2}$-th derivative of the Cauchy pulse [note that $i^{3/2}=\frac{\sqrt{2}}{2}(-1+i)$]. Clearly, we can create an infinite number of new plane curves in this manner.
-For a more detailed description of the Cauchy pulse and related functions see The Apple of My i. This was my first foray into recreational mathematics five years ago.<|endoftext|>
-TITLE: linearly independent generalized eigenvectors
-QUESTION [5 votes]: I'm self-studying Axler's Linear Algebra Done Right and I am not understanding one step of the proof of 8.13 (Linearly independent generalized eigenvectors). It is the same step as the one that "yields" 1.4.65 in the proof of Lemma 1.4.63 in this book, which also leaves it unexplained.
-We are multiplying both sides of the equation $\sum a_i v_i = 0$, where each $v_i$ is a generalized eigenvector, by $(T-\lambda_j)^k\prod_{i \ne j} (T - \lambda_i)^n$, where each $\lambda_i$ is the eigenvalue corresponding to $v_i$, and somehow getting:
-$a_j (T-\lambda_j)^k\prod_{i \ne j} (T - \lambda_i)^n v_j = 0$
-(i.e., all terms of $\sum a_i v_i$ disappear except for the one containing $v_j$)
-I understand that each $v_i$ is an element of $\operatorname{null}(T-\lambda_i)^n$ and so its term would disappear if the operator were applied directly to the $v_i$, but only the final $(T-\lambda_i)^n$ applies directly to each $v_i$. Are these transformations commutative for some reason?
-I'm likely missing something trivial here and would appreciate your insights. Thanks!
-
-REPLY [6 votes]: Just to bring this question officially to an end, I will make my comment an answer: yes, of course these transformations are commutative, because they are polynomials in $T$.<|endoftext|>
-TITLE: Examples of Waldhausen categories.
-QUESTION [6 votes]: Waldhausen's wS construction of K-theory assigns K-groups to an arbitrary small Waldhausen category. My main goal in reading this construction was to apply it to the case of exact categories with weak equivalences being isomorphisms. Now my question is, what are the other examples of Waldhausen categories? Are their K-groups seriously studied?
-Also, if I am not mistaken, the category of cofibrant objects in a model category satisfies the axioms of a Waldhausen category, but may not be a small category. Can we talk about the K-groups of some suitable small subcategory of such a category?
-
-REPLY [5 votes]: The most obvious non-exact examples of Waldhausen categories are just the non-additive ones: a Waldhausen category is only pointed. So Waldhausen's leading example in his original paper is the category $R(X)$ of "retractive" spaces over $X$, that is, retractions $r:Y\to X$ with a choice of a splitting $s:X\to Y$ of $r$. This is pointed, with zero object the identity of $X$, but is certainly not additive, so Waldhausen really needed the extra generality for his goal of defining an algebraic K-theory for topological spaces.
This invariant is indeed intensively studied, although to be fair it's possible to define it via an additive category given modern techniques of structured ring spectra.
-The thing with small categories is not so much to protect against set-theoretical difficulties as to avoid having all K-groups be zero, due to the so-called Eilenberg swindle that becomes possible given countable coproducts. So you do certainly want to restrict to a smaller subcategory. It's most common to use the compact objects, which e.g. gives the Waldhausen category of perfect complexes much used in algebraic geometry.<|endoftext|>
-TITLE: Is a straight line the shortest distance between two points?
-QUESTION [6 votes]: Quite simply, I heard a lot of talk about how a straight line isn't necessarily the shortest distance between two points.
-Is this true, and if it is, how would that work?
-
-REPLY [6 votes]: Here is a very non-technical answer: If our space were Euclidean, then a straight line would be the shortest distance between two points. And until Einstein, through his general theory of relativity, showed that space can actually be bent, everybody believed in and treated space as Euclidean.
-But now we know that the "physical" space is not Euclidean, and therefore a straight line is not necessarily the shortest distance between two points. Consider for example being on the surface of a solid (impenetrable) sphere. The shortest distance between two points on the sphere is not a straight line.
-I recommend reading about geodesics.<|endoftext|>
-TITLE: Is the Laplace transform essentially a generalized version of the Fourier transform?
-QUESTION [7 votes]: My current understanding of the two concepts is far from perfect, and I am essentially just a beginner.
-But it seems to me that while the Fourier transform tries to decompose functions as a superposition of waves, the Laplace transform does the same thing except with exponentials. I also often hear that the Laplace transform can be applied to more functions than the Fourier transform, without really knowing why.
-
-I was wondering if someone could provide some insight into the current "state of affairs" between the two (Laplace vs Fourier):
-
-what are the differences/similarities between Laplace & Fourier in what they accomplish
-what does the Laplace do that the Fourier does not, which makes it apply to more functions
-
-
-Preferably in simple English, if possible, without heavy reference to higher mathematical literature.
-
-REPLY [2 votes]: The Laplace transform is an important tool to study linear time evolution problems where the situation at time $t=0$ is given. Thus consider ($i$ is added for later convenience)
-\begin{equation*}
-\partial _{t}f(t)=-iAf(t),\;t\geqslant 0
-\end{equation*}
-where $A$ is some operator. Then the complex Laplace transform
-\begin{equation*}
-\hat{f}(z)=\int_{0}^{\infty }dt\exp [izt]f(t),\;\operatorname{Im}z>0
-\end{equation*}
-satisfies
-\begin{equation*}
-i[z-A]\hat{f}(z)=-f(0)\Rightarrow \hat{f}(z)=i[z-A]^{-1}f(0)
-\end{equation*}
-In particular the situation where $A$ is a self-adjoint operator in a Hilbert space is important (think of the Schrödinger equation of quantum mechanics where $A=H$, the Hamiltonian). The point is that it is much easier to study the properties of $A$ through its resolvent $[z-A]^{-1}$ than through the time evolution operator $\exp [-iAt]$.
-Actually the Laplace transform above can be considered as a Fourier transform.
Thus set
-\begin{equation*}
-z=\omega +i\delta ,\;\delta >0
-\end{equation*}
-Then ($\theta (t)$ is the Heaviside step function)
-\begin{equation*}
-\hat{f}(\omega +i\delta )=\int_{-\infty }^{+\infty }dt\exp [i\omega t]\theta(t)\exp [-\delta t]f(t)
-\end{equation*}
-so we are dealing with the Fourier transform of $\theta (t)\exp [-\delta t]f(t)$. In case $f(t)$ is square integrable this immediately gives the inverse Laplace transform
-\begin{eqnarray*}
-\theta (t)\exp [-\delta t]f(t) &=&\frac{1}{2\pi }\int_{-\infty }^{+\infty}d\omega \exp [-i\omega t]\hat{f}(\omega +i\delta ) \\
-f(t) &=&\frac{1}{2\pi }\int_{-\infty }^{+\infty }d\omega \exp [-i(\omega+i\delta )t]\hat{f}(\omega +i\delta ) \\
-&=&\frac{1}{2\pi }\int_{\Gamma }dz\exp [-izt]\hat{f}(z),\;t\geqslant 0
-\end{eqnarray*}
-where $\Gamma $ is the familiar Bromwich contour, a straight line parallel to and above the real axis.<|endoftext|>
-TITLE: Is the tensor product of non-commutative algebras a colimit?
-QUESTION [5 votes]: For $R$ a commutative ring, the tensor product of $R$-algebras is the coproduct in the category of commutative $R$-algebras. In the noncommutative case it is no longer the coproduct in the category of associative $R$-algebras, but it does satisfy a universal property, as given on Wikipedia. Is this some sort of colimit? If not, is there a straightforward description of this universal property via a functor (right?) adjoint to the tensor product, as is the case for tensor products of modules?
-
-REPLY [2 votes]: There is a tensor-hom adjunction for the tensor product of algebras, but it exists at the level of the Morita 2-category, rather than the 1-category of algebras.
-Namely, the Morita 2-category has objects $k$-algebras, and the category of morphisms $A \to B$ is the category $\text{Mod}(A^{op} \otimes B)$ of $(A, B)$-bimodules, where composition is given by tensor product. (All tensor products are over $k$.) The Morita 2-category has an internal hom $[A, B] = A^{op} \otimes B$, and its left adjoint is the tensor product in the sense that we have natural identifications
-$$[A \otimes B, C] \cong A^{op} \otimes B^{op} \otimes C \cong [A, B^{op} \otimes C] \cong [A, [B, C]].$$
-In the Morita 2-category the coproduct of two algebras $A$ and $B$ is $A \times B$ (rather than the free product), and tensor product distributes over this.<|endoftext|>
-TITLE: Elegantly Proving that $~\sqrt[5]{12}~-~\sqrt[12]5~>~\frac12$
-QUESTION [18 votes]: $\qquad$ How could we prove, without the aid of a calculator, that $~\sqrt[5]{12}~-~\sqrt[12]5~>~\dfrac12$ ?
-
-I have stumbled upon this mildly interesting numerical coincidence by accident, while pondering on another curious approximation, related to musical intervals. A quick computer search then also revealed that $~\sqrt[7]{12}~-~\sqrt[12]7~>~\tfrac14~$ and $~\sqrt[7]{15}~-~\sqrt[15]7~>~\tfrac13.~$ I am at a loss at finding a meaningful approach for any of the three cases. Moving the negative term to the right hand side, and then exponentiating, is, for painfully obvious reasons, unfeasible. Perhaps some clever manipulation of binomial series might show the way out of this impasse, but I fail to see how...
-
-REPLY [4 votes]: An approach using binomial series could look as follows:
-For small positive $x$ and $y$ one has
-$$(1+x)^{1/5}>1+{x\over5}-{2x^2\over25},\qquad (1+y)^{1/12}<1+{y\over12}\ .$$
-Using the ${\tt Rationalize}$ command in Mathematica one obtains, e.g., $12^{1/5}\doteq{13\over8}$.
In fact -$$12\cdot(8/13)^5-{18\over17}={1398\over 6\,311\,981}>0\ .$$ -It follows that -$$12^{1/5}>{13\over8}\left(1+{1\over17}\right)^{1/5}>{13\over8}\left(1+{1\over85}-{2\over 85^2}\right)\doteq1.64367\ .$$ -In the same way Mathematica produces $5^{1/12}\doteq{8\over7}$, and one then checks that -$$5\cdot (7/8)^{12}-{141\over140}=-{136\,294\,769\over2\,405\,181\,685\,760}<0\ .$$ -It follows that -$$5^{1/12}<{8\over7}\left(1+{1\over140}\right)^{1/12}<{8\over7}\left(1+{1\over12\cdot 140}\right)\doteq1.14354\ .$$ -This solution is not as elegant as the solution found by Giovanni Resta, but the involved figures are considerably smaller.<|endoftext|> -TITLE: What is the importance of “variety of algebras” in Universal Algebra? -QUESTION [12 upvotes]: Given an algebraic category, Birkhoff's Variety Theorem gives a categorical characterization of the full subcategories whose object-class forms a variety (i.e. can be defined by equations in the sense of Model Theory). -The theorem is often stated as being of fundamental importance to Universal Algebra. As far as its importance for metamathematical questions is concerned, this does not surprise me, as it describes a connection between Model Theory and Universal Algebra. But what about its “internal” importance for Universal Algebra? Suppose we are studying a certain class of objects in some algebraic category. To what extent could it be useful to know whether this class forms a variety? -My question only concerns the one implication of Birkhoff's theorem of course. It is clear that the result that varieties are closed under the taking of products, subalgebras and homomorphic images has a wide range of possible applications. But what about the converse? - -REPLY [4 votes]: Reformulating part of Alex Kruckman's answer: Knowing that some class $\mathcal{A}$ of algebras has an equational presentation is useful because we then know that an algebra belongs to $\mathcal{A}$ if every finitely generated subalgebra does.<|endoftext|> -TITLE: What is the infinite product of (primes^2+1)/(primes^2-1)? -QUESTION [7 upvotes]: I have shown that the infinite product $$\prod_{p \in \mathcal{P}}\frac{p^2+1}{p^2-1}$$ is equal to $\frac{5}{2}$ (pretty remarkable!). I have checked this numerically with Wolfram Alpha for up to $500000$ primes and it seems true. -I was wondering if this result is recorded anywhere? -Also if true, does this mean that there aren't infinitely many primes of the form $p^2+1$? - -REPLY [13 votes]: From MO: $$\frac{2}{5}=\frac{36}{90}=\frac{6^2}{90}=\frac{\zeta(4)}{\zeta(2)^2}=\prod_p\frac{(1-\frac{1}{p^2})^2}{(1-\frac{1}{p^4})}=\prod_p \left(\frac{(p^2-1)^2}{(p^2+1)(p^2-1)}\right)=\prod_p\left(\frac{p^2-1}{p^2+1}\right)$$ -$$\implies \prod_p \left(\frac{p^2-1}{p^2+1}\right)=\frac{2}{5}$$<|endoftext|> -TITLE: Verification : Prove the Fubini-Tonelli theorem when $(X,\mathcal{M},\mu)$ is any measure space and $Y$ is a countable set with the counting measure. -QUESTION [5 upvotes]: The Fubini-Tonelli theorem is valid when $(X,\mathcal{M},\mu)$ is an arbitrary measure space, $Y$ is a countable set, $\mathcal{N}=\mathcal{P}(Y)$, and $\nu$ is the counting measure on $Y$. -This is an exercise from Folland. I tried to prove this as in the proof of the original theorem, but as you can see below, the proof of the Tonelli theorem uses Theorem 2.36, which in turn requires the $\sigma$-finiteness of both measures. So how can I prove this? I would greatly appreciate any help. -Edit: I came up with a solution below but need it verified.
- -My Attempt -For convenience, let $Y=\mathbb{N}$, and let $Y$ be the union of $Y_n=\{1,\dots, n\}$. Let $f\in L^{+}(X \times Y)$. Let $f_n=f\chi_{X\times Y_n}$. Then clearly $f_n$ is an increasing sequence converging to $f$, so we can use the Monotone Convergence Theorem later. Now we have: -$\int f_n d(\mu \times \nu)=\sum_{y=1}^n\int_{X\times \{y\}} f d(\mu \times \nu)=\sum_{y=1}^n\int_X f_n (x,y) d\mu(x)$. -$g_n(x)=\int f_n(x,y)d\nu(y)=\sum_{y=1}^n f_n(x,y)$. -$h_n(y)=\int f_n(x,y)d\mu(x)$. -$\int g_n(x) d\mu(x)=\int \sum_{y=1}^n f_n(x,y)d\mu=\sum_{y=1}^n \int_X f_n(x,y) d\mu(x)$. -$\int h_n(y) d\nu(y)=\sum_{y=1}^n \int_X f_n(x,y) d\mu(x)$. -Also, $\lim_n g_n(x)=\lim_n \sum_{y=1}^n f_n(x,y)=\lim_n \sum_{y=1}^n f(x,y)\cdot \chi_{X\times Y_n}=\sum_{y=1}^\infty f(x,y)$. -and $\lim_n h_n(y)=\lim \int_X f_n(x,y) d\mu(x)=\int_X \lim f_n(x,y) d\mu(x)=\int_X f(x,y) d\mu(x)$ by the Monotone Convergence Theorem. -So the three integrals are equal, and now we can apply the Monotone Convergence Theorem and the rest of the proof is identical to the one given below. - -REPLY [2 votes]: There is only a need to prove the case where $f$ is a characteristic function. Following this, the rest of the proof for Theorem 2.37 applies. -To this end let $f=\chi_E$ where $E \subseteq X\times Y$. Writing $Y=\{1,2,3,\dots\}$, we may write $E$ as a disjoint union of countably many rectangles $E=\coprod\limits_{n\in \mathbb{Z}^+} (E_n \times\{n\})$. Going back to the definition of $\mu \times\nu$ as an outer measure that extends the pre-measure on the algebra of "finite disjoint rectangles", we see that -$$(\mu\times\nu)(E)=\sum\limits_{n=1}^{\infty}\mu(E_n)$$ -That is, -$$\int \chi_E d(\mu\times\nu) = \sum\limits_{n=1}^{\infty} \left[\int \chi_{E_n} d\mu(x)\right] =\int \left[\int \chi_{E_n} d\mu(x)\right] d\nu(y)$$ -We also know in general for $f_1,f_2,\dots\in L^{+}$ that $\int \sum f_n = \sum \int f_n$, so both of equations (2.38) are proven for $f$ being a characteristic function.<|endoftext|> -TITLE: Textbooks for learning Algebraic Topology -QUESTION [7 upvotes]: No doubt a similar question has been answered before, but I will make my ideal textbook specific. -Does anyone know of an Algebraic Topology textbook with the following properties? --Accessible (Nothing hardcore please, I would consider myself a very average student) --Solutions (They need not be worked solutions, although that would be nice; even one-liners telling me solutions to more computational questions would be really nice) --I am currently working through Munkres' Algebraic Topology; it is accessible but has no solutions, so it is very frustrating when I need to check whether or not I computed the homology group of the connected sum of a double torus correctly, and the like. --On that note, for the connected sum of two tori is $H_{1}(T\#T)=Z \oplus Z \oplus Z \oplus Z$ and $H_{2}(T\# T)=Z$? No working needed, unless you really want to.... - -REPLY [3 votes]: Elementary Topology Problem Textbook by Viro, Harlamov, etc. as an introduction -It covers only part of the subject, but it has solutions.<|endoftext|> -TITLE: Prove there are no prime numbers in the sequence $a_n=10017,100117,1001117,10011117, \dots$ -QUESTION [15 upvotes]: Define a sequence as $a_n=10017,100117,1001117,10011117$. (The $n$th term has $n$ ones after the two zeroes.) -I conjecture that there are no prime numbers in the sequence. I used Wolfram to find the first few factorisations: -$10017=3^3 \cdot 7 \cdot 53$ -$100117=53\cdot 1889$ -$1001117=13 \cdot 53\cdot1453$ and so on.
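-A quick way to test more terms (an added R sketch; the helper name an_mod is ad hoc, not from the original post) is to reduce $a_n$ modulo $53$ digit by digit, so no big integers are needed: -an_mod <- function(n, m = 53) {  # ad hoc helper: a_n mod m -digits <- c(1, 0, 0, rep(1, n), 7)  # the digits of a_n -r <- 0 -for (d in digits) r <- (10 * r + d) %% m  # Horner-style modular reduction -r -} -sapply(1:20, an_mod)  # returns 0 for every term tested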
-I've noticed the early terms all have a factor of $53$, so the problem can be restated as showing that all numbers of this form have a factor of $53$. However, I wouldn't know how to prove a statement like this. Nor am I sure that all of the terms do have a factor of $53$. -I began by writing the $n$th term of the sequence as -$a_n=10^{n+3}+10^n+10^{n-1}+10^{n-2}+10^{n-3}+\cdots+10^3+10^2+10^1+7$ but cannot continue the proof. - -REPLY [2 votes]: Another way to find the inductive relationship already cited, from a character manipulation point of view: -Consider any number in the sequence, $a_n$. To create the next number, you must: - -Subtract $17$, leaving a number terminating in two zeroes; -Divide by $10$, dropping one of the terminal zeroes; -Add $1$, changing the remaining terminal zero to a $1$; -Multiply by $100$, sticking a terminal double zero back on; -Add $17$, converting the terminal double zero back to $17$ - -Expressing this procedure algebraically, and simplifying: -$$a_{n+1}=\left (\frac{a_n-17}{10}+1 \right ) \times 100+17=10a_n-53$$<|endoftext|> -TITLE: Simplify $2(\sin^6x + \cos^6x) - 3(\sin^4 x + \cos^4 x) + 1$ -QUESTION [5 upvotes]: Here is the expression: - -$$2(\sin^6x + \cos^6x) - 3(\sin^4 x + \cos^4 x) + 1$$ - -The exercise is to evaluate it. -In my textbook the answer is $0$. -I tried to factor the expression, but it got me nowhere. - -REPLY [2 votes]: Consider $a^3+b^3=(a+b)^3-3ab(a+b)$ and $a^2+b^2=(a+b)^2-2ab$. For $a=\sin^2x$ and $b=\cos^2x$ we have -$$ -2(\sin^6x+\cos^6x)-3(\sin^4x+\cos^4x)+1= -2(a+b)^3-6ab(a+b)-3(a+b)^2+6ab+1 -$$ -However, $a+b=\sin^2x+\cos^2x=1$, so the expression simplifies to -$$ -2-6ab-3+6ab+1=0 -$$<|endoftext|> -TITLE: Every matrix can be written as a sum of unitary matrices? -QUESTION [20 upvotes]: Any matrix $A \in \mbox{GL}(n, \mathbb{C})$ can be written as a finite linear combination of elements $U_i\in U(n)$: -$$ A = \sum_{i} \lambda_i U_i$$ - -Is this true? How could I prove it? - -REPLY [8 votes]: It is known that every complex square matrix $A$ can be written as a linear combination of at most two unitary matrices. First, by scaling, you may assume that $\|A\|\le1$. Then, by singular value decomposition, you may also assume that -$$ -A=\operatorname{diag}(s_1,\ldots,s_n) -$$ -where the singular values $s_j$s are real nonnegative and bounded above by $1$. Now, as $s_j=\frac12(z_j+\bar{z}_j)$, where $z_j=s_j+i\sqrt{1-s_j^2}$ has unit modulus, it follows that $A$ is the average of two unitary matrices. -If I remember correctly, there was also an open conjecture that the least possible number $k(n)$ such that every real $n\times n$ matrix can be written as a linear combination of at most $k(n)$ real orthogonal matrices is equal to $4$. It has been shown that $k(n)\le 4$. See proposition 1 in Chi-Kwong Li and Edward Poon, Additive Decomposition of Real Matrices, Linear and Multilinear Algebra, 50(4):321-326, 2002.<|endoftext|> -TITLE: Is the map from a group to its center a group homomorphism? -QUESTION [8 upvotes]: Is the map $f: G \rightarrow Z(G)$, where $f$ maps elements not in the center to the identity and is the identity map restricted to the center, a group homomorphism? -I think it is by my computation, but I just want to make sure. Could anyone confirm it for me? -Thanks! - -REPLY [2 votes]: By your definition of $f$, the kernel consists of the identity together with all elements outside the center, so $$|\ker f|=|G|-|Z(G)|+1$$ -Since the kernel of a homomorphism is a subgroup, its order must divide $|G|$. So every group $G\quad \text{s.t.}\quad |G|-|Z(G)|+1\nmid |G|$ is a counterexample.<|endoftext|> -TITLE: Why is $\arccos(-\frac 13)$ the optimal angle between bonds in a methane ($\rm CH_4$) molecule?
-QUESTION [10 upvotes]: Background: In a CH4 molecule, there are 4 C-H bonds that repel each other. Essentially the mathematical problem is how to distribute 4 points on a unit sphere where the points have maximal mutual distance - or, how to distribute 4 position vectors such that the endpoint distances between them are maximized. -This question Angle between lines joining tetrahedron center to vertices shows that the angle between the points/vectors forming the vertices of a regular tetrahedron is $\arccos(-\frac13)$. -I have been told by chemistry teachers that that is the shape which maximizes mutual distance between the points. However, that link was not relevant to me because I am trying to figure out a proof that the tetrahedral model maximizes distance and is the only model which maximizes it. -Therefore, the very similar question Calculations of angles between bonds in CH₄ (Methane) molecule was also irrelevant, because the answer started off with "Note that a regular tetrahedron can be inscribed in alternating vertices of a cube.", thus assuming that the tetrahedral shape was optimal without any mathematical basis. -So my question is, is $\arccos(-\frac13)$ the optimal angle, and if so, how can I approach this question to prove it? I have limited knowledge in vector calculus but I am willing to learn if this is some multivariable optimization problem. -Thanks! - -REPLY [5 votes]: It might be helpful to look at a 2D model. You'll need a nickel and two pennies. -Put the coins flat on a table however you like. Then let your friend move the coins according to some rules. - -If the pennies are less than 2 diameters apart or more than 4 diameters apart, they can move the pennies to a more reasonable distance apart. -If the nickel can be moved to a spot either closer or more equidistant to the pennies, they can move the nickel. - -Every time your friend can move the coins, you need to pay them $100. -What would the rules be for 3 pennies and a nickel? -Alternately, make a computer model. You'll quickly get to models like the Thomson problem. But notice that the 2010 proof for $5$ points was incredibly difficult, and $7$ points is considered unproven. -The 4 point case involves a very strong attractor, the tetrahedron. Add a few more atoms and it's a vastly harder problem. A molecule doesn't care if there is an optimal solution -- if the atoms find a local minimum, they will likely stay in that weird configuration. -Methane doesn't have any other local minima to worry about. -Water has two hydrogens and two free electrons, making the model more interesting.<|endoftext|> -TITLE: Finite extensions of $\mathbb Q_p$ are exactly completions of number fields -QUESTION [5 upvotes]: I read that every finite extension of $\mathbb{Q}_p$ is in fact a completion of a number field $K$ at a place of $K$. I also heard that this is a consequence of Krasner's Lemma. -Do you have any hint on how to prove this? -And how can I prove conversely that every completion of a number field is a finite extension of $\mathbb{Q}_p$? -Every hint is strongly appreciated. Thanks! - -REPLY [4 votes]: Maybe Krasner's Lemma is the most direct route to your first statement. It says, very roughly, that if you take an irreducible polynomial over $\Bbb Q_p$ and jiggle its coefficients just slightly, each root $\rho'$ of the new polynomial will be close to a unique root $\rho$ of the original, and in fact the two will generate the same field over $\Bbb Q_p$.
So, if $K=\Bbb Q_p(\alpha)$, take its minimal polynomial $f(X)$ over $\Bbb Q_p$ and jiggle it to an $\,\bar f\in\Bbb Q[X]$. Then $\,\bar f$ has a root $\alpha'$ also generating $K$ over $\Bbb Q_p$, but $\alpha'$ is algebraic (over $\Bbb Q$). -In case your extension $K$ is unramified over $\Bbb Q_p$, you don't need Krasner or anything like him. For, such a $K$ can be gotten by adjoining a root of unity to $\Bbb Q_p$. To get your field that completes to give $K$, adjoin a root of unity of the same order to $\Bbb Q$. Of course the really interesting extensions are the ramified ones, so this argument doesn't apply.<|endoftext|> -TITLE: Two interview questions -QUESTION [31 upvotes]: I recently came across two interview questions for admission in B.Math at a university. I gave the two questions a try and want to know if my solutions are correct or not. -Q1: Given that $x^4-4x^3+ax^2+bx+1=0$ has all positive roots and $a,b\in\Bbb R$, prove that all the roots are equal. -My solution: Let $p,q,r,s$ be the four roots of the given equation. Using Vieta's formulas, we have, -$$p+q+r+s=4\quad\textrm{and}\quad pqrs=1$$ -Since $p,q,r,s$ are all positive, we have, by the AM-GM inequality, -$$\frac{p+q+r+s}{4}\geq\sqrt[4]{pqrs}=\sqrt[4]{1}=1\implies p+q+r+s\geq 4$$ -Since we got $p+q+r+s=4$ using Vieta's formulas and knowing that the equality case in the AM-GM inequality holds iff $p=q=r=s$, we conclude that all the roots to the given equation are equal. $_\square$ - -Q2: Without actually computing anything, find the value of $\dbinom{p+q}2-\dbinom p2-\dbinom q2$. -My solution: Since we are told not to actually compute anything, I suppose that they were asking for a combinatorial proof. I have the following argument: -Suppose we have $p+q$ people in a room with $p$ people in Group 1 and $q$ people in Group 2. Then, $\dbinom{p+q}2$ counts the number of ways we can select two people from the people in the entire room. However, $\dbinom p2$ and $\dbinom q2$ count the number of ways we can select two people from Group 1 and Group 2 respectively. -Now, we can select two people from the entire room by either taking two people from Group 1 or taking two people from Group 2 or taking one person from Group 1 and another from Group 2. These are the only possible methods. -So, the expression we have counts the number of ways we can select one person from Group 1 containing $p$ people and another person from Group 2 containing $q$ people. By the rule of product, we have $pq$ ways to do this and hence the value of the given expression is $pq$. $_\square$ - -REPLY [2 votes]: Citing the number of upvotes on Brian M. Scott's comment, I believe it's safe to say your answers are correct. I would just like to officially answer this question to remove it from the Unanswered Questions queue.<|endoftext|> -TITLE: Show that 13 is the largest prime which divides two consecutive terms of $n^2 + 3$. -QUESTION [5 upvotes]: Show that 13 is the largest prime which divides two consecutive terms of $n^2 + 3$. - -The integers are $39$ and $52$. First of all, I set the variable for the number as $k$. So, $k|n^2 +3$ and $k|n^2 + 2n+ 4$ which imply that $k|2n+1$. $n=6$ over here. And the fact that 13 is the largest 'prime' makes me feel it is hard to prove. That's all I have managed to get. I need a few hints to set me in the right direction. Thanks. - -REPLY [11 votes]: If $k\mid n^2+3$ and $k\mid n^2+2n+4$, then as you noted, $k\mid 2n+1$. -But then also from $k\mid n^2+3$ we have $k\mid 2n^2+6$, and from $k\mid 2n+1$ we have $k\mid 2n^2+n$.
-Hence $k\mid (2n^2+n)-(2n^2+6)=n-6$. And so $k\mid 2n-12$. -From $k\mid 2n-12$ and $k\mid 2n+1$, we obtain $k\mid 13$.<|endoftext|> -TITLE: Prove that the function is eventually periodic to the origin. -QUESTION [6 upvotes]: Let $f:\mathbb{Z}^4 \rightarrow \mathbb{Z}^4$ by -$f(w,x,y,z) = (\mid w-x \mid,\mid x-y \mid,\mid y-z \mid,\mid z-w \mid)$ - -Prove that for any $(w,x,y,z) \in \mathbb{Z}^4$ there is $n>0$ such that $f^n(w,x,y,z)=(0,0,0,0)$ -Prove that there is no $n$ such that $f^n(w,x,y,z)=(0,0,0,0)$ for all $(w,x,y,z) \in \mathbb{Z}^4$ - - -I wrote computer code in R that can execute this function for any integers $w,x,y,z$ and $n$. See below: -rm(list = ls()) -# one application of f to the quadruple (w, x, y, z) -myfun <- function(w, x, y, z) { -c(abs(w - x), abs(x - y), abs(y - z), abs(z - w)) -} -w <- 1 -x <- 3 -y <- 534 -z <- 3 -n <- 6 - -# iterate f n times, storing each image as a row -outcome <- matrix(nrow = n, ncol = 4) -for (i in 1:n) { -outcome[i, ] <- myfun(w, x, y, z) -w <- outcome[i, 1] -x <- outcome[i, 2] -y <- outcome[i, 3] -z <- outcome[i, 4] -} -outcome - - -After executing hundreds of points I see that after $n=5$ the function goes to $(0,0,0,0)$. I tried using brute force and applied the function 6 times by hand to see if it cancels out, but I've ended up with a very complex function. There must be a cleaner way of proving this question. Point me in the right direction please. - -REPLY [2 votes]: $\bf{First\ part}$. We prove the statement given by Paul Sinclair in a comment: -The max element never grows (other than the first time, when the arguments can be negative) and it decreases after at most 4 applications, if you start at a non-zero quadruple. - -We assume that $f:\Bbb{N}_0^4\to \Bbb{N}_0^4$. -Then clearly $Max(f(V))\le Max(V)$ for all $V$. -If $Max(f(V))= Max(V)$, for some $V\ne 0$, then one of the entries of $V$ is $0$ and it is adjacent to the maximal value $m$ (note that we consider the first entry adjacent to the fourth, due to invariance under rotation). -Consequently, if all entries of $V$ are different from their adjacent entries, then none of the entries of $f(V)$ is zero, hence -$Max(f(f(V)))< Max(f(V))\le Max(V)$. - -Now start with a $0\ne V\in \Bbb{N}_0^4$, and we have to prove that $Max(f^k(V))$ eventually reaches $0$: the maximum decreases after at most $4$ applications of $f$, and a strictly decreasing sequence of non-negative integers must terminate, so $f^n(V)=(0,0,0,0)$ for some $n$.<|endoftext|> -TITLE: Is the algebra of universally integrable functions a von Neumann algebra? -QUESTION [5 upvotes]: I would like to continue this discussion. -Let $X$ be a compact space. Let us call a function $f:X\to {\mathbb C}$ universally integrable if it is integrable with respect to each regular Borel measure $\mu$ on $X$ (one can imagine $\mu$ as an arbitrary positive continuous functional on ${\mathcal C}(X)$). We denote by ${\mathcal U}(X)$ the space of all universally integrable functions on $X$. -Nate Eldredge noticed here that ${\mathcal U}(X)$ is a $C^*$-algebra with respect to the sup-norm: -$$ -||f||=\sup_{x\in X}|f(x)|. -$$ -Question: - -Is ${\mathcal U}(X)$ a von Neumann algebra with respect to this norm? - -REPLY [3 votes]: The following is a slight elaboration on Martin Argerami's not-quite-complete (now deleted) answer. Let $E\subset X$ be any set which is not universally measurable (i.e., its characteristic function is not universally integrable). For each finite subset $F\subset E$, note that the characteristic function $1_F$ is in $\mathcal{U}(X)$. These characteristic functions form a bounded increasing net $(1_F)$ in $\mathcal{U}(X)$. If $\mathcal{U}(X)$ were a von Neumann algebra, it would be monotone-complete, and so $(1_F)$ would have a supremum $f\in\mathcal{U}(X)$.
Note that for each $x\in X\setminus E$, $1_{X\setminus\{x\}}\in\mathcal{U}(X)$ is an upper bound for each $1_F$, so we have $f\leq 1_{X\setminus\{x\}}$ for all such $x$. The only function $f$ on $X$ which satisfies $1_F\leq f$ whenever $F\subset E$ is finite and $f\leq 1_{X\setminus\{x\}}$ whenever $x\in X\setminus E$ is $f=1_E$. But $1_E\not\in\mathcal{U}(X)$, so no such supremum $f\in\mathcal{U}(X)$ can exist. Thus $\mathcal{U}(X)$ is not a von Neumann algebra, at least whenever $X$ is nontrivial enough that such a set $E$ exists. -More generally, a similar argument shows that if $A$ is a *-subalgebra of the algebra of all bounded functions on a set $X$ which contains all characteristic functions of singletons, then if $A$ is a von Neumann algebra it must actually be the entire algebra of bounded functions.<|endoftext|> -TITLE: Fastest way to show that $D_6 \to S_5$ is an injective homomorphism -QUESTION [5 upvotes]: I want to show that there is an injective homomorphism $D_6 \to S_5$, where $D_6$ denotes the dihedral group of order 12 and $S_5$ the symmetric group. But I'm not sure how I can do this efficiently. -I define $f: D_6 \to S_5$ by $f(\sigma) = (12)$ and $f(\rho) = (123)(45)$, with $\sigma$ being a reflection and $\rho$ being a rotation. -I know that $D_6$ is generated by $\rho$ and $\sigma$ and that $S_5$ is generated by $(12), (23), (34), (45)$. -So what is the fastest way, for someone who is just starting with algebra, to show that this is a homomorphism? Do I have to show it explicitly for all 12 elements? -What confuses me is that you have to show for all $x,y \in D_6$ we have $f(xy)=f(x)f(y)$, while $x$ and $y$ can be any combination of $\rho$ and $\sigma$. -Lastly, what is the fastest way to show its kernel is trivial without going over all elements? - -REPLY [2 votes]: In my opinion, the fastest way to show $D_6\rightarrow S_5$ is injective is to show that $S_5$ has a subgroup isomorphic to $D_6$, so that $D_6$ embeds in $S_5$. -$$D_6\cong C_2 \times S_3\leq S_5$$<|endoftext|> -TITLE: Why does L'Hopital's rule fail in calculating $\lim_{x \to \infty} \frac{x}{x+\sin(x)}$? -QUESTION [126 upvotes]: $$\lim_{x \to \infty} \frac{x}{x+\sin(x)}$$ - -This is of the indeterminate form of type $\frac{\infty}{\infty}$, so we can apply l'Hopital's rule: -$$\lim_{x\to\infty}\frac{x}{x+\sin(x)}=\lim_{x\to\infty}\frac{(x)'}{(x+\sin(x))'}=\lim_{x\to\infty}\frac{1}{1+\cos(x)}$$ -This limit doesn't exist, but the initial limit clearly approaches $1$. Where am I wrong? - -REPLY [8 votes]: There is another useful rule, which I don't seem to have seen written down explicitly: - -Let $f, g, r$ and $s$ be functions such that $g\to\infty$ and $r, s$ are bounded. - Then the limit of $\dfrac{f}{g}$ and the limit of $\dfrac{f + r}{g + s}$ give the same result. - -Applied here, since $\sin x$ is bounded, the limit is the same as the limit of $\dfrac{x}{x}$.<|endoftext|> -TITLE: Describe all ring homomorphisms from $\mathbb{R}[T] \rightarrow \mathbb{R}[T]$ -QUESTION [7 upvotes]: One of the problems in a problem set I was given as homework in my Algebra course poses the following problem: - -Describe all ring homomorphisms $\mathbb{R}[T] \rightarrow \mathbb{R}[T]$. Which of them are isomorphisms? - -I would like some suggestions towards the right direction, not the answer to the problem.
-This is what I've got so far: -Given a ring homomorphism $f:\mathbb{R}[T] \rightarrow \mathbb{R}[T]$: -Given an arbitrary polynomial $p(T) = a_{0} + a_{1}T + \ldots + a_nT^n$ we have that $f(p) = f(a_0) + f(a_1)f(T) + \ldots + f(a_n)f(T)^n$. So we get that $f$ is completely determined by the values it assumes on $\mathbb{R}$ and $f(T)$. -So this problem may now be separated into two: - -Classifying all homomorphisms of the form $f:\mathbb{R} \rightarrow \mathbb{R}[T]$ -Classifying all possible values $f(T)$ - -With respect to (1): -Conjecture -The only ring homomorphism is $f(x)=x$ (we put as a condition that $f(1) = 1$ in the definition the professor gave us, so that discards $f(x)=0$) -I've shown by induction that $f(n) = n, ~\forall n\in\mathbb{N}$, then $f(m) = m, ~\forall m\in \mathbb{Z}$, then $f(q) = q, ~\forall q\in \mathbb{Q}$ but I -am having problems showing that $f(\alpha) = \alpha, ~\forall \alpha \in \mathbb{Q}'$ because I don't really know if $f(\alpha) \in \mathbb{R}$. -If I knew that $f(\mathbb{R}) \subset \mathbb{R}$ then I could do something like $f(\alpha) = f(\sqrt{\alpha})^2 > 0$ if $\alpha > 0$. And this would help me prove that $f(\beta) = \beta$ for all irrationals too. The only problem is: what happens if, say, $f(\alpha) = T$? Then $>$ would make no sense. -I think I solved this problem but I am not sure, maybe here is where you guys can help me a little. -If we suppose that $f(\alpha)$ is a polynomial with degree $n$, we can then compute $f(\alpha^\frac{1}{n+1}) = f(\alpha)^\frac{1}{n+1}$ and that would be in $\mathbb{R}[T]$ only if the degree, $n$, of $f(\alpha)$ were $0$, thus proving that indeed $f(\alpha) \in \mathbb{R}$. -Is there any mistake or an easier way? Or is there any useful comment anyone wants to make that could help me out? Thanks in advance :) - -REPLY [2 votes]: The only homomorphism on $\mathbb R$ that fixes $1$ is the identity map. For the proof you can see Ring homomorphisms $\mathbb R\to\mathbb R$. -Since $f$ is a homomorphism that fixes $1$, the image of every invertible element in $\mathbb R[T]$ must be invertible, and we get $f_{|\mathbb R}=Id_{\mathbb R}$. -$f$ is completely determined by the image of $T$, and so -$f(a_0+a_1T+...+a_nT^n)=a_0+a_1p(T)+...+a_np(T)^n$ where $p(T)=f(T)$. -Finally, $f$ is an isomorphism iff $f(T)=aT+b$ with $a\neq 0$.<|endoftext|> -TITLE: An example from Lang's Algebra about primary ideal -QUESTION [5 upvotes]: On page 421 in Lang's Algebra, the author writes - -Let $R$ be a factorial ring with a prime element $t$. Let $A$ be the subring of polynomials $f(X)∈R[X]$ such that - $$f(X)=a_0 + a_1X + \dotsb $$ - with $a_1$ divisible by $t$. Let $P=(tX,X^2)$. Then $P$ is prime. - -My question is: why is $P$ prime? - -REPLY [4 votes]: The claim is wrong: $P$ is not prime in $A$. - -The product of the elements $t\in A$ and $X^3\in A$ belongs to $P=(tX,X^2)$, but $t\notin P$ and $X^3\notin P$. -Remark. If instead we take $P=(tX,X^2,X^3)$, then $P$ is prime and $P^2$ is not primary.<|endoftext|> -TITLE: How many n-th Order Partial Derivatives Exist for a Function of k Variables? -QUESTION [6 upvotes]: Example: -Let's say for example I have a function, $f$, of $2$ variables: -$f(x,y)$ -For this function there exist $2$ first-order partial derivatives, namely: -$f_x = \frac{\partial f}{\partial x}$ -$f_y = \frac{\partial f}{\partial y}$ -Then if we are to differentiate further we will find that there are $4$ computable second-order partial derivatives.
-$f_{xx} = \frac{\partial^2 f}{\partial x^2}$ -$f_{xy} = \frac{\partial^2 f}{{\partial x} {\partial y}}$ -$f_{yy} = \frac{\partial^2 f}{\partial y^2}$ -$f_{yx} = \frac{\partial^2 f}{{\partial y} {\partial x}}$ -However, due to the Equality of Mixed Partials (https://en.wikipedia.org/wiki/Symmetry_of_second_derivatives), two of those second-order partial derivatives are equivalent, $f_{xy} = f_{yx}$, and thus we are left with $3$ second-order partial derivatives for a function of 2 variables. -$f_{xx} = \frac{\partial^2 f}{\partial x^2}$ -$f_{xy} = \frac{\partial^2 f}{{\partial x} {\partial y}} \Leftrightarrow f_{yx} = \frac{\partial^2 f}{{\partial y} {\partial x}}$ -$f_{yy} = \frac{\partial^2 f}{\partial y^2}$ -Question: Given a function of k variables: -$f(x_1 , x_2,x_3,\dots,x_{k-1},x_k)$ -Is there a formula to find the number of $n^{th}$-order partial derivatives (where $n$ is the order of the partial derivative) for a function of $k$ variables? -For example, where $n=1$ (i.e. the first-order derivatives), there would be $k$ partial derivatives, just as in the example above, where for a function of $2$ variables there exist $2$ first-order derivatives. - -REPLY [4 votes]: In the noncommutative case (I know you weren't asking for it, but I will include it for the sake of completeness), the process of generating the sum of all the derivatives is nothing but the successive application of the differential operator -$$\left(\frac{\partial}{\partial x_1}+...+\frac{\partial}{\partial x_k}\right)$$ -For instance, if $k=2$ -$$\left(\frac{\partial}{\partial x}+\frac{\partial}{\partial y}\right)f=\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}$$ -I will develop an example with $k=3$. One good way of calculating all the derivatives is to draw a table. In the first column I write the operators. In the first row, I write all the functions to which the operators shall be applied. The entries of the table are the results of applying the corresponding operator to the functions. Let's say then that $f=f(x,y,z)$.
The process of obtaining all the first order partial derivatives of $f$ could be described by -\begin{array}{cc} - & f\\ -\frac{\partial}{\partial x} & \frac{\partial f}{\partial x}\\ -\frac{\partial}{\partial y} & \frac{\partial f}{\partial y}\\ -\frac{\partial}{\partial z} & \frac{\partial f}{\partial z} -\end{array} -To get all the second order derivatives, we just use all the entries of this table as the target functions of another table, say -\begin{array}{cccc} - & \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} & \frac{\partial f}{\partial z}\\ -\frac{\partial}{\partial x} & \frac{\partial^{2}f}{\partial x^{2}} & \frac{\partial^{2}f}{\partial x\partial y} & \frac{\partial^{2}f}{\partial x\partial z}\\ -\frac{\partial}{\partial y} & \frac{\partial^{2}f}{\partial y\partial x} & \frac{\partial^{2}f}{\partial y^{2}} & \frac{\partial^{2}f}{\partial y\partial z}\\ -\frac{\partial}{\partial z} & \frac{\partial^{2}f}{\partial z\partial x} & \frac{\partial^{2}f}{\partial z\partial y} & \frac{\partial^{2}f}{\partial z^{2}} -\end{array} -The third order derivatives -\begin{array}{cccccccccc} - & \frac{\partial^{2}f}{\partial x^{2}} & \frac{\partial^{2}f}{\partial x\partial y} & \frac{\partial^{2}f}{\partial x\partial z} & \frac{\partial^{2}f}{\partial y\partial x} & \frac{\partial^{2}f}{\partial y^{2}} & \frac{\partial^{2}f}{\partial y\partial z} & \frac{\partial^{2}f}{\partial z\partial x} & \frac{\partial^{2}f}{\partial z\partial y} & \frac{\partial^{2}f}{\partial z^{2}}\\ -\frac{\partial}{\partial x} & \frac{\partial^{3}f}{\partial x^{3}} & \frac{\partial^{3}f}{\partial x^{2}\partial y} & \frac{\partial^{3}f}{\partial x^{2}\partial z} & \frac{\partial^{3}f}{\partial x\partial y\partial x} & \frac{\partial^{3}f}{\partial x\partial y^{2}} & \frac{\partial^{3}f}{\partial x\partial y\partial z} & \frac{\partial^{3}f}{\partial x\partial z\partial x} & \frac{\partial^{3}f}{\partial x\partial z\partial y} & \frac{\partial^{3}f}{\partial x\partial z^{2}}\\ -\frac{\partial}{\partial y} & \frac{\partial^{3}f}{\partial y\partial x^{2}} & \frac{\partial^{3}f}{\partial y\partial x\partial y} & \frac{\partial^{3}f}{\partial y\partial x\partial z} & \frac{\partial^{3}f}{\partial y^{2}\partial x} & \frac{\partial^{3}f}{\partial y^{3}} & \frac{\partial^{3}f}{\partial y^{2}\partial z} & \frac{\partial^{3}f}{\partial y\partial z\partial x} & \frac{\partial^{3}f}{\partial y\partial z\partial y} & \frac{\partial^{3}f}{\partial y\partial z^{2}}\\ -\frac{\partial}{\partial z} & \frac{\partial^{3}f}{\partial z\partial x^{2}} & \frac{\partial^{3}f}{\partial z\partial x\partial y} & \frac{\partial^{3}f}{\partial z\partial x\partial z} & \frac{\partial^{3}f}{\partial z\partial y\partial x} & \frac{\partial^{3}f}{\partial z\partial y^{2}} & \frac{\partial^{3}f}{\partial z\partial y\partial z} & \frac{\partial^{3}f}{\partial z^{2}\partial x} & \frac{\partial^{3}f}{\partial z^{2}\partial y} & \frac{\partial^{3}f}{\partial z^{3}} -\end{array} -You can see that the sum of all third order derivatives, given in this case by -$$\left(\frac{\partial}{\partial x}+\frac{\partial}{\partial y}+\frac{\partial}{\partial z}\right)^{3}f$$ -Can be calculated simply by summing the entries of this third order table. It's obvious how this can be used for a function of $k$ variables. In each step the table will have $k\times(number\: of\: input\: functions)$ resulting derivatives. Starting with just $f$ as input function, you will have $k^n$ $n^{th}$ order derivatives. 
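-As a quick numerical cross-check (an added R sketch, not part of the original argument; the variable names are ad hoc), we can enumerate all ordered strings of $n$ derivatives in $k$ variables and then identify strings that are permutations of each other: -k <- 3; n <- 3 -ordered <- expand.grid(rep(list(1:k), n))  # all ordered derivative strings -nrow(ordered)  # k^n = 27, as above -unordered <- unique(t(apply(ordered, 1, sort)))  # mixed partials commute -nrow(unordered)  # 10 -choose(k + n - 1, n)  # 10 again, matching the multiset count below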
-In the fully commutative case, on the other hand, I agree with Joriki. As he said, the procedure is explained in Stars and Bars and Multisets-Counting Multisets. And the result is -$$N_{PD}=\left(\left(\begin{array}{c} -k\\ -n -\end{array}\right)\right)=\left(\begin{array}{c} -k+n-1\\ -n -\end{array}\right)=\left(\begin{array}{c} -k+n-1\\ -k-1 -\end{array}\right)$$ -where $\left(\left(\begin{array}{c} -k\\ -n -\end{array}\right)\right)$ is the multiset counting number.<|endoftext|> -TITLE: Can the inscribed angle theorem be generalized to solid angles in 3D? And beyond to n-dimensional space? -QUESTION [11 upvotes]: The "inscribed angle theorem" is a common 2-dimensional plane geometry fact. It states that for a circle, the angle formed between any two points on the circumference with the center is twice the angle formed by those two points with any other point on the circumference. I will not elaborate on a proof or further details here, but instead provide a link and image from the Wikipedia page on this topic, where the basic theorem is proved. -IMAGE: The inscribed angle θ is half of the central angle 2θ that subtends the same arc on the circle (magenta). -Wikipedia Article on inscribed angle theorem -My question is whether this simple 2D geometric concept can be adapted to solid angles in 3D? And perhaps beyond to n dimensions? -The 2D case dealt with 3 points on the circumference of a circle (2 that defined the arc, and the 3rd point that formed the angle that was half the angle at the center). In 3D, imagine a sphere rather than a circle, and consider 4 points instead of 3. Let 3 of the 4 points form the base of a tetrahedron, then consider two distinct cases. -In the first case the 4th point forms the tip of the tetrahedron. In the other case the center of the sphere defined by the 4 points forms the tip of the tetrahedron. In either case the base of the tetrahedron, and the associated spherical triangle on the surface of the sphere, is the same. However there are two different solid angles at the tip of the tetrahedron in each case -- one for when the tip is at the center of the sphere and the other when the tip is at the 4th point defining the sphere. -Are these solid angles related (one being half of the other, or some other similar relation) as in the inscribed angle theorem in 2D? If they are related in some way, is this a common fact/theorem in solid geometry? Does it have a name like the "inscribed angle theorem" in 2D? What is the relation between these two solid angles? -Is there a similar concept in 4D, or n-dimensional, space? (I am not even sure if there is a solid angle concept in arbitrary n-dimensional space.) If there is a concept of solid angles in higher dimensions, is there a predictable relation between these two angles in a given n-dimensional space? For example, take a particular dimension like 10-dimensional space and 11 points in that 10D space: is there a way to easily find the "solid angle" at the 11th point, and then use a fixed relation to know the "solid angle" at the center of the 10D hypersphere going out to the other 10 points on the surface of the hypersphere? - -REPLY [6 votes]: The answer to your question is no. The inscribed angle theorem does not work for inscribed solid angles in a sphere. -To see why, I will start by stating an equivalent version of the inscribed angle theorem. Consider the following figure. - -This figure shows the point projection of a circle $C_1$ onto a circle $C_2$ of twice the radius.
The projection point $P$ lies on the circumference of $C_1$ and is the center of $C_2$. In this situation, the inscribed angle theorem can be stated as follows: - -The projection from $P$ of $C_1$ onto $C_2$ is length-preserving. - -Note that this projection maps $C_1$ onto the lower half of $C_2$. The figure above shows two red arcs that have the same length under this projection. -So the proper question here is whether an analogous statement is true for spheres. Imagine two spheres $S_1$ and $S_2$, where $S_2$ has twice the radius of $S_1$, and $S_1$ is tangent to $S_2$ on the inside. If $P$ is the center point of $S_2$, then the right question to ask is the following: - -Is the projection from $P$ of $S_1$ onto $S_2$ area-preserving? - -Since doubling the radius of a sphere quadruples its area, this is the same as asking whether a solid inscribed angle is equal to one quarter of the corresponding central angle. -Note that this is certainly true infinitesimally near the point of tangency. That is, a solid inscribed angle that lies within an $\epsilon$-neighborhood of a diameter of a sphere is approximately 1/4 of the corresponding solid central angle. -Unfortunately, the answer to this question is no. This involves a simple calculation. Let the two spheres be -$$ -x^2+y^2+z^2=1,\qquad x^2+y^2+(z-1)^2 = 4. -$$ -So $P = (0,0,1)$ and the point of tangency is $(0,0,-1)$. We can parametrize the first sphere $S_1$ using spherical coordinates: -$$ -\Phi(\theta,\phi) \,=\, (\cos\theta\sin\phi,\,\sin\theta\sin\phi,\,\cos\phi). -$$ -It is easy to check that the projection from $P$ onto $S_2$ maps the point $\Phi(\theta,\phi)$ to the point -$$ -\Psi(\theta,\phi) \,=\, \bigl(2 \cos \theta \cos(\phi/2), 2\sin \theta \cos(\phi/2), 1-2\sin(\phi/2)\bigr) -$$ -on $S_2$. But -$$ -\left\|\frac{\partial \Phi}{\partial \phi} \times \frac{\partial \Phi}{\partial \theta}\right\| \,=\, \sin \phi -$$ -and -$$ -\left\|\frac{\partial \Psi}{\partial \phi} \times \frac{\partial \Psi}{\partial \theta}\right\| \,=\, 2\cos(\phi/2). -$$ -These are not the same, so the given map is not area-preserving. Indeed, since -$$ -\frac{2 \cos(\phi/2)}{\sin\phi} \,=\, \csc(\phi/2) -$$ -area is locally being multiplied by $\csc(\phi/2)$. As expected, this is equal to $1$ at the point of tangency (where $\phi=\pi$), but approaches infinity as we move toward $P$ (with $\phi\to 0$). -Practically speaking, what this means is that, given a region $R$ on a sphere and a point $P$ on the sphere that does not lie in $R$, the ratio -$$ -\frac{\text{central solid angle for }R}{\text{inscribed solid angle from }P\text{ to }R} -$$ -is close to $4$ when $R$ is small and diametrically opposite the point $P$, but approaches infinity when $R$ is small and very close to $P$. More specifically, the above considerations show that this ratio is the average value over $R$ of $4\csc(\phi/2)$, where $\phi$ is the central angle from $P$ to a point in $R$.<|endoftext|> -TITLE: Can we always find homotopy of two paths which lies "between" the paths? -QUESTION [6 upvotes]: Let $\gamma_0,\gamma_1:[0,1]\to\mathbb{R}^2$ be paths such that $\gamma_0(0)=\gamma_1(0)$ and $\gamma_0(1)=\gamma_1(1)$. I wish to show that there is a homotopy $\Gamma:[0,1]\times[0,1]\to\mathbb{R}^2$ from $\gamma_0$ to $\gamma_1$ that satisfies the following: - -For all $(s,t)\in[0,1]\times[0,1]$, $\Gamma(s,t)$ is not in the unbounded face of $\gamma_0([0,1])\cup\gamma_1([0,1])$.
- -PS: If the answer is no, are there additional restrictions which we could place on $\gamma_0$ and $\gamma_1$ which would allow this? (Ex. rectifiable, differentiable, etc.) - -REPLY [4 votes]: If you allow the contour of the union of those two paths to form a simple closed curve*, this is true, and can be seen as a consequence of the Schoenflies Theorem together with the fact that the disk is contractible. - -*I think this is the case, since you are talking about "unbounded face".<|endoftext|> -TITLE: How to read the Jacobian (determinant) shorthand notation, and why is it so cryptic? -QUESTION [5 upvotes]: Let's say we have a function $f : \mathbb{R}^3\rightarrow \mathbb{R}^3$, as defined below, with its value being denoted as $(a, b, c)$ for convenient reference. -$$f(x,y,z) = (x^2, y^2, z^2) = (a, b, c)$$ -The Jacobian matrix of $f$ and subsequently the Jacobian determinant would then be: -$$ -\begin{bmatrix} -a'\\ -b'\\ -c' -\end{bmatrix} -= -\begin{bmatrix} -\frac {\partial a}{\partial x} & \frac {\partial a}{\partial y} & \frac {\partial a}{\partial z}\\ -\frac {\partial b}{\partial x} &\frac {\partial b}{\partial y} & \frac {\partial b}{\partial z}\\ -\frac {\partial c}{\partial x} &\frac {\partial c}{\partial y} & \frac {\partial c}{\partial z} -\end{bmatrix} -= -\begin{bmatrix} -2x & 0 &0 \\ -0 & 2y &0 \\ -0 & 0 &2z -\end{bmatrix} -$$ -$$ -\begin{vmatrix} -a'\\ -b'\\ -c' -\end{vmatrix} -= -\begin{vmatrix} -\frac {\partial a}{\partial x} & \frac {\partial a}{\partial y} & \frac {\partial a}{\partial z}\\ -\frac {\partial b}{\partial x} &\frac {\partial b}{\partial y} & \frac {\partial b}{\partial z}\\ -\frac {\partial c}{\partial x} &\frac {\partial c}{\partial y} & \frac {\partial c}{\partial z} -\end{vmatrix} -= -\begin{vmatrix} -2x & 0 &0 \\ -0 & 2y &0 \\ -0 & 0 &2z -\end{vmatrix} -= -2x \cdot 2y \cdot 2z -$$ -Ok sure, this makes sense. It's kind of just like normal calculus but expanding everything out into a matrix. -Now I look at the shorthand notation for the Jacobian determinant: -$$ -\frac {\partial(a,b,c)}{\partial(x,y,z)} = 2x \cdot 2y \cdot 2z -$$ -Where did this even come from? Why are there partials there? How does it convey the same amount of information? How do I even read this "shorthand notation"? It just seems so left field - out of nowhere. - -How do I read it? -How did this arguably rather cryptic notation come about? - -REPLY [5 votes]: It is a little awkward, but using the partial derivative notation with vectors basically means: -$$\begin{align}\dfrac{\partial (a, b, c)}{\partial (x, y, z)} ~=~& \det\left((\dfrac{\partial}{\partial x}, \dfrac{\partial}{\partial y}, \dfrac{\partial}{\partial z})^\top(a,b,c) \right)^\top -\\[1ex] =~& \begin{vmatrix}\dfrac{\partial a}{\partial x}&\dfrac{\partial a}{\partial y}&\dfrac{\partial a}{\partial z}\\ \dfrac{\partial b}{\partial x}&\dfrac{\partial b}{\partial y}&\dfrac{\partial b}{\partial z}\\\dfrac{\partial c}{\partial x}&\dfrac{\partial c}{\partial y}&\dfrac{\partial c}{\partial z}\end{vmatrix} -\end{align}$$ -The notation summarises the essentials of the Jacobian determinant. You are taking the partial derivatives of $a, b, c$ each with respect to $x,y,z$, constructing a matrix of the result and evaluating its determinant.
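-$$\dfrac{\partial (a, b, c)}{\partial (x, y, z)}$$ -To see numerically what the notation computes, here is a small added R sketch (the function name J is ad hoc, not from the original answer): -J <- function(x, y, z) diag(c(2 * x, 2 * y, 2 * z))  # ad hoc: Jacobian matrix of (x^2, y^2, z^2) -det(J(1, 2, 3))  # 48, i.e. (2*1) * (2*2) * (2*3) = 2x * 2y * 2z at (1, 2, 3)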
<|endoftext|> -TITLE: Find max of $f(x)=12x^2\int_0^1yf(y)dy+ 20x\int_0^1y^2f(y)dy+4x$ -QUESTION [5 upvotes]: Let $$f(x)=12x^2\int_0^1yf(y)dy+ 20x\int_0^1y^2f(y)dy+4x$$ - Find the maximum value of $f(x)$ - -I wrote the two integrals as $I_1$ and $I_2$ since they are constants, differentiated the equation and set it to $0$. Then I tried writing one integral in terms of the other. But I could not get the required answer. -I am in 12th grade and it came up in one of my tests. - -REPLY [4 votes]: $ \int_{0}^{1} x f(x) dx = - \left(\int_{0}^{1} 12x^{3} dx \right) - \left( \int_{0}^{1} y f(y) dy \right)+ - \left( \int_{0}^{1} 20x^{2} dx \right) - \left( \int_{0}^{1} y^{2} f(y) dy \right)+\int_{0}^{1} 4x^{2} dx$ -$ \int_{0}^{1} x^{2} f(x) dx = - \left( \int_{0}^{1} 12x^{4} dx \right) - \left( \int_{0}^{1} y f(y) dy \right)+ - \left( \int_{0}^{1} 20x^{3} dx \right) - \left( \int_{0}^{1} y^{2} f(y) dy \right)+ - \int_{0}^{1} 4x^{3} dx$ -Therefore, -$$I_{1} = 3I_{1}+\frac{20}{3} I_{2}+\frac{4}{3}$$ -$$I_{2} = \frac{12}{5}I_{1}+5I_{2}+1$$ -On solving, -$$I_{1}=-\frac{1}{6}$$ -$$I_{2}=-\frac{3}{20}$$ -Hence, $$f(x)=-2x^{2}+x=\frac{1}{8}-2\left( x-\frac{1}{4} \right)^{2}$$<|endoftext|> -TITLE: Probability of obtaining a heads on the coin before a 1 or 2 on the die? -QUESTION [7 upvotes]: I came across this question recently and can't seem to find the correct approach. -Any help would be appreciated! - -An experiment consists of first tossing an unbiased coin and then rolling a fair die. -If we perform this experiment successively, what is the probability of obtaining a heads on the coin before a $1$ or $2$ on the die? -$\mathbb P(\textrm{Heads})=\frac12$ -$\mathbb P(1,2)=\frac13$ -If $A_i$ represents the event that a $1$ or a $2$ is rolled on the $i^{th}$ roll, then I have to find the following: -$$\mathbb P\left(\bigcup^{\infty}_{i=1} A_i\right).$$ - -But I am not sure how to find this and also incorporate the probability of landing on heads before this... -Am I approaching this correctly or should I be assigning random variables and working from there? - -REPLY [3 votes]: What you are describing is a series. -You could think of this as a game between Alice and Bob, where Alice flips the coin (wins with a head) and Bob rolls the die (wins with 1 or 2). Essentially you are asking what is the probability that Alice wins before Bob: -$$P(A\text{ wins first})=\sum_{n=1}^{\infty}\left(\frac12\cdot\frac23\right)^{n-1}\cdot\frac12=\frac{1/2}{1-1/3}=\frac34$$ -since the coin is tossed first in each round, and Alice wins in round $n$ exactly when the first $n-1$ rounds produce a tail and no $1$ or $2$ while round $n$ produces a head.<|endoftext|> -TITLE: Sum of inverse of Fibonacci numbers -QUESTION [9 upvotes]: If $F(n)$ is the $n$th Fibonacci number, how can I prove that: -$$\sum_{i=1}^{\infty} \frac{1}{F(i)}\approx 3.36\, .$$ - -REPLY [7 votes]: $$ -\begin{align} -F_n -&=\frac{\phi^n-(-1/\phi)^n}{\sqrt5}\\ -&=\frac{\phi^n}{\sqrt5}\left(1-\left(-\frac1{\phi^2}\right)^n\right) -\end{align} -$$ -Therefore, -$$ -\begin{align} -\sum_{n=1}^\infty\frac1{F_n} -&=\sum_{n=1}^\infty\frac{\sqrt5}{\phi^n}\left(1+\left(-\frac1{\phi^2}\right)^n+\left(-\frac1{\phi^2}\right)^{2n}+\left(-\frac1{\phi^2}\right)^{3n}+\cdots\right)\\ -&=\sqrt5\left(\frac1{\phi-1}-\frac1{\phi^3+1}+\frac1{\phi^5-1}-\cdots\right)\\ -&=\sqrt5\sum_{k=0}^\infty\frac{(-1)^k}{\phi^{2k+1}-(-1)^k} -\end{align} -$$ -Since $\frac{\sqrt5}{\phi^{19}+1}=0.0002392$, -$$ -\sqrt5\sum_{k=0}^8\frac{(-1)^k}{\phi^{2k+1}-(-1)^k}=3.3600587 -$$ -is too high by less than $0.0002392$.<|endoftext|> -TITLE: What is the opposite category of $\operatorname{Top}$? -QUESTION [22 upvotes]: My question is rather imprecise and open to modification.
I am not entirely sure what I am looking for but the question seemed interesting enough to ask: -The opposite category of rings is the category of affine schemes. This is usually thought of as the category of spaces. Can we run the construction backwards for categories usually thought of as containing spaces? -For instance, does $\operatorname{Top}^{\operatorname{op}}$ have a nice description as some "algebraic" category? -Note that it does not seem easy to describe the opposite category of all schemes. Therefore, the above question might be asking too much. Perhaps the following is a more tractable (or not) question: -Can we find an "algebraic" category $C$ such that we can embed $C^{\operatorname{op}}$ in $\operatorname{Top}$ such that every topological space can be covered by objects in $C^{\operatorname{op}}$? Perhaps one would like to replace this criterion of being covered by objects by a more robust notion in general. -One can repeat the question for other categories of spaces like: - -Category of manifolds (perhaps closer to schemes than general topological spaces) -Compactly generated spaces -Simplicial Sets - -and so on. A perhaps interesting example is the category of finite sets; its opposite category is the category of finite Boolean algebras. - -REPLY [3 votes]: The description of $\mathbf{Top}^{\mathrm{op}}$ as a quasi-variety (mentioned in the answer by arsmath) has a closely related formulation, which appears in - -J. Adamek, M. C. Pedicchio, A remark on topological spaces, grids, and topological systems. Cahiers de Topologie et Géométrie Différentielle Catégoriques 38.3 (1997): 217-226 - -Namely, $\mathbf{Top}^{\mathrm{op}}$ is the category of monomorphisms $F \to U(B)$, where $F$ is a frame and $B$ is a complete atomic Boolean algebra, and $U(B)$ is its underlying frame. Both the category of frames and the category of CABAs are monadic over $\mathbf{Set}$ and hence infinitary algebraic. It follows that $\mathbf{Top}^{\mathrm{op}}$ is essentially infinitary algebraic. -This can actually be used to give a universal property of $\mathbf{Top}$ as a complete category, see Large limit sketches and topological space objects (Section 9).<|endoftext|> -TITLE: Why are turns not used as the default angle measure? -QUESTION [28 upvotes]: Why is $2\pi$ radians not replaced by $1$ turn in formulas? -The majority of them would be simpler. If such a replacement was proposed earlier, why was it declined? - -REPLY [3 votes]: Just a complement to zahbaz's answer. -With angles in radians, you get the nice formulas: -sin' = cos -cos' = -sin - -sin(x) = 1/(2i) * (exp(ix) - exp(-ix)) -cos(x) = 1/2 * (exp(ix) + exp(-ix)) - -Of course those are mathematical considerations, and that's the reason why in common life we use degrees. -But there are other units. In artillery, the thousandth (millième in French) is used. You have 6400 of them in one turn (why not...) but as an approximation, it gives an elevation of 1m at 1km of distance. If you have this in binoculars and can guess the height of something, you can easily compute a distance - even a military can...<|endoftext|> -TITLE: An integral for $2\pi+e-9$ -QUESTION [8 upvotes]: Motivation -Lucian asked about the almost-integer $2\pi+e\approx9$ in a comment to a partially answered why question about $e\approx H_8$. This is more involved than approximations to $\pi$ and logarithms because two transcendental constants are included, as in $e^\pi-\pi\approx20$.
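- -As an added numeric aside (one line of R, not in the original post), the gap is indeed tiny and positive: -2*pi + exp(1) - 9  # 0.001467136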
- -Tried so far -An answer can be crafted from integrals related to $\pi\approx\frac{22}{7}$ and $e\approx\frac{19}{7}$ -$$\int_0^1 \frac{x^4(1-x)^4}{1+x^2}dx =\frac{22}{7}-\pi$$ -$$\frac{1}{14}\int_0^1 x^2(1-x)^2e^xdx=e-\frac{19}{7}$$ -to obtain - -$$\int_0^1 x^2 (1-x)^2 \left(\frac{e^x}{14}-\frac{2 x^2 (1-x)^2}{1+x^2}\right) dx = 2\pi+e-9 $$ - -The visual representation of this integral provided by WolframAlpha shows that $2\pi+e-9$ is positive and small (the integrand is between $0$ and $0.004$ for $0<x<1$).<|endoftext|> -TITLE: Limits of finite structures - first order logic -QUESTION [5 upvotes]: Assume that $\mathcal{C}=\{M_i:i\in I\}$ is an infinite collection of different finite $\mathcal{L}$-structures in a first-order language $\mathcal{L}$. The question is: - - -What kind of infinite "limits" can we produce from the class $\mathcal{C}$? -What kind of "transference principles" are valid for the different limits that we can take? -What advantages could exist by working in the limit $\mathcal{M}$ instead of working directly on the class $\mathcal{C}$? - - -Perhaps the prototypical example is the ultraproduct construction, for which the transference principle is Łoś's Theorem and which has the advantage of, among other things, being an infinite structure and also being equipped with some notion of measure for definable sets. -However, I have heard also about profinite structures (inverse limits of finite structures), and about Fraïssé limits, among others. But I am not sure what their main properties are, or if there exists some sort of transference principle (perhaps only for some kind of formulas). I would appreciate it if anyone could give any information or reference to look at. -The ultimate answer would be a complete classification of the possible limits together with their properties, but of course I understand that such a thing could only exist in the platonic world. Thus, any partial answer will be appreciated as well. - -REPLY [4 votes]: Since the question is fairly broad, the answer below may be a bit trivial, but there are more interesting examples given afterwards with proper references. -Let's take a converging sequence $(M_n) \to \mathcal{M}$ (all that follows also applies to random sequences). -To precisely define what convergence means, let's define the metric -$$d_{FO}(M_1,M_2) = \begin{cases} - 2^{-\min(q(\varphi))} & \mbox{for } M_1 \models \varphi \wedge M_2 \not\models \varphi\\ - 0 & \mbox{otherwise}\\ - \end{cases}$$ -(where $q(\varphi)$ is the quantifier depth of the formula). So this metric depends on the smallest formula separating two structures. -Let's assume we have properly defined our space so that we have as usual that a sequence is convergent iff it is Cauchy. -Two structures are elementarily equivalent if they are at distance $0$. -Since finite structures are categorical, the equivalence classes of finite structures are singletons, but the equivalence classes of infinite structures may be infinite. -Call an infinite structure a $*$-limit if it can be obtained as the limit of a converging sequence of finite structures. -Now for the consequences of these definitions: -(1). All $*$-limits have the finite model property (FMP). However, characterizing theories with the FMP is a hard task. -(2). Similarly, by definition you have that $\mathcal{M} \models \varphi$ implies that there exists an $i$ such that for all $n>i$, $M_n \models \varphi$. Getting an estimate on $i$ depends on the rate of convergence. -(3).
I assume you mean what the advantage is of working on $\mathcal{M}$ rather than $(M_n)$. Well, if you don't have estimates on the rate of convergence, then when you have $M_i \models \varphi$, you don't know if it isn't an anomaly because you took too small a structure. So some tests might work better on the limit since it may be free of such artifacts. However, non-first-order parameters (e.g. connectedness) are not continuous in the limit, so it may not be accurate to use for all estimations. -An interesting example (or rather a counter-example) of the link between Fraïssé limits and a class of finite structures is the class of triangle-free graphs. The limit of the sequence $(T_n)$, where $T_n$ is the uniform distribution on all triangle-free graphs on $n$ vertices, is $\mathcal{B}$, the bipartite equivalent of the Rado graph. This happens because almost all triangle-free graphs are bipartite. However, the Fraïssé limit of the triangle-free graphs is not bipartite, so there is no transference principle at all. This contrasts sharply with the class of all graphs and the Rado graph. See the details, proofs and wonders in [0]. -For a classification of all possible limits... there are a lot of them, right? -However there are some interesting results of more restricted scope. Way too much to give an overview here, but we can give two interesting examples. There's a classification of reducts of the Rado graph [1], and Ove Ahlman's work [2], which precisely studies the class of structures that can be obtained as limits of sequences (with some additional hypotheses). -[0] Kolaitis, Promel, Rothschild, "$K_{l+1}$-free graphs: asymptotic structure and a 0-1 law", Trans. Amer. Math. Soc., 303 (1987). -[1] Simon Thomas, "Reducts of the Random Graph", J. Symbolic Logic 56 (1991). -[2] Ove Ahlman, "Simple structures axiomatized by almost sure theories", Annals of Pure and Applied Logic 167 (2016).<|endoftext|> -TITLE: Why is it more efficient to compute the modular exponentiation by calculating to the power of two and not three for example? -QUESTION [5 upvotes]: I learned about modular exponentiation from this website, and at fast modular exponentiation they calculate the modulo of the number to the power of two and then they repeat this step. Why not calculate to the power of three? -https://www.khanacademy.org/computing/computer-science/cryptography/modarithmetic/a/fast-modular-exponentiation - -REPLY [2 votes]: In order to raise x to the power N - modularly or otherwise - by repeated squaring or cubing, one has to add corrective multiplications to make the final exponent of the result equal to N in base 2 or 3, respectively. This means one has to extract the base-2 (resp. base-3) digits of N. The advantage of binary is that, because of our hardware design choices, we already have the base-2 digits of N; extracting them requires only SHIFT and AND instructions. For base-3 one would have to divide by 3, a significantly more expensive operation. [this assumes N is a variable input; if N is a pre-chosen constant one can of course precalculate the required steps without performing any divisions] -If we were living in a world where our design choices favored some form of ternary, maybe this question would have been asking why base-2 isn't preferred over base-3.
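:-) -To make the binary method concrete, here is a short added square-and-multiply sketch in R (the function name powmod is ad hoc, and plain doubles limit it to moderate moduli): -powmod <- function(x, n, m) {  # ad hoc helper: x^n mod m -r <- 1 -x <- x %% m -while (n > 0) { -if (n %% 2 == 1) r <- (r * x) %% m  # corrective multiplication for a 1-bit -x <- (x * x) %% m  # repeated squaring -n <- n %/% 2  # move to the next base-2 digit (a cheap shift) -} -r -} -powmod(5, 117, 19)  # 1, since the order of 5 mod 19 is 9, which divides 117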
:-)<|endoftext|>
-TITLE: Simplifying integral $\int_4^3 \sqrt{(x - 3)(4 - x)} dx$ by an easy approach
-QUESTION [6 upvotes]: So I have this integral $$\int_4^3 \sqrt{(x - 3)(4 - x)} dx$$
-I know I can easily evaluate it by first converting it into the form $$\int \sqrt{a^2 + x^2} dx$$ and then using the direct formula for this.
-But since this one's a definite integral, evaluating it that way gets very long and takes time to solve. Also, the probability of committing a mistake is high.
-I was wondering if there's an easy approach to evaluate such integrals without doing this much maths.
-This question appeared in my exam for just 2 marks and it took me a long time to solve. I don't think this much calculation is justified for just 2 marks.
-Kindly help me with an easy approach.
-
-REPLY [2 votes]: Here is an elaboration.
-The given integral is $\int_4^3 \sqrt{(x-3)(4-x)} \;\mathrm{d}x$.
-If $x=3\sin^2\theta+4\cos^2\theta$,
-then the integral runs over $0\leq\theta\leq\frac{\pi}{2}$.
-$\mathrm{d}x=(3\times 2\times \sin\theta\times\cos\theta-4\times2\times\cos\theta\times\sin\theta)\,\mathrm{d}\theta$
-$\mathrm{d}x=-2\sin\theta\cos\theta\;\mathrm{d}\theta$
-Also $x-3=\cos^2\theta$ and $4-x=\sin^2\theta$, so
-$$I=\int_0^{\frac{\pi}{2}}\cos\theta\times\sin\theta\times(-2\sin\theta\cos\theta)\,\mathrm{d}\theta$$
-If you know the beta function, it's a pretty straightforward integral, but even if you don't, just use $2\sin\theta\cos\theta=\sin(2\theta)$<|endoftext|>
-TITLE: Hoffman-Wielandt Theorem Proof
-QUESTION [6 upvotes]: Exercise 3.3 of Izenman's Modern Multivariate Statistical Techniques: let $\mathbf{A}$, $\mathbf{B}$ be symmetric $J \times J$ matrices, with eigenvalues $\{\lambda_j(\mathbf{A})\}$ and $\{\lambda_j(\mathbf{B})\}$ respectively, arranged in descending order with respect to $j$ (so $\lambda_1$ is the largest, $\lambda_J$ is the smallest for both matrices). Prove that $$\sum_{j=1}^{J}\left[\lambda_j(\mathbf{A}) - \lambda_j(\mathbf{B})\right]^2 \leq \text{tr}\{(\mathbf{A}-\mathbf{B})(\mathbf{A}-\mathbf{B})^{T}\}\text{.}$$
-The hint says to use spectral decomposition, so
-$$\begin{align*}
-\mathbf{A} &= \sum_{j=1}^{J}\lambda_j(\mathbf{A})\mathbf{v}_j(\mathbf{A})\mathbf{v}^T_j(\mathbf{A}) \\
-\mathbf{B} &= \sum_{j=1}^{J}\lambda_j(\mathbf{B})\mathbf{v}_j(\mathbf{B})\mathbf{v}^T_j(\mathbf{B})
-\end{align*}$$
-where the $\mathbf{v}_j(\cdot)$ denote the eigenvectors of the matrix $\cdot$ corresponding to $\lambda_j$. Then it says to express $$\text{tr}\{(\mathbf{A}-\mathbf{B})(\mathbf{A}-\mathbf{B})^{T}\}$$
-in terms of the decomposition. I have
-$$\mathbf{A}-\mathbf{B} = \sum_{j=1}^{J}[\lambda_j(\mathbf{A})\mathbf{v}_j(\mathbf{A})\mathbf{v}^T_j(\mathbf{A})-\lambda_j(\mathbf{B})\mathbf{v}_j(\mathbf{B})\mathbf{v}^T_j(\mathbf{B})] $$
-and
-$$(\mathbf{A}-\mathbf{B})^{T} = \mathbf{A}^{T}-\mathbf{B}^{T} = \sum_{j=1}^{J}[\lambda_j(\mathbf{A})\mathbf{v}^T_j(\mathbf{A})\mathbf{v}_j(\mathbf{A})-\lambda_j(\mathbf{B})\mathbf{v}^T_j(\mathbf{B})\mathbf{v}_j(\mathbf{B})]\tag{1}\text{.}$$
-I suppose we could assume the vectors are normalized, so we get $\mathbf{v}^T_j(\mathbf{A})\mathbf{v}_j(\mathbf{A}) = \mathbf{v}^T_j(\mathbf{B})\mathbf{v}_j(\mathbf{B}) = 1$. But I'm not sure what else to do. Direct multiplication looks like a very messy approach (which would possibly involve induction on $J$), but I thought I'd ask here for suggestions.
-
-REPLY [13 votes]: Symmetric matrices are orthogonally diagonalisable. 
Let $A=U\Lambda U^T$ and $B=V\Sigma V^T$, where $U,V$ are real orthogonal and the eigenvalues in the two diagonal matrices $\Lambda,\Sigma$ are arranged in descending order. Let also $W=U^TV$. Then the inequality in question is equivalent to
-$$
-\operatorname{tr}\left((\Lambda-\Sigma)^2\right)
-\le \operatorname{tr}\left((U\Lambda U^T-V\Sigma V^T)^2\right).\tag{1}
-$$
-Expanding the square terms on both sides, we may in turn rewrite $(1)$ as
-$$
-\operatorname{tr}(\Lambda W\Sigma W^T)
-\le \operatorname{tr}(\Lambda\Sigma).\tag{2}
-$$
-It is well known that the LHS of the above inequality is maximised when $W=I$ (and therefore the inequality is true). To see this, let $S$ be the entrywise square of the real orthogonal matrix $W$. Then $S$ is a doubly stochastic matrix and the LHS is equal to $\sum_{i,j}\lambda_i\sigma_js_{ij}$, which is a linear function in the entries of $S$. By the Birkhoff-von Neumann theorem, the set of all doubly stochastic matrices is the convex hull of all permutation matrices. Therefore the LHS of $(2)$ is maximised when $W$ is a permutation matrix. It is easy to see that among all permutation matrices, $W=I$ gives the global maximum.<|endoftext|>
-TITLE: Distance to origin of tangent plane to ellipsoid
-QUESTION [5 upvotes]: We have an $n$-dimensional ellipsoid described by: $$\frac{x_1^2}{a_1^2}+\dots+\frac{x_n^2}{a_n^2}=1$$ and we construct the hyperplane through any point $x$ on the ellipsoid which is tangent to the ellipsoid at $x$.
-Prove that $D(x)$, the distance from this hyperplane to the origin, is: $$D(x)=\frac{1}{\sqrt{\frac{x_1^2}{a_1^4}+\dots+\frac{x_n^2}{a_n^4}}}$$
-I know that the plane tangent to the ellipsoid at a point on the ellipsoid we can call $y_0=(y_1,\dots,y_n)$ can be written as: $$\frac{x_1y_1}{a_1^2}+\dots+\frac{x_ny_n}{a_n^2}=1$$
-I don't see how to get from this description of the tangent plane to an expression for the distance to the origin only in terms of $x$. How do we deal with the fact that the description is at a specific point?
-Edit: fixed the equation for D
-
-REPLY [2 votes]: I believe there is an error in the formula for the distance.
-If $(x_1,\cdots,x_n)$ is your point on the ellipsoid, we can find a vector orthogonal to the ellipsoid by taking the gradient of the function
-\begin{equation}
-F = \frac{x_1^2}{a_1^2}+\frac{x_2^2}{a_2^2}+\cdots+\frac{x_n^2}{a_n^2}-1
-\end{equation}
-so we'll have
-\begin{equation}
-\nabla F =2 \left(\frac{x_1}{a_1^2},\frac{x_2}{a_2^2},\cdots,\frac{x_n}{a_n^2}\right)
-\end{equation}
-The distance of the tangent plane from the origin can be found by finding a number $\lambda$ such that $\lambda\nabla F$ belongs to the plane. In this way you'll get that the distance is $\|\lambda\nabla F\|$. 
-In particular, the point $y=\lambda\nabla F$ has components
-\begin{equation}
-y_i=2\lambda\frac{x_i}{a_i^2}
-\end{equation}
-so, substituting into the tangent-plane equation $\sum_i \frac{x_iy_i}{a_i^2}=1$, we should solve
-\begin{equation}
-2\lambda\left(\frac{x_1^2}{a_1^4}+\frac{x_2^2}{a_2^4}+\cdots+\frac{x_n^2}{a_n^4}\right)-1=0
-\end{equation}
-which yields
-\begin{equation}
-\lambda=\frac{1}{2}\left(\frac{x_1^2}{a_1^4}+\frac{x_2^2}{a_2^4}+\cdots+\frac{x_n^2}{a_n^4}\right)^{-1}
-\end{equation}
-Now,
-\begin{equation}
-\|\lambda\nabla F\|=\lambda\|\nabla F\|=\lambda \cdot 2\left(\frac{x_1^2}{a_1^4}+\cdots+\frac{x_n^2}{a_n^4}\right)^{1/2}
-\end{equation}
-so
-\begin{equation}
-\|\lambda\nabla F\|=\frac{1}{2}\left(\frac{x_1^2}{a_1^4}+\cdots+\frac{x_n^2}{a_n^4}\right)^{-1}\cdot 2\left(\frac{x_1^2}{a_1^4}+\cdots+\frac{x_n^2}{a_n^4}\right)^{1/2}=\left(\frac{x_1^2}{a_1^4}+\cdots+\frac{x_n^2}{a_n^4}\right)^{-1/2}
-\end{equation}<|endoftext|>
-TITLE: Pullback of a complex $ 1$-form
-QUESTION [5 upvotes]: Let $p = \operatorname{exp} : \mathbb{C} \to \mathbb{C}^*$ be a covering and $(U,z)$ a chart of $\mathbb{C}^*$ with $z = x + iy$. Let $\omega = dz/z$ be a one-form on $U$.
-Problem: Find the pullback $p^*\omega$.
-My try: We can write $p^* \frac{1}{z}dz = p^*\frac{1}{z} \, d(p^*z) = p^*\frac{1}{z} \, p^*(dz) = (\frac{1}{z} \circ p)(dz \circ p)$. I also tried making sense of $\frac{1}{z} \circ p (a)$ for some $a \in U$. Then we get
-$$
-\frac{1}{z} e^a = \frac{1}{x(e^a) + iy(e^a)}.
-$$
-I have no idea what makes sense to do or try. I have very little intuition for this.
-
-REPLY [3 votes]: Look at a restriction to an open chart where $p : (V,w) \to (U,z)$ is an isomorphism.
-Then $p^*(dz) = (dp/dw) dw = (\exp w) dw$ and so $p^*(\frac 1z dz) = \frac 1{p(w)}p^*(dz) = \frac 1{\exp w}(\exp w) dw = dw$.
-
-Notice that if $\tau$ is a translation $w \in (\Bbb C,+) \mapsto w +a \in (\Bbb C,+)$ then $\tau^*(dw) = dw$, so $dw$ is invariant under translation.
-Likewise, since $\exp$ is a group morphism $(\Bbb C,+) \to (\Bbb C^*,\times)$, if $\tau$ is a scalar multiplication $z \in (\Bbb C^*,\times) \mapsto bz \in (\Bbb C^*,\times)$ then $\tau^*(dz/z) = dz/z$, so $dz/z$ is invariant under scalar multiplication.
-So they are both very important $1$-forms for their respective groups.<|endoftext|>
-TITLE: Is there a functor without an adjoint?
-QUESTION [5 upvotes]: So I'm doing some research into category theory, and I don't know whether this is a trivial question or not, so I'll ask it anyway.
-Which functors don't have left adjoints?
-I know there must be some, as otherwise there would be no point to the adjoint functor theorem or to knowing that right adjoints preserve limits etc. But I just can't think of any. A couple of examples would be great, and even better if you could explain how they defy the adjoint functor theorem, but just knowing some to look into would be really helpful.
-Many thanks.
-
-REPLY [6 votes]: The Special Adjoint Functor Theorem (SAFT) states that a functor $T\colon\mathcal{A}\to\mathcal{B}$ from a good category $\mathcal{A}$ (a category which is locally small, complete, well-powered, and has a small cogenerating family) to a locally small category $\mathcal{B}$ has a left adjoint if and only if it is continuous (preserves small limits). There are many other formulations; see the nLab article.
-For example, take $\mathbf{Set}$. It satisfies SAFT, hence a functor $T\colon\mathbf{Set}\to\mathcal{B}$ has a left adjoint iff it preserves limits. 
Take a non-empty set $X$; then the functor $$(-\times X)\colon\mathbf{Set}\to\mathbf{Set}$$
-has no left adjoint (because it doesn't preserve products), but it actually has a right adjoint.
-Besides SAFT, there are plenty of cases where a functor has no left adjoint. For instance, take a functor $\mathbf{0}\to\mathcal{A}$ from the empty category to a non-empty category. Then it of course has no left (or right) adjoint, for a trivial reason.
-Note that if you find a functor without a right adjoint, then you automatically get a functor without a left adjoint (you can simply take its dual). For example, the functor $$\text{Mor}\colon\mathbf{Cat}\to\mathbf{Set},$$
-which maps a small category to its set of morphisms, has no right adjoint (because it doesn't preserve coequalizers). Hence, the dual functor $\text{Mor}^{\text{op}}$ has no left adjoint.<|endoftext|>
-TITLE: Problem about uniform continuity on $[0,\infty)$
-QUESTION [6 upvotes]: The question is the following:
-$f(x)$ is uniformly continuous on $[0, \infty)$ and for any $x > 0$, $\lim\limits_{n\to \infty}f(x+n) = 0$, where $n \in \mathbb{Z}_{>0}$. Prove that $\lim\limits_{x\to \infty} f(x) = 0$.
-Hint: Divide $[0, 1]$ into small equal-length intervals
-I do not understand what this question means. What exactly is it asking to be proved? Wouldn't it be obvious that $\lim\limits_{x\to \infty} f(x) = 0$ since $\lim\limits_{n\to \infty}f(x+n) = 0$? Also, what is the meaning of the hint, and how does it help?
-Thanks for your help! I am so confused...
-
-REPLY [3 votes]: Fix $\varepsilon>0$. Since $f$ is uniformly continuous, there exists $\delta>0$ such that $x,y\geq 0$ and $|x-y|<\delta$ implies that $|f(x)-f(y)|<\varepsilon$.
-Now choose $m\in\mathbb{N}$ such that $\frac{1}{m}<\delta$, and let $x_1=\frac{1}{m},x_2=\frac{2}{m},\dots,x_m=1$. By hypothesis, for each $1\leq i\leq m$ we can choose an $N_i$ such that $|f(x_i+n)|<\varepsilon$ for all $n\geq N_i$. Taking $N=\max\{N_1,\dots,N_m\}$, it follows that
-$$ |f(x_i+n)|<\varepsilon $$
-for all $n\geq N$, and all $1\leq i\leq m$.
-For the last step, suppose that $x>N$. There is an integer $n\geq N$ such that $y=x-n\in[0,1]$, hence there is some $i$ such that $|y-x_i|\leq \frac{1}{m}<\delta$. Therefore also
-$$|(x_i+n)-x|=|(x_i+n)-(y+n)|<\delta$$
-hence
-$$ |f(x)|\leq |f(x)-f(x_i+n)|+|f(x_i+n)|<\varepsilon+\varepsilon=2\varepsilon$$
-Since $\varepsilon$ was any positive real number, this shows that $\lim_{x\to\infty}f(x)=0$.<|endoftext|>
-TITLE: Is it really true that "if a function is discontinuous, automatically, it's not differentiable"?
-QUESTION [12 upvotes]: A while back, my calculus teacher said something that I find very bothersome. I didn't have time to clarify, but he said:
-
-If a function is discontinuous, automatically, it's not differentiable.
-
-I find this bothersome because I can think of many discontinuous piecewise functions like this:
-$$f(x) =
-\begin{cases}
-x^2, & \text{$x≤3$} \\
-x^2+3, & \text{$x>3$}
-\end{cases}$$
-Where $f'(x)$ would have two parts of the same function, and give:
-$$\begin{align}
-f'(x) = &&
-\begin{cases}
-2x, & \text{$x≤3$} \\
-2x, & \text{$x>3$}
-\end{cases} \\
-= && 2x
-\end{align}$$
-So I'm wondering, what exactly is wrong with this? Is there something I'm missing about what it means to be "continuous"? Or maybe, are there special rules for how to deal with the derivatives of piecewise functions that I don't know about.
-
-REPLY [12 votes]: Flagrantly ignoring your specific example: suppose a function $f$ is differentiable at a point $x$. 
Then by definition of differentiability:
-$$\lim_{h\rightarrow0}\frac{f(x+h) - f(x)}{h}$$
-must exist (and by this notation I mean the limits exist in both the positive and negative directions and are equal). Since the bottom of that fraction approaches $0$, it's necessary for the top also to approach $0$, or else the fraction diverges. But the top approaching $0$ is just the definition of $f$ being continuous at $x$. So a function that isn't continuous can't be differentiable.
-So, your example fails to be differentiable for the same reason that it fails to be continuous, which is that the top of that fraction tends to $3$, not $0$, when approached from the positive direction.<|endoftext|>
-TITLE: Showing that the square of Brownian motion, minus time, is a martingale
-QUESTION [9 upvotes]: What exactly are we supposed to do to show that what they have given is a martingale?
-If I try to follow through I'm getting a bit confused. In the third to last line I don't understand how they have simplified the first two terms to get $0$ and $t-s$ respectively.
-
-REPLY [2 votes]: For a Wiener process you have that
-$$(W_t - W_s) \sim N(0, t-s)$$
-Therefore the first term in the third to last line becomes $0$,
-and the second term becomes $t-s$.<|endoftext|>
-TITLE: If a random variable is independent from the two components of a random vector, are the random vector and the random variable independent?
-QUESTION [5 upvotes]: In my probability class I was asked this seemingly very tricky question dealing with random variables and vectors:
-
-Let $ X,Y,Z $ be random variables with PDFs (continuous) such that we know that Z and X are independent and the variables Z and Y are independent. We are asked to prove or disprove (give a counterexample) that the random vector $ (X,Y) $ and $ Z $ are independent.
-
-I have tried to prove it just with the basic identities and definitions but got nothing, so maybe it is false and we must give a counterexample? I do not even know how to deal with this, so I really need help. Thanks, all helpers.
-
-REPLY [2 votes]: Another example: Toss a fair die twice. Let $X$ be the number showing on the first toss, and $Y$ the number showing on the second toss. Let $Z$ be the indicator of the event that $X+Y$ is even. Then $X$ and $Z$ are independent, $Y$ and $Z$ are independent, but $Z$ is clearly a non-random function of $(X,Y)$ and so $Z$ is not independent of $(X,Y)$.<|endoftext|>
-TITLE: How to integrate $\int\limits_{0}^{\pi/2}\frac{dx}{\cos^3{x}+\sin^3{x}}$?
-QUESTION [13 upvotes]: I have $$\int\limits_{0}^{\pi/2}\frac{\text{d}x}{\cos^3{x}+\sin^3{x}}$$
-Tangent half-angle substitution gives a fourth-degree polynomial in the denominator that is difficult to factor.
-
-REPLY [8 votes]: One may write
-$$
-\begin{align}
-\int_0^{\pi/2}\frac{\text{d}x}{\cos^3{x}+\sin^3{x}}&=\int_0^{\pi/2}\frac{\text{d}x}{(\cos x+\sin x)(\cos^2{x}-\cos x\sin x+\sin^2{x})}
-\\\\&=\frac{\sqrt{2}}2\int_0^{\pi/2}\frac{\text{d}x}{\cos(x-\frac{\pi}4)\:(1-\frac12\sin(2x))}
-\\\\&=\frac{\sqrt{2}}2\int_{-\pi/4}^{\pi/4}\frac{\text{d}u}{\cos u\:\left(1-\frac12\cos(2u)\right)}
-\\\\&=\sqrt{2}\int_0^{\pi/4}\frac{\cos u\:\text{d}u}{\left(1-\sin^2u\right)\:\left(\sin^2u+\frac12\right)}
-\\\\&=\sqrt{2}\int_0^{\sqrt{2}/2}\frac{\text{d}v}{\left(1-v^2\right)\:\left(v^2+\frac12\right)}
-\\\\&=\frac{\pi}3+\frac{2\sqrt{2}}{3}\log\left(1+\sqrt{2}\right).
-\end{align}
-$$<|endoftext|>
-TITLE: Is there a connection between the "independent sets" in matroids and "independent sets" in graph theory? 
-QUESTION [5 upvotes]: I've been reading up on matroids recently, which are used in the theory of greedy algorithms. A matroid is a pair $(X, I)$ where $X$ is a set and $I \subseteq \wp(X)$ is a family of sets over $X$ called the independent sets in $X$.
-It occurred to me that I'd seen the term "independent set" also used in a graph-theoretic context to refer to a set of nodes in a graph where no two nodes in the set are adjacent.
-I'm not immediately seeing a connection between these two kinds of independent sets. Notably, in a matroid, all maximal independent sets are required to have the same cardinality, while in a graph theory context, it's possible for there to be many different maximal independent sets of differing cardinalities.
-Is there a connection between these two concepts of "independent sets," or is the terminology just an accident of history?
-
-REPLY [5 votes]: Expanding on @Moritz's comment, it seems like the answer comes from a related mathematical structure called an independence system, which is a pair $(X, I)$ where $X$ is a ground set, $I \subseteq \wp(X)$, and $I$ obeys the following properties:
-
-$I \ne \emptyset$, and
-$\forall S \in I. \wp(S) \subseteq I$.
-
-The sets in $I$ are called independent sets. The set of all independent sets in a graph $G$ forms an independence system, since there's always at least one independent set (namely, the empty set) and any subset of an independent set is also an independent set.
-A matroid is an independence system that also satisfies the exchange property, which is something that independent sets in the graph-theoretic sense do not obey. So in that sense, the connection between independent sets in graph theory and independent sets in matroids comes from the independence system underlying a matroid, not the exchange property.<|endoftext|>
-TITLE: Cantor set minus endpoints homeomorphic to irrationals?
-QUESTION [17 upvotes]: $C=$ Cantor set
-$C_1=$ set of points in $C$ that are adjacent to removed intervals
-$C_2=C\setminus C_1$ (all of the "non-endpoints")
-
-QUESTION: Is $C_2$ homeomorphic to $\overline {\mathbb Q}$, the set of irrationals?
-I see no obvious reason why they would not be homeomorphic. Both are zero dimensional, nowhere locally compact, cardinality $2^\omega$, etc.
-
-REPLY [5 votes]: Here is a more direct proof. Note that $C_2$ consists of those numbers between $0$ and $1$ whose base $3$ expansion is nonterminating and consists of $0$s and $2$s. We can take any such expansion, replace the $2$s with $1$s, and consider it as a binary expansion. We then get a number between $0$ and $1$ which has a nonterminating binary expansion, i.e. a number which is not a dyadic rational. Writing $D$ for the dyadic rationals in $(0,1)$, we now have a bijection $C_2\to (0,1)\setminus D$, and it is not too hard to check directly that this bijection is a homeomorphism. (It is actually the restriction to $C_2$ of the Cantor function.) 
-Now $D$ is a countable dense linear order without endpoints, so by a standard back-and-forth argument it is order-isomorphic to $\mathbb{Q}$. This isomorphism extends to an isomorphism between the Dedekind-completions of $D$ and $\mathbb{Q}$, which are just $(0,1)$ and $\mathbb{R}$. So we have an order-isomorphism (and hence homeomorphism) $(0,1)\to \mathbb{R}$ which sends $D$ to $\mathbb{Q}$. It thus also sends $(0,1)\setminus D$ to $\mathbb{R}\setminus\mathbb{Q}$. 
We thus get a homeomorphism $(0,1)\setminus D\to \mathbb{R}\setminus\mathbb{Q}$, which we can compose with our earlier homeomorphism to get a homeomorphism $C_2\to\mathbb{R}\setminus\mathbb{Q}$.<|endoftext|>
-TITLE: Analytic function on unit disk has finitely many zeros
-QUESTION [5 upvotes]: I am studying complex analysis from Theodore Gamelin's text, and Exercise 1 of chapter IX.2 says that if $f$ is analytic inside the open unit disk and continuous up to its boundary, and satisfies $|f(z)| = 1$ for $|z| = 1$, then $f$ is a finite Blaschke product. Clearly, this would imply that $f$ has only finitely many zeros in the open unit disk.
-But the proof of it already assumes this fact.
-So my question is: is it trivial that such an $f$ has finitely many zeros in the open unit disk?
-
-REPLY [4 votes]: Let $\mathbb{D}$ denote the open unit disc. In general, an analytic function $f:\mathbb{D}\to\mathbb{C}$ is allowed to have infinitely many zeros in $\mathbb{D}$. As Friedrich has pointed out,
-$$
-\sin\left(\frac{1}{1+z}\right)
-$$
-is an example of a function that is analytic on $\mathbb{D}$ and has infinitely many zeros inside $\mathbb{D}$.
-However, if we assume that $f$ is continuous on $\overline{\mathbb{D}}$, and also that $|f(z)| = 1$ for $|z|=1$, then the story changes. Suppose $f$ has infinitely many zeros $z_n$ in $\mathbb{D}$. Then by compactness, the set $\{z_n\}$ has a limit point in $\overline{\mathbb{D}}$.
-The zeros cannot have a limit point on the boundary of the unit disc, since if $z_{n_k}\to z_\infty\in\partial\mathbb{D}$ then $f(z_{n_k})\to f(z_\infty)$ by continuity, but $|f(z_{n_k})| = 0$ and $|f(z_\infty)| = 1$, a contradiction.
-So the limit point in $\overline{\mathbb{D}}$ must lie inside $\mathbb{D}$. But then $f$ has a sequence of zeros converging inside its domain of definition, and since $f$ is analytic it follows that $f \equiv 0$. This is a contradiction if $f$ is assumed nontrivial.
-Therefore it follows that if $f$ is nontrivial, then $f$ can only have finitely many zeros inside $\mathbb{D}$. At this point one can express $f$ as a product of finitely many Blaschke factors using a consequence of the Schwarz lemma.<|endoftext|>
-TITLE: Perimeter and area of a regular n-gon.
-QUESTION [5 upvotes]: A friend of mine asked me how to derive the area and perimeter of a regular n-gon with a radius r for a design project he is working on. I came up with this, but I want to make sure I didn't make any errors before giving it to him.
-First, I assumed that the n-gon was inscribed in a circle of radius r centered at the origin, with the first vertex of the n-gon being at the point $(r,0)$.
-The vertices of the n-gon will divide the circle into n equal sections. Because the total angle of a circle is $2\pi$, the angle between the x-axis and the second vertex is $\frac{2\pi}{n}$. Using trigonometry, the coordinates of this vertex are $\left(r\cos\left(\frac{2\pi}{n}\right), r\sin\left(\frac{2\pi}{n}\right)\right)$.
-Now, the origin, the first vertex, and the second vertex form a triangle. The edge of this triangle whose endpoints both lie on the circle will, by the distance formula, have a length of $r\sqrt{\left(\cos\left(\frac{2\pi}{n}\right)-1\right)^2 + \left(\sin\left(\frac{2\pi}{n}\right)\right)^2}$.
-Now, the n-gon will be made up of n of these triangles, and so the perimeter is: $nr\sqrt{\left(\cos\left(\frac{2\pi}{n}\right)-1\right)^2 + \left(\sin\left(\frac{2\pi}{n}\right)\right)^2}$.
-Now, the triangle has a base of r and a height of $r\cdot \sin(\frac{2\pi}{n})$. 
The area of a triangle is half the product of its base and height, so the area of the triangle is $\frac{r^2\sin\left(\frac{2\pi}{n}\right)}{2}$.
-Again, the n-gon is made up of n of these triangles, so its area is: $\frac{nr^2\sin\left(\frac{2\pi}{n}\right)}{2}$
-
-REPLY [6 votes]: Consider a regular polygon with side length $s$ inscribed in a circle with radius $r$. Let $\theta$ be the measure of a central angle subtended by a side of the regular polygon as shown in the figure below.
-
-As you observed, since a full revolution is $2\pi$ radians, each central angle that subtends a side of an inscribed regular polygon with $n$ sides has measure
-$$\theta = \frac{2\pi}{n}$$
-Each triangle that is formed by connecting the center of the circle to adjacent vertices of the inscribed regular polygon is isosceles, since the segments connecting the center to the vertices are radii of the circle.
-Let's look more carefully at a triangle formed by connecting the center of the circle to adjacent vertices of the regular polygon. If we draw an altitude from the vertex angle to the base of an isosceles triangle, it bisects both the vertex angle and the base, as shown in the figure below.
-
-The perimeter of a regular polygon with $n$ sides of side length $s$ is $P = ns$. Since
-$$\frac{s}{2} = r\sin\left(\frac{\theta}{2}\right)$$
-and
-$$\frac{\theta}{2} = \frac{1}{2} \cdot \frac{2\pi}{n} = \frac{\pi}{n}$$
-we have
-$$\frac{s}{2} = r\sin\left(\frac{\pi}{n}\right) \implies s = 2r\sin\left(\frac{\pi}{n}\right)$$
-Hence, the perimeter of the regular polygon is
-$$P = ns = n\left[2r\sin\left(\frac{\pi}{n}\right)\right] = 2nr\sin\left(\frac{\pi}{n}\right)$$
-Note that the length of the altitude of the triangle is
-$$a = r\cos\left(\frac{\theta}{2}\right) = r\cos\left(\frac{\pi}{n}\right)$$
-Hence, the area enclosed by the triangle is
-\begin{align*}
-A_{\triangle} & = \frac{1}{2}sa\\
-& = \frac{1}{2}\left[2r\sin\left(\frac{\pi}{n}\right)\right]\left[r\cos\left(\frac{\pi}{n}\right)\right]\\
-& = \frac{1}{2}r^2\left[2\sin\left(\frac{\pi}{n}\right)\cos\left(\frac{\pi}{n}\right)\right]\\
-& = \frac{1}{2}r^2\sin\left(\frac{2\pi}{n}\right)
-\end{align*}
-Since the area enclosed by the regular polygon is comprised of $n$ such triangular regions, the area enclosed by the regular polygon is
-$$A = \frac{1}{2}nr^2\sin\left(\frac{2\pi}{n}\right)$$
-which agrees with the answer you obtained by taking one of the legs as the base of the triangle.<|endoftext|>
-TITLE: Classification of $O(2)$-bundles in terms of characteristic classes.
-QUESTION [6 upvotes]: It is well-known that $SO(2)$-principal bundles over a manifold $M$ are topologically characterized by their first Chern class. I was wondering what the characterization of $O(2)$-bundles in terms of characteristic classes is. I guess the first and second Stiefel-Whitney classes are necessary for the topological characterization of $O(2)$-bundles, but they can't be enough, because if $w_{1} = 0$ then one should recover the classification of $SO(2)$-bundles, which is given by the first Chern class and not by the second Stiefel-Whitney class.
-Thanks.
-
-REPLY [3 votes]: We have a fiber sequence $BSO(2)\to BO(2)\to B\mathbb{Z}/(2)$, and so we have that an $O(2)$-bundle, or a map $X\to BO(2)$, factors through $BSO(2)$ if and only if the composite $X\to B\mathbb{Z}/(2)$ is null-homotopic. $\operatorname{Hom}_{\mathrm{h}Top}(X, B\mathbb{Z}/(2))=H^1(X; \mathbb{Z}/(2))$, so we have a class $x\in H^1(X; \mathbb{Z}/(2))$ associated to the bundle. 
This class is clearly the pullback of the universal class $x\in H^1(BO(2); \mathbb{Z}/(2))=(\mathbb{Z}/(2)[w_1, w_2])_{deg=1}=\mathbb{Z}/(2)w_1$. We note that since $O(2)\neq SO(2)\times \mathbb{Z}/(2)$, this class cannot vanish identically, so that $x=w_1$. Now we are left with an $SO(2)$-bundle, which you already know about!<|endoftext|>
-TITLE: Three angles are linearly independent over $\mathbb{Q}$?
-QUESTION [7 upvotes]: If $$\tan \alpha = 1, \text{ }\tan \beta = {3\over 2}, \text{ }\tan \gamma = 2,$$ then does it follow that $\alpha$, $\beta$, $\gamma$ are linearly independent over $\mathbb{Q}$?
-It is possible to test combinations $m\alpha+n\beta+\ell\gamma$ with some small integer coefficients $m,n,\ell$. The tool for doing that is the sum formula for tangents of two angles with known tangents:
-$$
-\tan(x\pm y)=\frac{\tan x\pm\tan y}{1\mp\tan x\tan y}.
-$$
-For example, judging from a picture $\beta+2\gamma$ is relatively close to $\pi=4\alpha$, but the calculations:
-$$
-\tan 2\gamma=\frac{2+2}{1-2\cdot2}=-\frac43,
-$$
-$$
-\tan(\beta+2\gamma)=\frac{3/2-4/3}{1+(3/2)(4/3)}=\frac1{18}
-$$
-show that it is not a match.
-
-REPLY [15 votes]: The angles are linearly independent.
-Since $\alpha$ is a rational multiple of $\pi$, the question is whether, letting $\beta = \arctan 3/2$ and $\gamma = \arctan 2$, we have
-$$m\beta + n\gamma \equiv 0 \pmod{\pi}$$
-for some integers $m$ and $n$, not both zero.
-If this were the case, we would have
-$$1 = (e^{2i\beta})^m (e^{2i\gamma})^n = \left( \frac{2+3i}{2-3i}\right)^m \left( \frac{1+2i}{1-2i}\right)^n.$$
-But since $2+3i$, $2-3i$, $1+2i$, $1-2i$ are all non-associated irreducible elements in $\mathbf{Z}[i]$, which is a unique factorization domain, this is absurd.<|endoftext|>
-TITLE: Show that $SL(n, \mathbb{R})$ is an $(n^2 -1)$-dimensional smooth submanifold of $M(n,\mathbb{R})$
-QUESTION [8 upvotes]: I need to show for $n=3$ that $SL(n,\mathbb{R})=\{A \in M(n, \mathbb{R}) : \det A=1 \}$ is an $(n^2 -1)$ dimensional smooth submanifold of the vector space $M(n,\mathbb{R})$ of all real $n \times n$ matrices. I would assume that I need to use the regular value theorem and use the determinant map to get the result, but I'm a bit unsure on how to set this up correctly. Any help would be appreciated.
-
-REPLY [6 votes]: Ben West's solution is extremely clean and simple, and it should be accepted as the answer.
-This is the solution that I was thinking about.
-$GL_n = \det^{-1}(\mathbb{R}\setminus\{0\})$
-Since $\det$ is continuous, $GL_n$ is an open subset of $M_n$, meaning it has dimension $n^2$.
-Restrict $\det$ to $GL_n$. Then, for any $A\in GL_n$ and any $B\in M_n \cong T_A GL_n$, we have:
-\begin{align}
-d(\det)_A(B) &= \lim_{t\to0} \frac{\det(A+tB) - \det(A)}{t} \\
-& =\det(A) \lim_{t\to0} \frac{\det(I+tA^{-1}B)-1}{t}\\
-&= \det(A)\mathrm{tr}(A^{-1}B)
-\end{align}
-Since $\det$ maps into a 1-dimensional space, we just need one $B$ that makes $\mathrm{tr}(A^{-1}B)\neq 0$. So, take $B=A$; then $\mathrm{tr}(A^{-1}A)=\mathrm{tr}(I)=n\neq 0$.
-This shows that every non-zero number is a regular value of $\det$.<|endoftext|>
-TITLE: Proving that $\int_0^1 \frac{\log^2(x)\tanh^{-1}(x)}{1+x^2}dx=\beta(4)-\frac{\pi^2}{12}G$
-QUESTION [18 upvotes]: I am trying to prove that
-$$I=\int_0^1 \frac{\log^2(x)\tanh^{-1}(x)}{1+x^2}dx=\beta(4)-\frac{\pi^2}{12}G$$
-where $\beta(s)$ is the Dirichlet beta function and $G$ is Catalan's constant. I managed to derive the following series involving polygamma functions but it doesn't seem to be of much help. 
-$$
-\begin{align*}
-I &=\frac{1}{64}\sum_{n=0}^\infty \frac{\psi_2 \left(\frac{n}{2}+1 \right) -\psi_2\left(\frac{n+1}{2} \right)}{2n+1} \\
-&= \frac{1}{8}\sum_{n=1}^\infty \frac{\psi_2(n)}{2n-1}-\frac{1}{32}\sum_{n=1}^\infty\frac{\psi_2\left(\frac{n}{2}\right)}{2n-1}
-\end{align*}
-$$
-Numerical calculations show that $I \approx 0.235593$.
-
-REPLY [2 votes]: The generalization of the main integral follows easily by employing the same ideas used in this post and the previous one.
-
-Let $n$ be a natural number. Then, we have
 $$\int_0^1 \frac{\log^{2n}(x)\operatorname{arctanh}(x)}{1+x^2}\textrm{d}x$$
-$$=\lim_{s\to0}\frac{d^{2n}}{ds^{2n}}\left(\frac{\pi}{16}\cot \left(\frac{\pi s}{2}\right) \left(\psi \left(\frac{3}{4}-\frac{s}{4}\right)-\psi\left(\frac{1}{4}-\frac{s}{4}\right)\right)-\frac{\pi ^2 }{16} \csc \left(\frac{\pi s}{2}\right)\right),$$
 where $\psi$ represents the Digamma function.
-
-Another similar generalization
-
-Let $n$ be a natural number. Then, we get
 $$\int_0^1 \frac{\log^{2n}(x)\arctan(x)}{1-x^2}\textrm{d}x$$
$$=\frac{\pi}{4} \left(1-2^{-2 n-1}\right) \zeta (2 n+1)(2 n)!$$
$$-\lim_{s\to0}\frac{d^{2n}}{ds^{2n}}\left(\frac{\pi}{16} \csc \left(\frac{\pi s}{2}\right) \left(\pi \cos \left(\frac{\pi s}{2}\right)+\psi\left(\frac{s+1}{4}\right)-\psi\left(\frac{s+3}{4}\right)\right)\right),$$
 where $\zeta$ represents the Riemann zeta function and $\psi$ denotes the Digamma function.
-
-
-A solution in large steps by Cornel I. Valean to the main integral
-$$\int_0^1 \frac{\log^2(x)\operatorname{arctanh}(x)}{1+x^2}dx$$
-We follow the strategy used for the auxiliary result from the previous post, and then we immediately arrive at
-
-$$\int_0^1 \frac{\log^2(x)\operatorname{arctanh}(x)}{1+x^2}dx=\frac{1}{2}\Re\biggl\{ \int_0^{\infty } \frac{\log ^2(x) \operatorname{arctanh}(x)}{1+x^2} \textrm{d}x\biggr \}$$
-$$=\frac{1}{2} \int_0^{\infty }\left(PV\int_0^1 \frac{x \log ^2(x)}{(1-y^2 x^2)(1+x^2)} \textrm{d}y\right)\textrm{d}x$$
-$$=\frac{1}{2}\int_0^1\left(PV\int_0^{\infty} \frac{x \log ^2(x)}{(1-y^2 x^2)(1+x^2)} \textrm{d}x\right)\textrm{d}y$$
-$$=\frac{\pi^2}{12}\int_0^1 \frac{\log(y)}{1+y^2}\textrm{d}y-\frac{1}{6}\int_0^1 \frac{\log^3(y)}{1+y^2}\textrm{d}y=\beta(4)-\frac{\pi^2}{12}G,$$
 as desired.
-
-End of story.
-A note: Using the Cauchy product $\displaystyle \frac{\operatorname{arctanh}(x)}{1+x^2}=\sum _{n=1}^{\infty } \sum _{k=1}^n \frac{(-1)^{n+k} x^{2 n-1}}{2 k-1}$, and the value of the main integral, we immediately obtain the beautiful series
-
-$$\sum _{n=1}^{\infty }\frac{(-1)^{n-1}}{n^3} \sum _{k=1}^n \frac{(-1)^{k-1}}{2 k-1}=4\beta(4)-\frac{\pi^2}{3}G.$$
-
-Some kind of bonus: Using the integral relation obtained with integration by parts as shown in Shobhit Bhatnagar's post and combining it with the results obtained in this post and the previous one, we obtain the value of the other integral,
-
-$$\int_0^1\frac{\log^2(x)\arctan(x)}{1-x^2}\textrm{d}x= -\beta(4)-\frac{\pi^2}{24}G+\frac{7\pi}{16}\zeta(3).$$
-
-A note: It's clear that the generalization $\displaystyle \int_0^1 \frac{\log^{2n}(x)\arctan(x)}{1-x^2}\textrm{d}x$ may be approached in the same way as $\displaystyle \int_0^1 \frac{\log^2(x)\operatorname{arctanh}(x)}{1+x^2}dx$.<|endoftext|>
-TITLE: How to prove that factors of homogeneous polynomial are homogeneous?
-QUESTION [17 upvotes]: How to prove that factors of homogeneous polynomial are homogeneous? 
-I was thinking that for a homogeneous polynomial of degree $n$,
-$f(ax_1,....,ax_n)=a^nf(x_1,....,x_n)$ where $a\in k$.
-Now if $f=f_1...f_r$ and at least one $f_i$ is not homogeneous, then we'll not get $a^n$, but one flaw in this argument is that it might be possible that one $f_i$ and another $f_j$, both non-homogeneous, give rise to a homogeneous polynomial, so how to fix this problem?
-
-REPLY [2 votes]: Say $f\in \mathbb{K}[x_1,\dots , x_n]$ is a homogeneous polynomial of degree $d>0$.
-Write its factorization $f=g_1\dots g_h$ with $g_i$ irreducible of degree $d_i>0$ for all $i=1,\dots ,h$.
-Write each $g_i$ as a sum of its homogeneous components, say $g_i=\sum_{j=0}^{d_i}g_{i,j}$ with $g_{i,j}$ homogeneous of degree $j$, so that $g_{i,d_i}\neq 0$ for all $i=1,\dots,h$.
-Of course $\sum_i d_i=d$ and $f=g_1\dots g_h=(\text{terms of degree} < d)+\prod_{i=1}^{h}g_{i,d_i}$.
-Since $\prod_{i=1}^{h}g_{i,d_i}\neq 0$ and because of the homogeneity of $f$, the terms of degree lower than $d$ must vanish and we must have $f=\prod_{i=1}^{h}g_{i,d_i}$.
-Now we write the factorization of $g_{i,d_i}=\prod_{j=1}^{k_i}r_{i,j}$ with each $r_{i,j}$ irreducible of positive degree.
-So we have $g_1\dots g_h=f=(\prod_{j=1}^{k_1}r_{1,j})\dots(\prod_{j=1}^{k_h}r_{h,j})$. Now because of uniqueness of decomposition, we must have $k_1=\dots =k_h=1$ (otherwise we would have a factorization at left shorter than that at right), so $r_{i,1}=g_{i,d_i}$ is homogeneous for all $i=1,\dots, h$.
-So we have $g_1\dots g_h=g_{1,d_1}\dots g_{h,d_h}$ and again because of uniqueness of decomposition we have $g_{i,d_i}=g_i$ for all $i=1,\dots,h$ up to renumbering and multiplication by a constant, so each $g_i$ is homogeneous.<|endoftext|>
-TITLE: A space is contractible if and only if its identity map is nullhomotopic
-QUESTION [8 upvotes]: My definition is that a space $X$ is contractible if it is homotopy equivalent to a point, i.e. there exists $f:X\rightarrow\{pt\}$ and $g:\{pt\}\rightarrow{X}$ such that $f\circ{g}\simeq{id}_{\{pt\}}$ and $g\circ{f}\simeq{id}_X$. I see all over the place (without proof) that a space is contractible if and only if its identity map is nullhomotopic, i.e. there exists a homotopy $F:X\times{I}\rightarrow{X}$ such that $F(x,0)=id_X$ and $F(x,1)$=constant.
-I have seen many other statements which would imply this - such as a space $X$ is contractible if and only if every map $f:X\rightarrow{Y}$, for arbitrary $Y$, is nullhomotopic - but all seem to use the statement above as part of the proof. I feel like it should be easy but have got nowhere.
-
-REPLY [10 votes]: Suppose you have a homotopy $F$ as you describe, and let $x_0\in X$ be the constant you mention (i.e. $F(x,1)=x_0$ for all $x\in X$).
-Take $f:X\to \{x_0\}$ the only possible map and $g: \{x_0\}\to X$ the inclusion map. Then obviously $f\circ g$ is the identity of $\{x_0\}$. Now $g\circ f$ is the constant map $X\to X$, $x\mapsto x_0$, so $F$ is precisely a homotopy between $Id_X$ and $g\circ f$; thus $g\circ f\simeq Id_X$.
-Actually you see that the equivalence is just the very definition of what it means that the identity map is homotopic to a constant map, since a constant map is just the composite of a map $X\to \{pt\}$ and a map $\{pt\}\to X$.<|endoftext|>
-TITLE: For differentiable functions $f,g$, $\nabla f(x)=g(x)x$. Then $f$ is constant on S.
-QUESTION [6 upvotes]: The problem says:
-
-$f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ is differentiable. 
Assume that there is a differentiable function $g:\mathbb{R}^{n}\rightarrow\mathbb{R}$ such that $\nabla f(x)=g(x)x$. Show that $f$ is constant on $S=\{x\in\mathbb{R}^{n}:||x||=r\}$, where $r$ is a positive constant.
-
-For $x=(x_1,\dots ,x_n)$ and $\nabla f=(\frac{\partial f}{\partial x_{1}},\dots,\frac{\partial f}{\partial x_{n}})$, the problem says $\frac{\partial f}{\partial x_{i}}=g(x)x_{i}$. It seems to me that to solve this problem, knowing the relation between the norm of the gradient and its value is crucial. How can I do this?
-Note: this question has been edited; it is about the same problem as a former question, which was just about notation.
-
-REPLY [4 votes]: Note that $f$ is constant on a sphere $S$ of radius $r$ iff for any curve $c :(-\epsilon,\epsilon)\rightarrow S$ we have $f\circ c(t)=C$ for all $t$, iff $\frac{d}{dt} (f\circ c)(t)=0$.
-If $c$ is a curve on the sphere, so that $|c(t)|=r$, then $$ c'(t)\cdot c(t)=0 $$
-Hence \begin{align} \frac{d}{dt} f\circ c(t) &=\nabla f\cdot c' \\
- &= g(c(t))c(t)\cdot c'(t) =0 \end{align}
-That is, $f$ is constant along the curve.
-(And note that the gradient of $f$ points in the radial direction. That is, the hypersurface which is a level surface of $f$ has unit normal in the radial direction, and such a hypersurface is a sphere.)<|endoftext|>
-TITLE: How to express $f(n\alpha)$ in terms of $f(\alpha)$
-QUESTION [13 upvotes]: Original question: Let $f:\mathbb{R}\to\mathbb{R}$ be a function defined by $f(x)=\dfrac{a^x-a^{-x}}{2}$, where $a>0$ and $a\ne 1$, and $\alpha$ be a real number such that $f(\alpha)=1$. Find $f(2\alpha)$.$^1$
-
-A few years ago, I was a high school student and solved it. Now I am reading the book again, because I began teaching my cousin last week. When I revisited the question, suddenly I wanted to find $f(2\alpha)$, $f(3\alpha)$, $f(4\alpha),\;\dots$
-\begin{align}
-f(2\alpha)&=\frac{a^{2\alpha}-a^{-2\alpha}}{2}\\
-&=\frac{(a^{\alpha}-a^{-\alpha})(a^{\alpha}+a^{-\alpha})}{2}\\
-&=f(\alpha)\sqrt{a^{2\alpha}+2+a^{-2\alpha}}\\
-&=f(\alpha)\sqrt{(a^{\alpha}-a^{-\alpha})^2+4}\\
-&=f(\alpha)\sqrt{4(f(\alpha))^2+4}\\
-f(3\alpha)&=f(\alpha)(4(f(\alpha))^2+3)\;(\text{calculations skipped})\\
-f(4\alpha)&=f(\alpha)(4(f(\alpha))^2+2)\sqrt{4(f(\alpha))^2+4}
-\end{align}
-
-My question: Can we express $f(n\alpha)$ in terms of $f(\alpha)$? (Here $f(\alpha)$ isn't necessarily $1$.)
-
-Attempt: It is known that $x^n - y^n = (x-y)(x^{n-1}+x^{n-2}y+x^{n-3}y^2+\cdots+y^{n-1})$ for $n\in \mathbb{N}$, so
-$$f(n\alpha)=\frac{a^{n\alpha}-a^{-n\alpha}}{2}=\frac{(a^{\alpha}-a^{-\alpha})(a^{(n-1)\alpha}+a^{(n-3)\alpha}+\cdots+a^{(3-n)\alpha}+a^{(1-n)\alpha})}{2}.$$
-However, the $(a^{(n-1)\alpha}+a^{(n-3)\alpha}+\cdots+a^{(3-n)\alpha}+a^{(1-n)\alpha})$ term is annoying me. I think it will be $2(f((n-1)\alpha)+f((n-3)\alpha)+\cdots+?)$, but I have no idea what to do next.
-A partial solution is also appreciated.
-
-$^1$ It was translated from Korean to English by me. Reference: Sunwook Hwang and 12 other authors (2010).『수학Ⅰ 익힘책』. Seoul: (주)좋은책신사고. page 62.
-
-REPLY [3 votes]: From $f(\alpha)=\frac{a^\alpha - a^{-\alpha}}{2}$, we can derive a quadratic equation for $a^{\alpha}$:
-$$
-a^{2\alpha}-2f(\alpha)a^{\alpha}-1=0.
-$$
-By the quadratic formula, we get
-$$
-a^{\alpha}=f(\alpha)\pm \sqrt{(f(\alpha))^2+1}
-$$
-and using $a^{\alpha}>0$ we eliminate one possibility. 
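-Explicitly, since $\sqrt{(f(\alpha))^2+1}>|f(\alpha)|$, the root $f(\alpha)-\sqrt{(f(\alpha))^2+1}$ is negative, so we keep
-$$
-a^{\alpha}=f(\alpha)+\sqrt{(f(\alpha))^2+1}.
-$$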
Thus
-$$
-a^{n\alpha}=(a^{\alpha})^n =\left(\sqrt{f(\alpha)^2+1} + f(\alpha)\right)^n
-$$
-and
-\begin{align}
-a^{-n\alpha}&=\frac{1}{\left(\sqrt{f(\alpha)^2+1} + f(\alpha)\right)^n}\\
-&=\frac{\left(\sqrt{f(\alpha)^2+1} - f(\alpha)\right)^n}{\left(\sqrt{f(\alpha)^2+1} + f(\alpha)\right)^n\left(\sqrt{f(\alpha)^2+1} - f(\alpha)\right)^n}\\
-&=\frac{\left(\sqrt{f(\alpha)^2+1} - f(\alpha)\right)^n}{(f(\alpha)^2+1-f(\alpha)^2)^n}\\
-&=\left(\sqrt{f(\alpha)^2+1} - f(\alpha)\right)^n.
-\end{align}
-Therefore
-$$
-f(n\alpha)=\frac{1}{2}\left(\left(\sqrt{f(\alpha)^2+1} + f(\alpha)\right)^n -\left(\sqrt{f(\alpha)^2+1} - f(\alpha)\right)^n\right)
-$$
-Checking the formula for $n=2$:
-\begin{align}
-f(2\alpha)&=\frac{1}{2}\left(\left(\sqrt{f(\alpha)^2+1} + f(\alpha)\right)^2 -\left(\sqrt{f(\alpha)^2+1} - f(\alpha)\right)^2\right)\\
-&=\frac{1}{2}(f(\alpha)^2+1+2f(\alpha)\sqrt{f(\alpha)^2+1} + f(\alpha)^2 -f(\alpha)^2-1+2f(\alpha)\sqrt{f(\alpha)^2+1}-f(\alpha)^2)\\
-&=\frac{1}{2}\cdot 4f(\alpha)\sqrt{f(\alpha)^2+1}\\
-&=2f(\alpha)\sqrt{f(\alpha)^2+1}
-\end{align}<|endoftext|>
-TITLE: Find $\lim_{a\to \infty}\frac{1}{a}\int_0^{\infty}\frac{x^2+ax+1}{1+x^4}\cdot\arctan(\frac{1}{x})dx$
-QUESTION [6 upvotes]: Find
-$$
-\lim_{a\to \infty}
- \frac{1}{a}
- \int_0^{\infty}\frac{x^2+ax+1}{1+x^4} \arctan\left(\frac{1}{x}\right)dx
-$$
-
-I tried to find
-$$
-\int_0^{\infty} \frac{x^2+ax+1}{1+x^4}\arctan\left(\frac{1}{x}\right) dx
-$$
-Let
-$$
-I(a) = \int_0^\infty
- \frac{x^2+ax+1}{1+x^4}\arctan\left(\frac{1}{x}\right) dx
-$$
-Let
-$$
-\begin{split}
-t &= \arctan(1/x) \\
-\frac{dt}{dx} &= \frac{-1}{1+x^2} \\
-dt &= \frac{-1}{1+x^2}dx
-\end{split}
-$$
-I am stuck here; there seems to be no way to make further progress.
-
-REPLY [8 votes]: One may observe that, as $a\to \infty$,
-$$
-\begin{align}
-&\frac1a
- \int_0^{\infty}\frac{x^2+ax+1}{1+x^4} \arctan\left(1/x\right)\:dx-
- \color{red}{\int_0^{\infty}\!\!\frac{x}{1+x^4} \arctan\left(1/x\right)\:dx}
-\\\\&=\frac1a\int_0^{\infty}\frac{x^2+ax+1}{1+x^4} \arctan\left(1/x\right)\:dx-
- \frac1a\int_0^{\infty}\frac{ax}{1+x^4} \arctan\left(1/x\right)\:dx
-\\\\&=\frac1a\int_0^{\infty}\frac{x^2+1}{1+x^4} \arctan\left(1/x\right)\:dx
-\: \longrightarrow \: 0,\tag1
-\end{align}
-$$ since the latter integral is convergent. 
-On the other hand, we have
-$$
-\int_0^{\infty}\frac{x}{1+x^4} \:\arctan\left(1/x\right)\:dx=\int_0^{\infty}\frac{x}{1+x^4}\: \arctan\left(x\right)\:dx\quad (x \to 1/x)
-$$ Using $\displaystyle \arctan\left(x\right)+\arctan\left(1/x\right)=\frac{\pi}2$ ($x>0$), one gets
-$$
-\begin{align}
-&\int_0^{\infty}\!\!\frac{x}{1+x^4}\: \arctan\left(x\right) \:dx+\int_0^{\infty}\!\!\frac{x}{1+x^4}\:\arctan\left(1/x\right)\:dx
-\\\\& =\frac{\pi}2\int_0^{\infty}\!\!\frac{x\:dx}{1+x^4}
-\\\\&=\frac{\pi}4\int_0^{\infty}\!\!\frac{du}{1+u^2}
-\\\\& =\frac{\pi^2}8,
-\end{align}
-$$ thus
-$$\displaystyle \color{red}{\int_0^{\infty}\!\frac{x}{1+x^4}\:\arctan\left(1/x\right)\:dx}=\frac{\pi^2}{16}$$ giving, from $(1)$,
-
-$$
-\lim_{a\to \infty}
- \frac{1}{a}
- \int_0^{\infty}\frac{x^2+ax+1}{1+x^4}\: \arctan\left(\frac{1}{x}\right)dx=\frac{\pi^2}{16}.\tag2
-$$<|endoftext|>
-TITLE: A prime ideal $\mathfrak{p}$ decomposes in $\mathbb{Q}(\zeta_{12})/\mathbb{Q}(i)$ iff it is generated by $\alpha\in1+3\Bbb{Z}[i]$
-QUESTION [5 upvotes]: Prove that for a nonzero prime ideal $\mathfrak{p}$ of $\mathbb{Z}[i]$ which does not divide $3$, $\mathfrak{p}$ decomposes completely in the quadratic extension $\mathbb{Q}(\zeta_{12})/\mathbb{Q}(i)$ if and only if $\mathfrak{p} = (\alpha)$ for some $\alpha \in \mathbb{Z}[i]$ such that $\alpha \equiv 1 \text{ mod }3\mathbb{Z}[i]$.
-
-What I have attempted so far. I've worked out an example of this. The prime divisor $(2 + 3i)$ of $13$ is generated by $-2 - 3i = 1 - (3 + 3i) \equiv 1 \text{ mod }3\mathbb{Z}[i]$. We have $$13 = \prod_{a = 1, 5, 7, 11} (2 - \zeta_{12}^a),\text{ }3 + 2i = i(2 - \zeta_{12})(2 - \zeta_{12}^5)$$ in $\mathbb{Z}[\zeta_{12}]$.
-I know that for a prime number $p \neq 2$, $3$, $p = x^2 + 9y^2$ for some $x$, $y \in \mathbb{Z}$ if and only if $p \equiv 1 \text{ mod }12$.
-I don't know what to do from here with the original statement. Could anybody help?
-
-REPLY [3 votes]: Since everything in the tower $\Bbb{Q}(\zeta_{12})/\Bbb{Q}(i)/\Bbb{Q}$ is abelian Galois, and what happens in the extension $\Bbb{Q}(i)/\Bbb{Q}$ is well known, we can as well look at the picture all the way down to the level of rational primes. So assume that $\mathfrak{p}$ is above the rational prime $p$.
-Excluding the ramified cases $p=2$ and $p=3$ for now, we can use Dedekind's theorem telling us that the splitting behavior of $p$ in $\Bbb{Q}(\zeta_{12})$ (resp. $\Bbb{Q}(i)$) is accurately reproduced in the factorization of the minimal polynomial $x^4-x^2+1$ of $\zeta_{12}$ (resp. the minimal polynomial $x^2+1$ of $i$) modulo $p$. This depends on the residue class of $p$ modulo $12$ as follows. The fact that whenever $p>3$ we have $p^2\equiv1\pmod{12}$ comes to the fore.
-
-If $p\equiv1\pmod{12}$, then there are twelfth roots of unity in the prime field $\Bbb{F}_p$. Consequently $x^4-x^2+1$ splits into linear factors over $\Bbb{F}_p$, and $p$ decomposes totally in $\Bbb{Q}(\zeta_{12})/\Bbb{Q}$. So in this case $\mathfrak{p}$ also decomposes.
-If $p\equiv5\pmod{12}$, then we need to go to the extension $\Bbb{F}_{p^2}$ to find twelfth roots of unity. Therefore $x^4-x^2+1$ is a product of two irreducible quadratic factors modulo $p$, and $p$ is a product of two prime ideals with inertia degree $f=2$. But as $p$ decomposes in $\Bbb{Q}(i)$, this means that $\mathfrak{p}$ is inert.
-If $p\equiv7\pmod{12}$ or $p\equiv11\pmod{12}$, then, again, $x^4-x^2+1$ splits into a product of two irreducible quadratic factors modulo $p$. 
However, this time $p$ is inert in $\Bbb{Q}(i)$, so $\mathfrak{p}=(p)$ decomposes in the extension $\Bbb{Q}(\zeta_{12})/\Bbb{Q}(i)$.
-
-Relating this to the residue class of a generator of $\mathfrak{p}$ is not difficult. Observe that we have the liberty to replace $\alpha$ with $i^k\alpha$ if the necessity arises. The multiplicative group of the field $\Bbb{Z}[i]/3\Bbb{Z}[i]$
-consists of the subgroup $H=\langle i\rangle$ and its coset $(1+i)H$. Therefore we only need to know whether $\alpha+3\Bbb{Z}[i]\in H$ or not. A useful way of distinguishing the cosets is the observation that $\alpha+3\Bbb{Z}[i]\in H$, iff $N(\alpha)=\alpha\overline{\alpha}\equiv1\pmod3$. This can be verified case-by-case: $N(i^k)=1$, $N((1+i)i^k)=2$.
-
-If $p\equiv1\pmod{12}$, then there are two prime ideals $\mathfrak{p}$, $\mathfrak{p'}$ above $p$, generated by some Gaussian integers $\alpha$ and $\overline{\alpha}$ respectively. In this case $p=N(\alpha)=\alpha\overline{\alpha}$, so $\alpha\overline{\alpha}=p\equiv1\pmod3$, and we can conclude that both $\alpha$ and $\overline{\alpha}$ have an associate in $1+3\Bbb{Z}[i]$.
-If $p\equiv5\pmod{12}$, then a similar argument shows that the generators $\alpha$ and $\overline{\alpha}$ of the two prime ideals above $p$ satisfy
-$N(\alpha)=N(\overline{\alpha})=p\equiv2\pmod3$, so in this case $\mathfrak{p}$ hasn't got a generator in $1+3\Bbb{Z}[i]$. This is just as well, as we earlier saw that $\mathfrak{p}$ and $\mathfrak{p'}$ are both inert.
-In the case $p\equiv7\pmod{12}$ we see that $\mathfrak{p}=(p)$. Here $p\equiv1\pmod{3\Bbb{Z}[i]}$. As we saw that $\mathfrak{p}$ splits, this is what was to be verified.
-In the case $p\equiv11\pmod{12}$ we again have $\mathfrak{p}=(p)$, and this prime ideal splits in $\Bbb{Q}(\zeta_{12})/\Bbb{Q}(i)$. This time we can use $\alpha=-p\in1+3\Bbb{Z}[i]$ as a generator.
-
-This takes care of all the cases except the prime ideal $\mathfrak{p}=(1+i)$. Leaving that to you.<|endoftext|>
-TITLE: Is this series absolutely convergent (doesn't look like an easy problem)?
-QUESTION [7 upvotes]: Is the series
-$$
-\sum_{n=1}^{\infty} \frac{\cos n}{n}
-$$
-absolutely convergent?
-(I've got a feeling that most probably it isn't, due to the fact that for given
-$\varepsilon>0$ we can find infinitely many $n$ such that
-$|\cos n|>1-\varepsilon$; the problem is - how dense is the set of these $n$?)
-
-REPLY [2 votes]: $\cos((2n-1)\pi/2) =0$ for all $n\ge1$.
-Set intervals $I_n=[(2n-1)\pi/2-\pi/12,\ (2n-1)\pi/2+\pi/12]$ for each $n$.
-Then $|\cos x|\le|\cos(5\pi/12)|$ for all $x$ in $I_n$; otherwise $|\cos x|>|\cos(5\pi/12)|$.
-Since the length of $I_n$ is smaller than $1$, at least one of any two consecutive natural numbers is not in $I_n$, and at such an integer the value of $|\cos|$ is larger than $|\cos(5\pi/12)|$.
-Consequently the sum of the absolute values is larger than the infinite summation of $|\cos(5\pi/12)|/(2n)$ over all $n\ge1$, so the given series is not absolutely convergent.
-Or you can try another method, using $\cos^2 x+\sin^2 x=1$ and proving that the infinite summation of $\cos^{2}n/n$ diverges. Other trigonometric formulas are useful as well.<|endoftext|>
-TITLE: Maximize the number of edges in a bipartite graph with no 4-cycles
-QUESTION [5 upvotes]: Consider an undirected bipartite graph which has $n$ nodes in each part, such that there are no cycles of length equal to $4$, and such that each pair of nodes has at most $1$ edge between them. What is the maximum number of edges in this graph? 
-An equivalent problem in the language of set theory:
-There are $n$ sets $A_1,A_2,...,A_n$ where $A_i$ is a subset of $\{1,2,...,n\}$ and where $|A_i \cap A_j| \leq 1$ for $i\neq j$. We want to maximize $$\sum_{i=1}^{n} |A_i|.$$
-
-REPLY [6 votes]: This problem was studied by I. Reiman in this paper.
-He obtains $E\leq \frac{1}{2}(n+n\sqrt{4n-3})$. The bound is sharp when $n=p^2+p+1$ and $p$ is a prime power (by taking the Levi graph of the projective plane of order $p$).
-I found this in the section "Extremal Problems of Paul Erdos on Circuits
-in Graphs" of this book.<|endoftext|>
-TITLE: On expectation of maximum of Gaussians
-QUESTION [5 upvotes]: Let $X_1,\ldots,X_n$ be i.i.d. $\mathcal{N}(0,1)$ random variables. I am trying to prove that
-\begin{align}
-(a)\ \ \mathbb{E} \left[ \max_{i}X_i\right] & \asymp\mathbb{E} \left[ \max_{i}|X_i|\right] \asymp \sqrt{\log n},\\
-(b) \ \ \mathbb{E} \left[ \max_{i}X_i\right] &= \sqrt{2 \log n}+o(1)
-\end{align}
-where $A \asymp B$ means there exist universal constants $m,M >0$ such that $mA \leq B \leq MA$.
-For part (a), I was able to prove the upper bound that $\mathbb{E} \left[ \max_{i}X_i\right] \leq \sqrt{2 \log n}$ using Jensen's inequality. How do I prove the lower bound and the fact that the two expectations are equivalent? I've been given the following hint: $\mathbb{P}(\max_{i}X_i \geq t)=1-\mathbb{P}(X_1 \leq t)^n$.
-
-REPLY [6 votes]: A mostly-worked-out answer to the lower bound in part a:
-$$E[\max_i X_i]=E[\max_i X_i 1_{\max_i X_i \geq 0}]+E[\max_i X_i 1_{\max_i X_i<0}].$$
-We want to throw out that negative piece. Intuitively, it is unlikely to happen at all and it has bounded expectation. More rigorously, it goes to zero in probability (the probability of it being nonzero is $2^{-n}$) and is pointwise decreasing in magnitude, so by dominated convergence
-$$E[\max_i X_i] \geq E[\max_i X_i 1_{\max_i X_i \geq 0}] + o(1) \\
-=\int_0^\infty 1-\Phi(t)^n dt + o(1),$$
-using the hint and a standard fact from Lebesgue integration of nonnegative functions. Denote the first term by $I$.
-Next
-$$I \geq \int_0^{\sqrt{\log(n)}} 1-\Phi(t)^n dt$$
-by simply throwing out regions of positive area.
-On $[0,1]$ we have the simple bound $1-\Phi(t)^n \geq 1-\Phi(1)^n$. On $[1,\sqrt{\log(n)}]$ we use that $\Phi$ is increasing together with the standard tail bound $1-\Phi(t) \geq \frac{1}{\sqrt{2 \pi}} e^{-t^2/2}\left(\frac 1t - \frac 1{t^3}\right)$. (Cf. https://mikespivey.wordpress.com/2011/10/21/normaltails/) Writing $T=\sqrt{\log(n)}$, for $\log(n)\ge 2$ we get, for all $t\in[1,T]$,
-$$1-\Phi(t) \ \ge\ 1-\Phi(T) \ \ge\ \frac{1}{\sqrt{2\pi}}\,\frac{e^{-T^2/2}}{2T} \ =\ \frac{n^{-1/2}}{2\sqrt{2\pi}\sqrt{\log(n)}}=:x_n.$$
-Hence
-$$I \geq 1-\Phi(1)^n + \int_1^{\sqrt{\log(n)}} 1-\left ( 1-x_n \right )^n dt = 1-\Phi(1)^n + \left(\sqrt{\log(n)}-1\right)\left(1-(1-x_n)^n\right).$$
-Since $n x_n = \frac{\sqrt{n}}{2\sqrt{2\pi}\sqrt{\log(n)}} \to \infty$, we have $(1-x_n)^n \le e^{-n x_n} \to 0$, so the factor $1-(1-x_n)^n$ is bounded below by $1-\varepsilon=:C$ for large enough $n$ depending on $\varepsilon$. Then we get the bound
-$$I \geq C(\sqrt{\log(n)}-1)+1-\Phi(1)^n.$$
-Returning to the original problem we have
-$$E[\max_i X_i] \geq C(\sqrt{\log(n)}-1)+1-\Phi(1)^n+o(1)$$
-which gives the lower bound for part a for sufficiently large $n$. A finite collection of $n$ can always be handled (why?) so we are done.
-To solve part b we would need to repeat the derivation sharply enough to get $\sqrt{2\log(n)}$ on the nose rather than a constant multiple of $\sqrt{\log(n)}$, and I'm not really sure how to do that.
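-As a quick numerical sanity check of the $\sqrt{2\log n}$ growth (a simulation sketch I am adding, assuming NumPy is available; it is not part of the argument):
-
-import numpy as np
-
-rng = np.random.default_rng(0)
-for n in [10, 100, 1000, 10000]:
-    # estimate E[max_i X_i] by averaging the maximum of n standard
-    # normals over 2000 independent trials
-    samples = rng.standard_normal((2000, n))
-    estimate = samples.max(axis=1).mean()
-    print(n, estimate, np.sqrt(2 * np.log(n)))
-
-The gap between the estimate and $\sqrt{2\log n}$ shrinks relative to the values themselves as $n$ grows, consistent with parts a and b.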
One idea would be to change variables to $u=\Phi(t)$, which would give
-$$\int_0^\infty 1-\Phi(t)^n dt = \int_{1/2}^1 (1-u^n)\frac{dt}{du} du,$$
-where $\frac{dt}{du}$ is the reciprocal of the normal density, written as a function of the normal CDF itself. Perhaps it is possible to get an appropriate series expansion for this quantity to get the result.<|endoftext|>
-TITLE: Where can I find Galois' original paper?
-QUESTION [7 upvotes]: As we all know, Galois was the ultimate math prodigy. At age 17 or 18 he published the work which we now know as Galois theory. I just want to see how he thought about mathematics by reading his original paper (of course, translated into English).
-Thanks!
-
-REPLY [4 votes]: The best full edition of Galois's writings you will ever find is here:
-https://uberty.org/wp-content/uploads/2015/11/Peter_M._Neumann_The_Mathematical_Writings.pdf<|endoftext|>
-TITLE: Combinatorial proof of the identity ${n+1\choose k+1}={k\choose k}+{k+1\choose k}+...+{n-1\choose k}+{n\choose k}$
-QUESTION [6 upvotes]: WTS:
-${n+1\choose k+1}={k\choose k}+{k+1\choose k}+...+{n-1\choose k}+{n\choose k}.$
-Algebraically, it is possible, although tedious, to show. Is there any combinatorial approach to show this fact?
-
-REPLY [5 votes]: We are choosing $k+1$ numbers from the numbers $1,2,3,\dots,n+1$. There are $\binom{n+1}{k+1}$ ways to do this. Let us count the number of choices in another way.
-Maybe $1$ is the smallest number chosen. Then there are $\binom{n}{k}$ ways to choose the remaining $k$.
-Maybe $2$ is the smallest number chosen. Then there are $\binom{n-1}{k}$ ways to choose the remaining $k$.
-Maybe $3$ is the smallest number chosen. Then there are $\binom{n-2}{k}$ ways to choose the remaining $k$.
-And so on. Add up. We get our sum (backwards).<|endoftext|>
-TITLE: Limit of uniformly converging volume-preserving homeomorphisms
-QUESTION [18 upvotes]: Definition A continuous map $f\colon \mathbb{R}^n \to \mathbb{R}^n$ is volume-preserving if, for every Borel set $V\subset\mathbb{R}^n$, $\mathcal{L}^n(V) = \mathcal{L}^n(f^{-1}(V))$.
-I am wondering if the following holds:
-
-Suppose $f_n\colon \mathbb{R}^n \to \mathbb{R}^n$ is a volume-preserving homeomorphism for each $n\in\mathbb{N}$.
- If $f_n$ converges uniformly to $f$, then $f$ is a volume-preserving
- homeomorphism.
-
-So far, we know that $f$ is volume-preserving for the following reason. Let $\phi \in C_c^\infty$. Because $f_n$ is volume-preserving, $\int \phi\circ f_n\,dx = \int \phi\,dx$. As $f_n \to f$ uniformly, one can show that $\int \phi\circ f_n\,dx \to \int \phi \circ f\,dx$. Now we know that $\int \phi \circ f\,dx = \int \phi\,dx$, and so $f$ is volume-preserving.
-
-REPLY [4 votes]: This statement is not correct; the limit might not be a homeomorphism. Here is a sketch of an example in the plane: Pick a sequence of concentric circles $C_n$ of radius $1/n$ centered at $0$, let $D_n$ be the disk bounded by $C_n$, and $A_n$ the annulus between $C_n$ and $C_{n+1}$. Now pick a corresponding sequence of nested ellipses $E_n$, with $E_1 = C_1$ the unit circle, such that the area enclosed by $E_n$ is the same as the one enclosed by $C_n$, and such that the ellipses converge to a non-trivial interval $I$, e.g., $I = [-1/2, 1/2] \times \{0\}$. Let $F_n$ be the domain bounded by the ellipse $E_n$, and let $B_n$ be the topological annulus between $E_n$ and $E_{n+1}$. Then $F_n$ has the same area as $D_n$, and $B_n$ has the same area as $A_n$. 
Pick a sequence of orientation-preserving diffeomorphisms $\phi_n : E_n \to C_n$ with $\phi_1 = \textrm{id}$. By a classical result there exist area-preserving maps $g_n: F_n \to D_n$ (ellipse to disk) and $h_n: B_n \to A_n$ (elliptical annulus to round annulus) with boundary values given by $\phi_n$ and $\phi_{n+1}$. We also define $h_0 = \textrm{id}$ outside of the unit disk $C_1 = E_1$. Now define $f_n$ to agree with $h_k$ for $0 \le k < n$ outside of $E_n$, and with $g_n$ inside of $E_n$. Then it is easy to check that $(f_n)$ converges uniformly to an area-preserving map $f$ of the plane which maps the whole interval $I$ to the point $0$.<|endoftext|> -TITLE: Two points of view on constructible sets -QUESTION [5 upvotes]: This question is aimed at understanding the relationship between two different definitions of the constructible sets in a Noetherian scheme, both of which I encountered in Atiyah-MacDonald's Introduction to Commutative Algebra (henceforth AM). It follows up a question I asked before, which was beautifully answered by user hot_queen. -The setup: -Let $X$ be a set and $\{U_\lambda\}_{\lambda\in\Lambda}\subset 2^X$ a family of subsets of $X$ that is closed under finite intersection, so it serves as a base for a topology $\mathscr{T}$. -Let $\mathscr{F}$ be the smallest family of subsets of $X$ that contains $\mathscr{T}$ and is closed under complementation and finite intersection. By exercise 20 of ch. 7 in AM, $\mathscr{F}$ is equivalently the family of finite unions of locally closed sets. AM defines this as the constructible sets in exercise 21 of the same chapter. -Meanwhile, let $\mathscr{G}$ be the coarsest topology in which every $U_\lambda$ is clopen. In exercises 27-28 of chapter 3 of AM, it is shown that if $X$ is the Spec of a ring $B$ and the $U_\lambda$'s are the standard basic opens of the Zariski topology, then this is precisely the topology in which the images in $X$ of the Specs of all $B$-algebras are taken as the closed sets. AM defines this as the constructible topology in exercise 27. AM notes that the closed sets are exactly the images of Specs, so I infer that it is the closed sets of this topology (i.e. the image in $2^X$ of the family $\mathscr{G}$ under complementation of each member; call it $\mathscr{G}^c$) that AM means to refer to as constructible. -$\mathscr{F}$ is not equal to $\mathscr{G}$ or $\mathscr{G}^c$ in the generality in which I've defined them. In fact $\mathscr{G}$ depends on the base $\{U_\lambda\}$ chosen for $\mathscr{T}$ whereas $\mathscr{F}$ only depends on $\mathscr{T}$. See hot_queen's answer to my question linked above for beautiful simple examples of the differences. (I framed that question in terms of $\mathscr{F}$ vs. $\mathscr{G}$, since I forgot the context in AM ch. 3 of the def. of $\mathscr{G}$, but the examples show the same for $\mathscr{G}^c$ because $\mathscr{F}$ is invariant under complementation.) -I assume, however, since AM is using the same word for them, that if $X$ is the Spec of a noetherian ring, $\mathscr{T}$ is the Zariski topology, and $\{U_\lambda\}$ are the standard basic sets $X_f$ of this topology (i.e. the Specs of the localizations of the underlying ring at single elements), then $\mathscr{F}$ coincides with $\mathscr{G}^c$. So: -My questions: -1) Is this true? (I.e. that $\mathscr{F}=\mathscr{G}^c$ if $X$ is the Spec of a Noetherian ring, $\mathscr{T}$ is the Zariski topology, and the $\{U_\lambda\}$'s are the standard Zariski basis opens?) 
-2) If yes, which if any of these assumptions can be loosened? Does it remain true if we use a different base for the Zariski topology (e.g. the whole topology)? If yes, does it matter what base? If it doesn't, can the statement be loosened to if we just assume $(X,\mathscr{T})$ is a noetherian space (rather than the Spec of a noetherian ring)? And what are the proofs and/or counterexamples? -Thanks in advance. - -REPLY [2 votes]: No, $\mathscr{F}$ and $\mathscr{G}^c$ are usually very different. For instance, suppose $X$ is irreducible, Noetherian, and $1$-dimensional, so that it has a generic point $g\in X$ and a nonempty set $U\subseteq X$ is Zariski-open iff it contains $g$ and is cofinite. Then $\mathscr{F}$ consists of the sets that either are cofinite and contain $g$ or are finite and do not contain $g$. But $\mathscr{G}$ contains every singleton other than $\{g\}$, and thus contains all subsets of $X\setminus\{g\}$ since it is closed under arbitrary unions. So $\mathscr{G}^c$ contains all subsets of $X$ containing $g$ (in fact, it consists of exactly the sets that either contain $g$ or are finite). -What is true is that $\mathscr{F}=\mathscr{G}\cap\mathscr{G}^c$. That is, the constructible sets are the clopen sets in the constructible topology. In fact, this is true for Spec of any ring as long as you change the definition of $\mathscr{F}$ slightly: $\mathscr{F}$ should be the smallest collection of sets containing the basic Zariski-open sets $U_\lambda$ and closed under finite Boolean operations (in the non-Noetherian case, this is different from your $\mathscr{F}$ since not every Zariski-open set can be obtained as a finite union of basic open sets). -With this definition, let us prove that $\mathscr{F}=\mathscr{G}\cap\mathscr{G}^c$ always holds. Clearly $\mathscr{F}\subseteq\mathscr{G}\cap\mathscr{G}^c$. To prove the reverse inclusion, we will first show that the $\mathscr{G}$-topology is compact. To show this, suppose you have a collection of sets, each of which is either basic open or closed with respect to the Zariski topology, and these sets have the finite intersection property; we wish to show their intersection is nonempty. Let $\{U_i\}$ be the basic open sets in your family and $\{C_j\}$ be the closed sets; we may assume each of these collections is closed under finite intersections. Note that any basic open set is compact in the Zariski topology (since it is Spec of a localization of the ring), and so $U_i\cap \bigcap C_j$ is still nonempty for each $i$. Thus writing $C=\bigcap C_j$, the family $\{U_i\}\cup\{C\}$ still has the finite intersection property. But now $C$ is a closed subset of $X$ and thus is Spec of some ring $A$, and the sets $U_i\cap C$ are the sets of prime ideals of $A$ that do not contain $f_i$, for some elements $f_i\in A$. The finite intersection property of these sets says that for any finite collection of the $f_i$, the multiplicatively closed set they generate does not contain $0$. It follows that the multiplicatively closed set that all the $f_i$ generate does not contain $0$. Thus localizing at this multiplicatively closed set, we get a nonzero ring, and any prime ideal in this ring gives a point of $C\cap\bigcap U_i$. -Now suppose $C\subseteq X$ is clopen for the $\mathscr{G}$-topology. Since $C$ is open, we can write it as a union of sets from $\mathscr{F}$. Since the $\mathscr{G}$-topology is compact and $C$ is closed, $C$ is compact, so actually only finitely many of our sets from $\mathscr{F}$ are needed to cover $C$. 
Since $\mathscr{F}$ is closed under finite unions, $C\in\mathscr{F}$.
-A similar argument can be shown to work more generally whenever you have a topological space $X$ which is sober and such that the compact open subsets of $X$ are closed under finite intersections and form a basis for the topology of $X$ (in particular, this applies to any Noetherian sober space, since any subset of a Noetherian space is compact). Letting $\mathscr{F}$ denote the collection of sets generated by the compact open subsets of $X$ under finite Boolean operations and $\mathscr{G}$ denote the topology generated by $\mathscr{F}$, we can then prove that the clopen sets for the topology $\mathscr{G}$ are exactly the elements of $\mathscr{F}$. (Sobriety of $X$ is used in place of the ring-theoretic argument I gave above that $C\cap\bigcap U_i$ is nonempty.)
-In fact, however, this result is not actually more general, because the spaces $X$ satisfying the hypotheses above (called spectral spaces) are exactly the spaces that are homeomorphic to Spec of a ring. This is a fairly difficult theorem of Hochster; see this paper for a proof and much more on the general theory of spectral spaces (the topology $\mathscr{G}$ is what Hochster calls the patch topology).<|endoftext|>
-TITLE: How is it possible that $f + g \in O(f)$?
-QUESTION [5 upvotes]: I am confused about how to do this question. Intuitively it doesn't even make sense how a function $f$ plus another function can be in $O(f)$. How can I approach this question:
-$$
-n\log(n^7)+n^{\frac{7}{2}} \in O(n^{\frac{7}{2}}).
-$$
-We know the fact that $\log n < n$ and I tried factoring out the $n$, but I am stuck. Any hints would be appreciated, thanks!
-
-REPLY [2 votes]: Actually, this shouldn't be too terribly surprising. Indeed, if $g \in O(f)$, then it is always true that $f + g \in O(f)$. To see this, note that since $g \in O(f)$, there exist $n_0, C > 0$ such that for all $n > n_0$, $g(n) \leq Cf(n)$, by definition. Therefore, for all $n > n_0$, $(f+g)(n) = f(n) + g(n) \leq (C+1)f(n)$, which shows that $f + g \in O(f)$.
-Intuitively, this makes sense. If $g$ does not grow faster than $f$ asymptotically, then $f + g$ shouldn't grow any faster than $2f$ asymptotically. But that is a constant multiple of $f$, so it has the same asymptotic order. Note that $C = 2$ isn't the correct constant to use, since $g$ may already be larger than $f$ by some constant, but the idea is the key. You may add any finite number of functions that are $O(f)$ to $f$ without making its asymptotic growth any larger.
-For your particular problem, write $n\log(n^7) = 7n\log(n)$ and then note that $7n\log(n) \leq 7n^2 \leq 7n^{7/2}$ using the fact that $\log(n) < n$. Therefore, for all $n > 0$,
-$$ n\log(n^7) + n^{7/2} \leq 7n^{7/2} + n^{7/2} = 8n^{7/2}, $$
-which shows that $n\log(n^7) + n^{7/2} \in O(n^{7/2}).$<|endoftext|>
-TITLE: Mental $n$-th root of $N$
-QUESTION [11 upvotes]: It has been a while since I started thinking about this problem: a fast method to mentally evaluate (in an approximate way) the $n$-th root of a number $N$. I'm talking about large numbers, because otherwise one could just use the first terms of a Taylor series.
-Some time ago I found a really cute approximation for the square root, which runs like this: you have $N$, and you can always write $N$ as
-$$N = q^2 + s$$
-where $q^2$ is the perfect square nearest to $N$, and $s$ is the remainder. Trivial example: $50 = 49 + 1$.
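-(In code, this decomposition is easy to automate. A minimal illustrative sketch; math.isqrt needs Python 3.8+, and $s$ comes out negative when the nearest square lies above $N$:)
-from math import isqrt
-
-def nearest_square(N):
-    # Return (q, s) with N = q*q + s, where q*q is the square nearest to N.
-    q = isqrt(N)
-    if (q + 1) ** 2 - N < N - q * q:
-        q += 1
-    return q, N - q * q
-
-print(nearest_square(43))  # (7, -6), i.e. 43 = 49 - 6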
-
-Thanks to that, one can approximate
-$$\sqrt{N} = \sqrt{q^2 \pm s} \approx q \pm \frac{s}{2q}$$
-Fast example
-$$\sqrt{43} = \sqrt{36 + 7} \approx 6 + \frac{7}{12} = 6 + 0.58333 = 6.58333(...)$$
-Where actually
-$$\sqrt{43} = 6.55743(...)$$
-So somehow one may compute it mentally in a quite easy and fast way.
-The $n$-th problem
-I then started to think about whether such a method could be generalized to the $n$-th root. With intuition (and a lot of naivety) I thought about a sort of "mathematical symmetry", something like
-$$\sqrt[n]{N} = \sqrt[n]{q^n \pm s} \approx q \pm \frac{s}{nq^{n-1}}$$
-But I don't know if that may work in general. For example:
-$$\sqrt[4]{600} = \sqrt[4]{625 - 25} = \sqrt[4]{5^4 - 25} \approx 5 - \frac{25}{4\cdot 5^{4-1}} = 5 - 0.05 = 4.95$$
-And surprisingly
-$$4.95^4 = 600.3725(...)$$
-BUT if we instead go with the plus sign...
-$$\sqrt[4]{600} = \sqrt[4]{256 + 344} = \sqrt[4]{4^4 + 344} \approx 4 + \frac{344}{4\cdot 4^3} = 4 + 1.34375 = 5.34375$$
-Where
-$$5.34375^4 = 815.4259(...)$$
-So it seems like it works if we pick the correct sign, namely the one that makes the remainder $s$ quite small.
-What I'm asking is whether there exists a pretty simple formula to approximate roots, namely something like
-$$\sqrt[n]{N} = \sqrt[n]{q^n + s} \approx q + f(s, q, n)$$
-in which $f(s, q, n)$ is some good simple function (simple = not a sum of terms, just one).
-Or is my "intuition" good enough to work? (Always making the correct decision about $\pm$, et cetera.)
-Thank you all!
-
-REPLY [2 votes]: There is a pretty simple way to approximate roots, called the Newton-Raphson method:
-Function (input value, input degree, input num_of_iterations, output root):
-    Set root = value
-    Repeat num_of_iterations:
-        Set temp = root^(degree-1)
-        Set root = root-(root*temp-value)/(degree*temp)
-
-Python implementation:
-def Function(value, degree, num_of_iterations):
-    # Newton's iteration for the positive root of x^degree = value:
-    # x <- x - (x^degree - value) / (degree * x^(degree-1))
-    root = float(value)
-    for n in range(num_of_iterations):
-        temp = root ** (degree - 1)
-        root = root - (root * temp - value) / (degree * temp)
-    return root
-
-print(Function(600, 4, 40))  # ~4.9492, compare 600**0.25
-
-With a good starting guess, $3$ or $4$ iterations are typically sufficient for a good approximation; starting from root = value as above, expect on the order of a few dozen iterations for large inputs.<|endoftext|>
-TITLE: Examples of classes $\mathcal{C}$ of structures such that every finite group is isomorphic to the automorphism group of a structure in $\mathcal{C}$
-QUESTION [14 upvotes]: Since it is not the case that every group is the automorphism group of a group (see Is every group the automorphism group of a group?), it is natural to ask: what are some examples of classes $\mathcal{C}$ of structures such that for each finite group $G$, there exists a structure $C$ of class $\mathcal{C}$ such that $\text{Aut}(C) \cong G$?
-As discussed in Peter Cameron's Automorphisms of graphs, a class $\mathcal{C}$ of structures is said to be universal if every finite group is the automorphism group of a structure in $\mathcal{C}$.
As indicated in this article, the following classes of structures are universal: -$\bullet$ The class of graphs (Frucht's theorem); -$\bullet$ The class of trivalent graphs; -$\bullet$ The class of graphs of valency $k$ for fixed $k > 2$; -$\bullet$ The class of bipartite graphs; -$\bullet$ The class of strongly regular graphs; -$\bullet$ The class of Hamiltonian graphs; -$\bullet$ The class of $k$-connected graphs for $k \in \mathbb{N}$; -$\bullet$ The class of $k$-chromatic graphs for $k > 1$; -$\bullet$ The class of finite distributive lattices; -$\bullet$ Switching classes of graphs; -$\bullet$ The class of projective planes; -$\bullet$ The class of Steiner triple systems; and -$\bullet$ The class of balanced incomplete block designs. -It is also known that: -$\bullet$ The class of matroids is universal, as shown in the article On the automorphism group of a matroid; -$\bullet$ The class of finite posets is universal, as shown in the article Automorphism groups of finite posets; and -$\bullet$ The class of complete, connected, locally connected metric spaces of any fixed positive dimension is universal, as discussed in the following link: Automorphism group of a topological space; -$\bullet$ The class of directed acyclic graphs is universal, as discussed in the following link: Can any finite group be realized as the automorphism group of a directed acyclic graph?; and -$\bullet$ The class of finite orthomodular lattices is universal, as proven in the article Every finite group is the automorphism group of some finite orthomodular lattice. -Observe that most of the universal classes given above are classes of combinatorial/discrete structures as opposed to algebraic structures defined in terms of binary operations such as monoids and rings, or geometric structures such as manifolds. It is natural to ask: -(1) What are some other interesting examples of universal classes of structures? -(2) Are there any known examples of universal classes of 'algebraic' structures, i.e. structures endowed with at least one binary operation satisfying certain axioms? Is the class of rings universal? Is the class of monoids universal? Is the class of semigroups universal? -(3) Are there any known examples of universal classes of 'geometric' structures, e.g., structures such as smooth manifolds? -(4) What are some interesting examples of classes of structures which are known to be non-universal? As shown by Polya, one such example is the class of trees. It is also known that the class of planar graphs is not universal. Also, it is known that any minor-closed class of graphs is not universal. - -REPLY [4 votes]: The classes of monoids and semigroups are universal. -Adjoining an identity to a semigroup doesn't change the automorphism group, so it's enough to prove this for semigroups, for which I'll use the fact that the class of directed (acyclic) graphs is universal. -Let $G$ be a directed graph with vertex and edge sets $V$ and $E$. Define a semigroup $S=V\cup E\cup\{0\}$ with all products equal to $0$ except that, for $v\in V$ and $e\in E$, $v^2=v$, $ve=e$ if $v$ is the initial vertex of $e$, and $ev=e$ if $v$ is the terminal vertex of $e$. -Then it is easy to see that the automorphism group of $S$ is the same as that of $G$. 
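-(A quick illustrative sketch of this construction in Python; the helper below is hypothetical, not from the answer. It builds the multiplication table of $S$ for a finite digraph, so one can check on small examples that digraph automorphisms and semigroup automorphisms coincide:)
-def semigroup_table(vertices, edges):
-    # S = V u E u {0}; edges are (initial, terminal) pairs and 0 is the zero.
-    elems = list(vertices) + list(edges) + [0]
-    def mul(x, y):
-        if x in vertices and x == y:
-            return x                      # v*v = v
-        if x in vertices and y in edges and y[0] == x:
-            return y                      # v*e = e if v is the initial vertex
-        if x in edges and y in vertices and x[1] == y:
-            return x                      # e*v = e if v is the terminal vertex
-        return 0                          # all other products are 0
-    return {(x, y): mul(x, y) for x in elems for y in elems}
-
-# Example: the one-edge digraph 1 -> 2.
-table = semigroup_table({1, 2}, {(1, 2)})
-assert table[(1, (1, 2))] == (1, 2) and table[((1, 2), 2)] == (1, 2)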
-
-In fact, since universality of directed graphs only requires finite graphs, even the classes of finite monoids and semigroups are universal.<|endoftext|>
-TITLE: Solve $x^n+y^n = (x+y)^n$
-QUESTION [13 upvotes]: Find all positive integers $n$ and real numbers $x$ and $y$ satisfying $x^n+y^n = (x+y)^n$.
-
-We first consider the case that $n$ is even. We have $x^{2k}+y^{2k} = \binom{2k}{0}x^{2k}+\binom{2k}{1}yx^{2k-1}+\cdots+\binom{2k}{2k}y^{2k}$. This can be simplified down to $$\binom{2k}{1}yx^{2k-1}+\binom{2k}{2}y^2x^{2k-2}+\cdots+\binom{2k}{2k-1}xy^{2k-1} = 0 \implies$$ $$\binom{2k}{1}x^{2k}+\binom{2k}{2}yx^{2k-1}+\cdots+\binom{2k}{2k-1}y^{2k} = 0.$$
-Similarly we can form equations if $n$ is odd. The form of the terms reminds me of derivatives, so that may be useful.
-
-REPLY [5 votes]: $y=0$ is a solution for every $n>0$, so we can assume $y\ne0$. For $n=1$ the equation is satisfied for every pair of numbers $x$ and $y$; thus we can assume $n>1$.
-If we set $x=ty$, the equation becomes
-$$
-y^n(t^n+1)=y^n(t+1)^n
-$$
-so we can as well solve $t^n+1=(t+1)^n$.
-Consider the function $f(t)=(t+1)^n-t^n-1$, which we want to find the zeros of. We have
-$$
-f(t)=\sum_{k=1}^{n-1}\binom{n}{k}t^k
-$$
-so
-$$
-\lim_{t\to\infty}f(t)=\infty
-$$
-whereas
-$$
-\lim_{t\to-\infty}f(t)=\begin{cases}
--\infty & \text{if $n$ is even}\\
-\infty & \text{if $n$ is odd}
-\end{cases}
-$$
-For the derivative
-$$
-f'(t)=n((t+1)^{n-1}-t^{n-1})
-$$
-we see it vanishes for
-$$
-\left(\frac{t+1}{t}\right)^{\!n-1}=1
-$$
-If $n$ is even, $n-1$ is odd and so the derivative does not vanish. The function has then a single zero.
-If $n$ is odd, the derivative vanishes for $t+1=-t$, that is, $t=-1/2$, which is the point of absolute minimum. Since
-$$
-f(-1/2)=(1/2)^n-(-1/2)^n-1=\frac{1}{2^{n-1}}-1<0
-$$
-the function has two zeros.
-Note that $f(0)=0$ for every $n>1$; if $n$ is odd, $f(-1)=0$.
-Summarizing the above facts, we have that the equation $(x+y)^n=x^n+y^n$, for $n>1$, has the trivial solutions "$x=0$ or $y=0$". If $n$ is odd the equation has also the solution $x=-y$.
-No other solutions exist for $n>1$.<|endoftext|>
-TITLE: Existence of vector space complement and axiom of choice
-QUESTION [6 upvotes]: Let's say we live in the category of vector spaces over $\mathbb{R}$ or $\mathbb{C}.$ Here are three sentences:
-
-Axiom of choice
-Every vector space has a base.
-For every vector space $V$ and its subspace $E\subset V$ there is a subspace $F\subset V$ such that $V=E\oplus F.$
-
-I know how to prove that (1)->(2)->(3). How about the inverse? Do (2)->(1) and (3)->(2) hold?
-If this is not the case, then is there some weaker version of AC which implies (3)?
-
-REPLY [10 votes]: No, there is no weaker choice principle implying (3). It was shown that (3) implies the axiom of choice in $\sf ZF$.
-The proof is via an equivalent of the axiom of choice called "The Axiom of Multiple Choice". You can find the details in Rubin & Rubin's "Equivalents of the Axiom of Choice II" as Theorem 6.35 (pp. 119-120 and 122).
-The proof is due to Bleicher from 1964
-
-M. N. Bleicher, Some theorems on vector spaces and the axiom of choice, Fund. Math. 54 (1964), 95--107.
-
-It is interesting to note that in a more relaxed setting, where there might be atoms (non-set objects) or the axiom of regularity fails, it is not known whether or not (3) implies the axiom of choice.<|endoftext|>
-TITLE: What is the math behind equal-spacing divider tool?
-QUESTION [5 upvotes]: I want to know what the math behind this tool is,
-
-the 10 point divider (Full size image here).
-This tool is used to measure equidistant spaces and it's proportional, so you can scale it as much as you want. See this page for its use drawing perspective grids.
-I searched the internet but didn't find anything useful except for a golden ratio ruler.
-
-I know that it measures 1 to 1.618, but how does it stay scalable?
-
-REPLY [5 votes]: I have never seen this device, but I would recognize the geometric principle anywhere. This tool relies on parallelograms to keep its fingers straight. Believe it or not, most of this device is mathematically redundant. To fully constrain the fingers, you only need the bottom three rows of joints.
-Thanks to the properties of parallelograms, we can postulate that all of the parallelograms in a row are congruent, because all of their corresponding segments are parallel and of equal length (the distance between two joints). Because any two fingers are just extensions of the opposite sides of a parallelogram, they must be parallel and equally spaced.
-As for the Golden Section Gauge, the same principle applies. In the image that you provided, $ \overline{AF}/\overline{BF} = 340/210 \approx \phi $. Parallelogram $ABEC$ keeps $ \overline{AC} \ || \ \overline{BE} $. The Proportional Segments Theorem proves that $ \overline{AF}/\overline{BF} = \overline{FG}/\overline{HG} $, thus $ \overline{FG}/\overline{HG} \approx \phi $. The device is "scalable", because this proof holds true regardless of the size of $ \angle{FAH} $.<|endoftext|>
-TITLE: About ZFC, Peano's axioms, first order logic and completeness?
-QUESTION [5 upvotes]: I read somewhere that Peano's axioms can be derived from ZFC. But if that is the case, ZFC would be incomplete, right (by Gödel's incompleteness theorem)? But since ZFC is in first-order logic, it would mean from the completeness theorem that it is complete, right? But Peano's axioms are in second-order logic, right (the axiom of induction)? So where am I wrong?
-
-REPLY [8 votes]: There seem to be two confusions going on: about Peano arithmetic, and about the completeness theorem.
-
-Peano arithmetic
-The important thing to keep in mind is that there are actually two things which could reasonably be called "Peano arithmetic"!
-
-First-order Peano arithmetic ($PA$). This is what the name usually means these days, although this is historically not what Peano introduced. Here, the induction axiom is of course verboten; instead, there is an induction scheme: for each formula $\varphi$ in the language of arithmetic, we have the axiom $$\forall y([\varphi(0, y)\wedge\forall x(\varphi(x, y)\implies \varphi(x+1, y))]\implies\forall x(\varphi(x, y))).$$ (The $y$ here is just a parameter, and can be ignored at first reading.) PA, like ZFC, is incomplete.
-Second-order Peano arithmetic ($PA_2$). This is what Peano originally introduced. It is categorical, and Gödel's theorem does not apply to it (since it isn't first-order).
-
-ZFC does indeed contain $PA$, but not $PA_2$.
-Note that a similar thing is going on in ZFC! There is a second-order version of ZFC, in which the schemes of separation and collection are replaced by second-order versions.
-EDIT: Keeping track of what's first-order and what's not can get very confusing. Personal favorite: there's a theory called "second-order arithmetic," which . . . is a first-order theory! So you always want to pay attention to what kind of theory it is you're talking about.
-
-Completeness theorem
-The completeness theorem does not say that every first-order theory is complete; rather, it says that the rules of proof for first-order logic are complete, in the sense that if $T$ is any first-order theory, and $\varphi$ is a first-order sentence true in every model of $T$, then $\varphi$ is provable from $T$. This is very different from what we mean when we say a theory is complete: a theory is complete if for every $\varphi$ in its language, it either proves or disproves $\varphi$.<|endoftext|>
-TITLE: What is the most efficient algorithm for factorisation when an approximate value of one factor is known
-QUESTION [10 upvotes]: If I am given the following number:
-1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350
-692006139
-
-And am told that one of the factors is in the range:
-38035634573286525913223768327418691775212180785884
-
-37933217936943673922808872755445625858565536638189
-
-What would be the most efficient (classical) algorithm for calculating the factors? Obviously, brute forcing would be out of the question, and the Quadratic and General Number Field Sieves wouldn't be able to use the range.
-
-REPLY [2 votes]: We know that $N$ has a factor $p$ between $0.97213\sqrt N$ and $0.97476\sqrt N$. Then for suitable $a$ (small, but with good factors), $N'=aN$ may have a factor very close to $\sqrt{N'}$, and that can be found by Fermat's method. For example if $a=20\cdot 19$ and $p\approx 0.97476\sqrt N$ then $20p\approx 1.00008\sqrt{aN}$. This $a$ was just a simple ad hoc example and already seems somewhat helpful (for a part of the possible factor range). To me it sounds promising to go look for a nice $a$ with many factors between $0.97213\sqrt a$ and $0.97476\sqrt a$.<|endoftext|>
-TITLE: Is the tensor product over $B$ of two flat $A$-modules flat over $A$?
-QUESTION [15 upvotes]: Given a morphism of commutative rings $A\to B$ such that $B$ is a flat $A$-module, and given two $B$-modules $M$, $N$ that are flat as $A$-modules, is the tensor product $M\otimes_B N$ flat over $A$?
-The tensor product $M\otimes_A N$ is flat over $A$; the proof is not hard:
-Given an exact sequence $0\to X\to Y\to Z\to 0$ of $A$-modules, since $M$ is
-a flat $A$-module the sequence $0\to X\otimes_A M\to Y\otimes_A M\to Z\otimes_A M\to 0$ is exact. Again $N$ is a flat $A$-module, and since tensor products commute we have the statement.
-I've tried some similar arguments but without any success, and I can't find a counterexample.
-There is a morphism $M\otimes_A N\to M\otimes_B N$ (it is shown here), but I don't know how to use it.
-
-REPLY [17 votes]: Not in general, not even if $M$ and $N$ are, in fact, $B$-algebras. To see this, let $A = k[t]$ be the ring of polynomials in one indeterminate $t$ over a field $k$, and $B = k[x,y]$ be the ring of polynomials in two indeterminates $x,y$ over $k$ made into an $A$-algebra by mapping $t$ to $x+y$. Let $M = B/(y)$ and $N = B/(x)$ be the quotient rings, emphatically not flat as $B$-modules but still flat as $A$-modules, because in fact $M = k[x]$ with $t$ being mapped to $x$ and $N = k[y]$ with $t$ being mapped to $y$ are both even isomorphic to $A$. Yet the tensor product $M \otimes_B N$ is $B / (x,y) = k$, which is not flat as an $A$-module.
-This is probably easier to view geometrically: $\mathop{\mathrm{Spec}} M$ and $\mathop{\mathrm{Spec}} N$ are two lines in the plane $\mathop{\mathrm{Spec}} B$ whose intersection is a point: each line maps flatly, and even isomorphically, to the line $\mathop{\mathrm{Spec}} A$, but their intersection (which is their fiber product over the plane) does not map flatly.
-Very nice question, though!<|endoftext|>
-TITLE: Prove that this summation evaluates out to $\zeta(2)-1$
-QUESTION [5 upvotes]: I am aware of the following identity:
-$$\sum_{m=1}^\infty \left(\frac{1}{m}-\left(\zeta(2)-\sum_{n=1}^m \frac{1}{n^2}\right)\right)=\zeta(2)-1$$
-I can't quite figure out how to prove this result. Maybe this has to do with some specific properties of $\zeta(x)$, but I really don't know enough yet. If possible, as well, could one generalize this result to more than just $\zeta(2)$?
-
-REPLY [5 votes]: $\sum_{m=1}^\infty \left(\frac{1}{m}-\left(\zeta(2)-\sum_{n=1}^m \frac{1}{n^2}\right)\right)=\zeta(2)-1
-$
-Playing around
-and seeing what happens.
-Since
-$\zeta(2)
-=\sum_{n=1}^{\infty} \frac{1}{n^2}
-$,
-the left side is
-$\begin{array}{ll}
-\sum_{m=1}^\infty \left(\frac{1}{m}-\left(\sum_{n=m+1}^{\infty} \frac{1}{n^2}\right)\right)
-&=\sum_{m=1}^\infty \left(\sum_{n=m+1}^{\infty}\left(\frac{1}{n-1}-\frac1{n}\right)-\left(\sum_{n=m+1}^{\infty} \frac{1}{n^2}\right)\right)\\
-&=\sum_{m=1}^\infty \left(\sum_{n=m+1}^{\infty}\left(\frac{1}{n(n-1)}\right)-\left(\sum_{n=m+1}^{\infty} \frac{1}{n^2}\right)\right)\\
-&=\sum_{m=1}^\infty \left(\sum_{n=m+1}^{\infty} \left(\frac{1}{n(n-1)}-\frac{1}{n^2}\right)\right)\\
-&=\sum_{m=1}^\infty \sum_{n=m+1}^{\infty} \frac{1}{n^2(n-1)}\\
-&=\sum_{n=2}^{\infty}\sum_{m=1}^{n-1} \frac{1}{n^2(n-1)}
-\qquad\text{(Looking good!)}\\
-&=\sum_{n=2}^{\infty}(n-1) \frac{1}{n^2(n-1)}\\
-&=\sum_{n=2}^{\infty} \frac{1}{n^2}\\
-&=\zeta(2)-1
-\qquad\text{Shazam!}
-\end{array}
-$<|endoftext|>
-TITLE: Determine all integers $x$ and $y$ such that $|2^x − 3^y| =1$
-QUESTION [5 upvotes]: I am having trouble solving this problem:
-
-Determine all integers $x$ and $y$ such that $|2^x − 3^y| =1$.
-
-I would think that the only solution is $x = y = 1$.
-How can I show that there are no other solutions?
-If there are other solutions, how can I find them all?
-
-REPLY [7 votes]: Hint: If $2^x - 3^y = 1$, then $2^x = 1 + 3^y$. Take both sides modulo $16$:
-$$2^x \equiv 1 + 3^y \pmod{16}.$$
-If $x\ge 4$, then the left-hand side is $0$. The sequence of $3^y \pmod{16}$ is $3,9,27=11,81=1,3,9,11,\ldots$, so the right-hand side is $1+3=4$, $1+9=10$, $1+11=12$, or $1+1=2$, none of which is equivalent to $0\pmod{16}$. Hence, there are no solutions when $x\ge 4$.
-Now, what happens if $2^x - 3^y = -1$?<|endoftext|>
-TITLE: Can $f_n\to f$ uniformly, $f'_n\to g$ uniformly, but $f$ not being differentiable?
-QUESTION [6 upvotes]: Just the question in the title:
-I know that if $f_n$ are differentiable, $f_n\to f$ uniformly, $f'_n\to g$ uniformly and $f$ is differentiable, then $f'=g$, so I'm looking for a counterexample if we remove that hypothesis.
-
-REPLY [5 votes]: Actually the following stronger result holds:
-
-If $f_n$ are differentiable functions on some open set $U$, $f_n\to f$ pointwise on $U$, and $f'_n\to g$ uniformly on $U$, then $f$ is differentiable on $U$ and $f'=g$ on $U$.
-
-So there is no counterexample.<|endoftext|>
-TITLE: Where Fermat's last theorem fails
-QUESTION [33 upvotes]: It's fairly well known that Fermat's last theorem fails in $\mathbb{Z}/p\mathbb{Z}$.
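-(This is easy to see computationally; a minimal brute-force sketch in Python, with the witness triple below just one of many:)
-def flt_counterexample(n, p):
-    # Search for x, y, z in (Z/pZ)* with x^n + y^n = z^n (mod p).
-    for x in range(1, p):
-        for y in range(1, p):
-            s = (pow(x, n, p) + pow(y, n, p)) % p
-            for z in range(1, p):
-                if pow(z, n, p) == s:
-                    return x, y, z
-    return None
-
-print(flt_counterexample(3, 5))  # (1, 1, 3): 1 + 1 = 2 and 27 = 2 (mod 5)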
Schur discovered this while he was trying to prove the conjecture on $\mathbb{N}$, and the proof is an application of one of his results in Ramsey theory, now known as Schur's theorem.
-I'm wondering whether there are any other places (let's say, unique factorisation domains) where the statement is known to be false?
-
-REPLY [3 votes]: You can also blow FLT out of the water in $p$-adics. Consider the ordinary Pythagorean triple
-$17^2+144^2=145^2$
-Render these arguments in $2$-adics: $17$ and $145$ are each one greater than a multiple of $8$, thus squares of other $2$-adic integers which I shall call $\pm m$ and $\pm n$ respectively (an additive inverse pair of choices for each). And of course $144$ is the square of $\pm 12$. So then we have eight $2$-adic equations (four of them "linearly independent") of the form
-$(\pm m)^{\color{blue}{4}}+(\pm 12)^{\color{blue}{4}}=(\pm n)^{\color{blue}{4}}$<|endoftext|>
-TITLE: find the closed form of $a_n=3a_{\lceil n/2 \rceil}+1$, $a_1=1$
-QUESTION [5 upvotes]: Let $a_1,a_2,...$ be the sequence recursively defined by $a_1 = 1$ and $a_n = 3a_{\lceil n/2 \rceil}+ 1$ for $n \geq 2$.
-a) Find $a_2,a_3,a_4,a_5,a_6,a_7$ and $a_8$.
-b) Guess a formula for $a_n$.
-$a_1=1,a_2=4,a_3=13,a_4=13,a_5=40,a_6=40,a_7=40,a_8=40$ is the start of the sequence I found.
-I've tried $3^{\lceil n/2 \rceil}+1$ and other forms similar to $3^n$, but I can't find a formula that works. Any ideas?
-
-REPLY [7 votes]: OK - not sure if you were able to use the hint.
-Let $b_k = a_{2^k}$. I assume you are able to show, by induction, that $a_n = b_k$ for all $n$ with $2^{k-1} < n \le 2^k$. Then note that this means $a_n = b_{\lceil \log_2 n \rceil}$ for all $n \ge 1$.
-So we only need to compute the $b_k$ and note it makes sense to start with $k = 0$. We have $b_0 = 1, b_{k+1} = 3 b_k + 1$. Then also $b_{k+2} = 3 b_{k+1} + 1$; subtract side by side to get $b_{k+2} - b_{k+1} = 3(b_{k+1} -b_k)$. This means the successive differences form a geometric progression with ratio 3. That is: $b_1 - b_0 = 3$ (directly from $b_0 = 1$ and $b_1 = 4$), then $b_2 - b_1 = 9$, etc.
-Now $b_k = b_0 + (b_1 - b_0) + \cdots + (b_k - b_{k-1}) = 1 + 3 + \cdots + 3^k = \dfrac {3^{k+1}-1} 2$.
-So finally: $a_n = \dfrac {3^{\lceil \log_2 n \rceil + 1} - 1 } 2$.<|endoftext|>
-TITLE: Given $P(t): [0,1]\to [0,1]^2$ a space filling curve, can we calculate $\iint_{[0,1]^2}f(x,y) dxdy$ as $\int_0^1 f(P(t))\,dt$ or something alike?
-QUESTION [8 upvotes]: Given $P(t): [0,1]\to [0,1]^2$ a continuous bijection, can we calculate $\iint_{[0,1]^2}f(x,y)\, dx\,dy$ as $\int_0^1 f(P(t))\,dt$ or something alike?
-I'm thinking of the $P(t)$s as Peano curves: we know such continuous bijections exist, thus, with a single parameter $t$, we can fill up the entire domain of integration $D\subseteq \Bbb R^2$, and so I'd think that we should be able to calculate the double integral in the title with a single integral (integrating with respect to $t$).
-Is this possible?
-Edit: As discussed in the comments of the only answer, there may be a few annoying technicalities here ($P$ not being a bijection); I'd rather not bother with them, but see if this idea is usable somehow.
-I'm mostly interested in the Riemann or R-S integral, but related stuff about the Lebesgue integral is also welcome.
-
-REPLY [8 votes]: Well first, there is no continuous bijection from $[0,1]$ onto $[0,1]^2$.
As has of course already been pointed out several times. Your reply that you don't want to worry about that seems very curious; if you simply corrected the question to be something more sensible it would be a good question.
-Anyway. Given a continuous surjection $P:[0,1]\to[0,1]^2$, is it true that $$\int_0^1\int_0^1 f(x,y)\,dxdy=\int_0^1 f(P(t))\,dt?$$The answer is of course no for "most" such $P$, but it's yes for some $P$, including one of the standard examples - the answer is yes for the example commonly known as the Hilbert curve.
-This says that the Hilbert curve $H$ is measure-preserving, which follows from the fact that $H^{-1}([j2^{-n},(j+1)2^{-n}]\times[k2^{-n},(k+1)2^{-n}])$ is "essentially" (that is, except for a set of measure zero) equal to $[m4^{-n},(m+1)4^{-n}]$.
-The Hilbert curve has other nice properties. For example, it's easy to see that a space-filling curve cannot be $Lip_\alpha$ for $\alpha>1/2$, and $H$ is in fact $Lip_{1/2}$. This says to me that $H$ is in some sense a very "efficient" space-filling curve; just as bad as needed to get the job done, no worse.<|endoftext|>
-TITLE: Is this a different proof of the fundamental group being abelian?
-QUESTION [13 upvotes]: I have proved the fundamental group of a topological group is abelian. But I've found nowhere a proof similar to mine. Everywhere I looked, it was done either by exploiting categorical properties or by something like taking the product of two paths.
-My proof goes as follows:
-Let $a$ and $b$ be two loops in a topological group $(G,\bullet )$ starting at the identity element $e$. We need to show $ a\ast b \simeq b\ast a$, where "$\ast$" is the fundamental group operation.
-Now for each $t,s\in [0,1]$, define
-$F_t(s)=a(st)\ast(a(t)\bullet b(s))\ast \bar a(st)$
-Now $\{F_t\}$ gives the homotopy between $b$ and $a\ast b \ast \bar a$.
-
-The main idea is at each time $t$, we first go to $a(t)$ along $a$ and then traverse the translated path $a(t)\bullet b$ and then return back along the inverse path of the first one. Continuity of $F$ follows from the pasting lemma.
-
-This proof seems correct, but why do other proofs avoid this straightforward argument?
-
-REPLY [4 votes]: Your idea is very nice, but the definition
-$$F_t(s)=a(st)\ast(a(t)\bullet b(s))\ast \bar a(st)$$
-is inadequate because $a(st)$, $a(t)\bullet b(s)$, $\bar a(st)$ are single points of $G$ which cannot be composed by $*$, which is the succession of paths. What you mean is
-$$F_t = a_t * (a(t) \bullet b) * \overline{a_t}$$
-where $a_t(s) = a(st), \overline{a_t}(s) = a_t(1-s) = a((1-s)t)$. Note that $\overline{a_t}$ is the inverse of $a_t$, not part of the inverse $\bar a$ of $a$. In fact, $a((1-s)t) \ne a(1-st) = \overline{a}(st)$. Explicitly you can also write
-$$F_t(s) = \begin{cases} a(3st) & 0 \le s \le 1/3 \\ a(t) \bullet b(3s - 1) & 1/3 \le s \le 2/3 \\ a(3(1-s)t) & 2/3 \le s \le 1 \end{cases}$$<|endoftext|>
-TITLE: Continuous bijection on open interval is homeomorphism
-QUESTION [7 upvotes]: Suppose that $a,b\in\mathbb R$ with $a<b$, and that $f\colon(a,b)\to\mathbb R$ is a continuous bijection onto its image. How can I show that $f$ is a homeomorphism onto its image?<|endoftext|>
-TITLE: What is the square root of the Laplace operator?
-QUESTION [6 upvotes]: Let $\Delta$ be the Laplace operator
-$$ \Delta f = \sum_{i=1}^d \frac{\partial^2 f}{\partial x^2_i}$$
-with $Dom(\Delta) = H^1_0(\mathcal{O}) \cap H^2(\mathcal{O})$ where $\mathcal{O}\subset\mathbb{R}^d$ is a bounded domain with a smooth boundary.
-
-I'm studying from this book [1] and they use the square root of the Laplace operator, denoted $(-\Delta)^{1/2}$, with domain $Dom((-\Delta)^{1/2})$, without specifying what this operator and its domain look like.
-Can you either explain to me what these symbols denote or recommend a different publication where I can learn more?
-[1] Ruth F. Curtain and Hans Zwart. An introduction to infinite-dimensional linear systems theory. 1995
-
-REPLY [7 votes]: It is a standard tool of PDEs and functional analysis known as "functional calculus". In the case of the Dirichlet Laplacian on a bounded domain, take a basis $\{\phi_n\ :\ n\ge 1\}$ of normalized eigenfunctions of $(-\Delta, H^2\cap H^1_0(\mathcal{O}))$ corresponding to the eigenvalues $\{\lambda_n\ :\ n\ge 1\}$. If $u\in H^2\cap H^1_0$, then one can compute its Laplacian via the expansion
-$$
-u=\sum_{n\ge 1} c_n\phi_n,
-$$
-obtaining
-$$
--\Delta u=\sum_{n\ge 1} c_n \lambda_n \phi_n.
-$$
-Therefore one can define "functions" of $-\Delta$ by the formula
-$$
-f(-\Delta) u=\sum_{n\ge 1} c_n f(\lambda_n)\phi_n,
-$$
-provided that
-$$
-\sum_{n\ge 1}|c_n|^2 |f(\lambda_n)|^2<\infty.
-$$
-This last condition defines the domain of the operator $f(-\Delta)$. This operator is self-adjoint if $f$ is real valued and its eigenvalues are, predictably,
-$$
-\{f(\lambda_n)\ :\ \lambda_n\ \text{eigenvalue of }-\Delta\}.
-$$
-Taking $f(x)=\sqrt{x}$ one obtains the square root of the Laplacian.
-A nice introductory book on those things is the first volume of Zeidler's "Applied functional analysis". For more information, the standard reference is Reed & Simon's four-volume "Methods of modern mathematical physics".<|endoftext|>
-TITLE: tricky system of trigonometric equations
-QUESTION [10 upvotes]: I am not very fresh in math, but I need to solve this system:
-\begin{gather}
-A\sin(x-y)+B\sin(z-y)=C\\
-A\cos(x-y)+B\cos(z-y)=D
-\end{gather}
-where $A,B,C,D$ and $x$ are given.
-I tried to expand and combine the bracketed terms, and I suppose that there are some tricky substitutions to sort it out, but I am lost!
-Thank you all!
-
-REPLY [5 votes]: If you're not fresh at math, complex formalism may not be of much help, but for posterity:
-Using
-$$
-e^{i\theta} = \cos \theta + i\sin\theta,
-$$
-your system can be written
-$$
-A e^{i(x - y)} + Be^{i(z - y)} = D + iC.
-$$
-Multiplying through by $e^{iy}$ and dividing by $D + iC$ gives
-$$
-\frac{(A e^{ix} + Be^{iz})(D - iC)}{D^{2} + C^{2}} = e^{iy}.
-\tag{1}
-$$
-Conjugating,
-$$
-\frac{(A e^{-ix} + Be^{-iz})(D + iC)}{D^{2} + C^{2}} = e^{-iy}.
-\tag{2}
-$$
-Multiplying (1) and (2) eliminates $y$:
-\begin{align*}
-1 = e^{iy}\, e^{-iy}
- &= \frac{(A e^{ix} + Be^{iz})(D - iC)}{D^{2} + C^{2}}\,
- \frac{(A e^{-ix} + Be^{-iz})(D + iC)}{D^{2} + C^{2}} \\
- &= \frac{(A e^{ix} + Be^{iz})(A e^{-ix} + Be^{-iz})}{D^{2} + C^{2}}.
-\end{align*}
-Expanding and rearranging,
-$$
-D^{2} + C^{2} = A^{2} + B^{2} + 2AB\cos(x - z),
-$$
-whereupon you can proceed as in John Hughes' answer.<|endoftext|>
-TITLE: Do there exist several positive real numbers such that their sum is $1$ and sum of their squares is less than $0.01$
-QUESTION [6 upvotes]: Do there exist several positive real numbers such that their sum is $1$ and the sum of their squares is less than $0.01$?
-My Attempt: Let there be $n$ real numbers, which we call $x_{1},x_{2},\ldots,x_{n}$. Since they are positive reals, WLOG we can assume $x_{1}\geq x_{2}\geq \cdots \geq x_{n}$. The condition $x_{1}+x_{2}+\cdots+x_{n}=1 \implies x_{n} \leq \frac{1}{n}$.
Also $x_{1}\geq x_{2}\geq \cdots \geq x_{n} \implies x_{1}^{2}\geq x_{2}^{2}\geq \cdots \geq x_{n}^{2}$.
-After this point I am stuck.
-
-REPLY [2 votes]: The intuition:
-When you square a number $x$, you multiply $x$ by $x$. That means that $x^2$ is always "$x$ times bigger" than $x$.
-Suppose $x$ is large, like $100$. $100^2$ is much bigger than $100$. To be precise, it's $100$ times bigger. Examples like this give us the intuition "squaring makes a number bigger... and the larger the number, the larger the effect of squaring".
-But if $x$ is smaller than $1$, "$x$ times bigger" is a confusing thing to say – multiplying a number by $x$ actually makes it smaller. So $x^2$ will be smaller than $x$ when $x$ is less than one.
-For instance, suppose $x$ is $0.01$. $0.01^2$ is much smaller than $0.01$. To be precise, it's $1/0.01=100$ times smaller. By taking $x$ closer and closer to zero, you can make $x^2$ smaller than $x$ is, by bigger and bigger factors.
-The problem at hand:
-"Do there exist several positive real numbers such that their sum is 1 and sum of their squares is less than $0.01$?"
-This problem asks us to find numbers that sum to $1$, even though their squares sum to less than $0.01$. That is, it is asking us to find numbers whose squares are really small, even though the numbers themselves aren't that small.
-The intuition above tells us how to find these numbers: just take numbers much smaller than $1$! We'll need a lot of them, to make them add up to $1$, but their squares will shrink by so much that this won't be a problem.
-To start with, take ten copies of $0.1$. They add up to $1$. Each squared is $0.01$, so the ten squares added up is $0.1$. If the problem said "sum of their squares is less than $0.1$", we'd be done. But it wants us to be below $0.01$. So let's keep going – the smaller the numbers we pick, the more the squares will be less than the numbers.
-Take one hundred copies of $0.01$. They add up to $1$. Each squared is $0.0001$, so the one hundred squares added up is $0.0001 \times 100 = 0.01$. Woah – right on the edge. Strictly speaking, the question wants the sum of the squares to be less than $0.01$, so we need to go one more step.
-Take one thousand copies of $0.001$. They add up to $1$. Each squared is $0.000001$, so the one thousand squares added up is $0.000001 \times 1000 = 0.001$. We're done!
-(Looking at the trend of the sum of squares, it's clear that if we kept going, we could get it as small as we wanted. That's what Spenser was saying when they mentioned convergence. But it's really just a way of describing a trend.)<|endoftext|>
-TITLE: Can elliptic space be infinite?
-QUESTION [6 upvotes]: The go-to example of elliptic space is a sphere where geodesics turn into great circles of finite length. But is it possible to have an elliptic space which doesn't 'merge' with itself once it's made a full turn? I.e. infinite, unbounded, simply-connected, but with constant positive curvature everywhere.
-I can't find anything like it online, but then maybe I simply don't know what word to search for?
-Edit: due to user427327's remarks I thought I'd elaborate with a 1D example, never mind that curves don't have intrinsic curvature.
-Circle 'space' vs spiral 'space'.
-The above image shows two 'spaces', both of constant positive curvature. In the left space if you travel $2\pi$ you end up back where you started. In the right space you end up somewhere else entirely.
You can keep on travelling and keep on getting further and further away from where you started, despite travelling inside a 'space' of positive curvature. Is the same not possible for 2D spaces with constant positive curvature?
-
-REPLY [6 votes]: I came across this question while trawling the internet, and on the off-chance that you're still interested in this question and haven't seen an answer, I'll write a few words - it is possible to be infinite, non-repeating and positively curved. What you must lose, however, is completeness. (This is forced by the Bonnet-Myers theorem.) There are a couple of ways of viewing this.
-Firstly, one can use longitude/latitude coordinates $\theta, \phi$ for $S^2$, so that the round metric is $ds^2 = d\phi^2 + \cos^2 \phi \, d \theta^2.$ (These are not the usual polar coordinates, but I think are more useful here.) This metric breaks down in these coordinates at $\phi = \pm \pi/2.$ However, there is no reason why we cannot now allow $\theta$ to take any real value! That is, $$ds^2 = d \phi^2 + \cos^2 \phi \, d \theta^2 $$ defines a metric on $(- \pi/2, \pi/2)_\phi \times \mathbb{R}_\theta$ (which, as promised, is not complete - all 'vertical' lines $\theta =$ const are maximal geodesics of length $\pi$). Locally, this is the same as the sphere, and so it has constant curvature 1. However, the line $\phi = 0$ is a geodesic of infinite length.
-Secondly, one can see the above construction as effectively removing the north and south poles from the sphere, and then 'unwrapping' the rest of the sphere, by saying that when we travel around the equator once, we do not in fact return to the starting point, but to a 'new' point in the next part of the orange peel.
-Thirdly, the above thing may be seen as the Riemannian universal cover of $S^2 \backslash$ poles. This is just like your example of 'unwinding' a loop to create a helix - we 'unwrap' the sphere to create this thing.<|endoftext|>
-TITLE: Algebraic Varieties vs Smooth Manifolds
-QUESTION [5 upvotes]: There are many posts I have read on that subject which seem unclear to me. My main question (it might be silly) is:
-
-"Every non-singular algebraic variety over $\mathbb{C}$ is a smooth
- manifold."
-
-(see: http://mathoverflow.net/questions/7439/algebraic-varieties-which-are-also-manifolds)
-How? For an algebraic variety we have the Zariski topology, which is not even Hausdorff. How can they be diffeomorphic then?
-
-REPLY [6 votes]: You are absolutely right that a complex variety with its Zariski topology is not a complex manifold, nor even a Hausdorff topological space (unless it has dimension zero).
-However there is a completely canonical way of associating to a complex algebraic variety $X$ a complex analytic variety $X^{an}$.
-More precisely, that association Algvar $\to$ Anvar is a functor.
-This functor has been studied in detail by Serre in a ground-breaking article published in 1956 and universally known by its amusing acronym GAGA.
-A typical result in the article (Proposition 6, page 12) is that $X$ is complete iff $X^{an}$ is compact: a highly non-trivial result relying on a theorem of Chow.
-In this set-up the result you are asking about can be stated as follows:
-An algebraic complex variety $X$ is regular (=smooth) if and only if the associated analytic variety $X^{an}$ is a complex manifold.
-Edit
-Here is an English translation of GAGA.<|endoftext|>
-TITLE: Assuming $ab^2 = b^3a$ and $a^2=1$ prove that the order of $b$ is $5$.
-QUESTION [6 upvotes]: Let $G$ be a group and $a,b \in G$ with $a \ne 1$ and $b \ne 1$. Assuming $ab^2 = b^3a$ and $a^2=1$, I need to prove that the order of $b$ is $5$.
-I have proved by contradiction that it can't be $2$ or $3$, but I don't know how to prove that it must be $5$ and can't be $4$.
-
-REPLY [9 votes]: Note that
-$$b^5 = b^3a^2b^2=(b^3a)(ab^2)=(b^3a)(b^3a)=b^3(ab^2)ba=b^3(b^3a)ba=b^6aba,$$
-multiplying by $b^{-5}$ from the left we get $1=baba$. Multiplying by $a$ from the right we get $a=baba^2=bab$. Thus
-$$1=a^2=(bab)^2=b(ab^2)ab=b(b^3a)ab=b^4a^2b=b^4\cdot 1 \cdot b = b^5.$$
-
-REPLY [5 votes]: Consider that $$ab^2a=b^3$$ And so:
-$$b^2=ab^3a=ab^2ba=ab^2aaba=b^3aba$$ From here: $$b^{-1}=aba$$ Thus $$b^{-2}=ab^2a=b^3$$ which means: $$b^5=1$$<|endoftext|>
-TITLE: Zero vector of a vector space
-QUESTION [21 upvotes]: I know that every vector space needs to contain a zero vector. But all the vector spaces I've seen have the zero vector actually being zero (e.g. $\mathbf{0}=\langle0,0,\ldots,0\rangle$). Can't the "zero vector" not involve zero, as long as it acts as the additive identity? If that's the case then are there any graphical representations of a vector space that does not contain the origin?
-
-REPLY [2 votes]: One more comment: in the specific vector space $\mathbb R^n$, the zero vector is the $n$-tuple $(0, 0, \ldots, 0)$. Any $n$-dimensional vector space is "isomorphic" to $\mathbb R^n$, with, given a basis $e_1, e_2, \ldots , e_n$, the isomorphism that maps $e_i$ to $(0, 0, \ldots, 1, \ldots, 0)$ where the "$1$" is in the $i$th place, so we can always express vectors in the $(a, b, \ldots, z)$ notation.
-However there exist infinite dimensional vector spaces for which we cannot use that notation.<|endoftext|>
-TITLE: Computing degrees of projective varieties via Chern classes
-QUESTION [7 upvotes]: I know that the degree of a projective hypersurface $H \subset \mathbb{P}^n$ can be computed in terms of the Chern class of the normal (line) bundle of $H$. Is there a similar formula for the degree of a higher codimension projective variety in terms of Chern classes of the normal bundle?
-In general, does degree just depend on the normal bundle of the projective variety in projective space? I feel like the answer is no, which would make it impossible to compute the degree in terms of the Chern class of the normal bundle.
-
-REPLY [2 votes]: Here is a general answer in terms of Chow rings.
-Let $i:Y\hookrightarrow \mathbb P^n$ be a smooth closed subvariety of codimension $r$ and degree $d$, so that for the corresponding cycle class we have $[Y]=dH^r\in A^r(\mathbb P^n)=\mathbb Z\cdot H^r$.
-We have (Hartshorne, page 431): $$c_r(N_{Y/X})=i^\ast [Y]=i^\ast (dH^r)=dh^r\in A^r(Y)$$ where $h=i^\ast H\in A^1(Y)$ and where $c_r(N_{Y/X})\in A^r(Y)$ is the Chow Chern class [which, in case the base field is $\mathbb C$, is infinitely more precise than its image in singular cohomology $c_r^\mathbb C(N_{Y/X})\in H^{2r}(Y(\mathbb C),\mathbb Z)$].
-If $c_r(N_{Y/X})$ is known, we may often extract the degree $d$ of $Y$ from the above formula $c_r(N_{Y/X})=dh^r$.
- However if $h^r=0$, as is the case for curves in $\mathbb P^3$ for example, the equality $c_r(N_{Y/X})=dh^r$ reduces to $0=d\cdot0$, which (in conformity with Pooh Bear's great counterexample) doesn't allow us to compute $d$.<|endoftext|>
-TITLE: Integer multiplication vs. "multiple" notation in abstract algebra
-QUESTION [5 upvotes]: In my abstract algebra text, the author uses "multiple" notation.
Say you have a field $F$ that contains $a,b$. Consider some equation like $a^2 + 2ab + b^2 = 0$. The $2ab$ is meant to be shorthand for $ab + ab$ rather than the literal integer $2$ multiplied by $ab$.
-In doing higher level computations in field theory, I encounter this notation and I'm always wondering whether or when I'm allowed to, say, divide both sides of $a^2+b^2 = -2ab$ by $2$. Can someone clarify the situations in which this multiple notation and integer multiplication coincide?
-
-REPLY [5 votes]: Integers are elements in a field. For example, $1$ is the multiplicative identity, $2=1+1$, $3=1+1+1$, $4=1+1+1+1$, $5=1+1+1+1+1$, and so on. Also, $-1$ is the additive inverse of $1$, $-2$ is the additive inverse of $2$, $-3$ is the additive inverse of $3$, and so on. Therefore, you can treat them like regular elements and add, subtract, and multiply equations by integers.
-However, with division, you have to be careful because you might accidentally divide by $0$. For example, in $\Bbb{Z}_2=\{0, 1\}$, $2=1+1=0$, so you can't divide by $2$ because $2=0$. This can get kind of odd, but over time, you will become wary of division, so whenever you divide by an integer, check the characteristic of the field and make sure you are not dividing by $0$.
-Notice that $2ab=ab+ab$ because of the distributive property:
-$$2ab=(1+1)ab=1(ab)+1(ab)=ab+ab$$
-Similar logic applies for multiplying by other integers, which is why your book's "multiple" notation is consistent with this definition of an integer.<|endoftext|>
-TITLE: Do the last digits of powers of a number $n$ follow the same cycle as the last digits of the number $n$'s last digit's powers?
-QUESTION [5 upvotes]: In many places it is said that the last digits of the powers of the numbers from 1 to 9 have certain cycles. For example the last digits of powers of 2 repeat in a cycle of $4, 8, 6, 2$, and the last digits of powers of 9 repeat in a cycle of $1, 9$.
-It seems like this works for bigger numbers as well. The last digits of any number's powers seem to follow the cycle of the number's last digit's cycle. For example, the cycle of the last digits of powers of 7 is $9, 3, 1, 7$, and the cycle of the last digits of powers of 1097 is $9, 3, 1, 7$. I've been experimenting with my calculator and I haven't found a single counterexample, so my guess is that it's true for all numbers. That is, the last digits of powers of a number $n$ follow the same cycle as the last digits of the number $n$'s last digit's powers. Could someone show me a proof of this?
-
-REPLY [4 votes]: Your observation is correct. It's because, when you just look at last digits, you're working modulo 10, and addition and multiplication are well-defined modulo 10. Since, for example, $7\equiv 107\pmod{10}$, then for any natural $n$, we also have $7^n\equiv 107^n \pmod{10}$.
-If you want technical details, write a number ending in the digit $b$ as $10k+b$. Then, if you raise that number to the power $n$, you can apply the binomial theorem:
-$(10k+b)^n = (10k)^n + \binom{n}{1}\cdot(10k)^{n-1}b + \cdots + \binom{n}{n-1}(10k)b^{n-1} + b^n$
-Since everything except for the last term is a multiple of 10, the last digit of this sum is simply the last digit of $b^n$.
-Does that make sense?<|endoftext|>
-TITLE: Show that any two distinct lines in $\Bbb P^2$ intersect in one point.
-QUESTION [5 upvotes]: Show that any two distinct lines in $\Bbb P^2$ intersect in one point.
-Proof (my attempt).
-Let $L_1, L_2$ be any two distinct lines in $\Bbb P^2$.
-
-Write $L_i = V(a_iX + b_iY + c_iZ)$, $i = 1,2$.
-It suffices to show that $L_1 \cap L_2 = V(a_1X + b_1Y + c_1Z,\ a_2X + b_2Y + c_2Z) = V$ is a point.
-Now I know that there is a projective change of coordinates $T$ such that $V^T = V(Z)$ or $V^T = V(Y,Z)$.
-If $V^T = V(Y,Z)$, it is a unique point.
-Otherwise $V^T = V(Z)$; then can I say $L_1 = L_2$? Why?
-This is my problem. If this is true then I am done.
-
-REPLY [7 votes]: Finding a point $[X_0:Y_0:Z_0]\in L_1\cap L_2$ amounts to finding (up to multiplication by a non-zero factor) a non-zero triplet $(X_0,Y_0,Z_0)$ solving the system $$\begin{cases}a_1X+b_1Y+c_1Z=0\\a_2X+b_2Y+c_2Z=0\end{cases}$$ or, which is the same, a non-zero vector $\begin{pmatrix}X_0\\Y_0\\Z_0\end{pmatrix}\in\ker\begin{pmatrix}a_1&b_1&c_1\\a_2&b_2&c_2\end{pmatrix}$. But by the rank-nullity theorem, such a vector exists, because for a $(2\times3)$ matrix $A$ it holds that $\dim\ker A=3-\operatorname{rk}A\ge3-2=1$.
-In order for each equation to identify a line in $\mathbb P^2$, both rows must be non-zero. In order for the two lines to be distinct, the two rows must be linearly independent. So actually $\dim \ker A=1$.
-This means that, if $(X_0,Y_0,Z_0)$ and $(X_1,Y_1,Z_1)$ are both non-zero solutions of the system, then $(X_0,Y_0,Z_0)=(\lambda X_1,\lambda Y_1,\lambda Z_1)$, i.e. that $[X_0:Y_0:Z_0]=[X_1:Y_1:Z_1]$. This proves uniqueness.<|endoftext|>
-TITLE: Graph of a continuous function has measure zero
-QUESTION [7 upvotes]: I need help to solve the following problem:
-Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be a continuous function. Prove that the graph $G(f)=\{(x,f(x)):x\in\mathbb{R}^n\}$ has measure zero in $\mathbb{R}^{n+1}$.
-I suppose that I have to use that $f$ is uniformly continuous, but I don't know which rectangles, with sum of volumes less than $\varepsilon > 0$, I should take.
-
-REPLY [4 votes]: Here's another argument. Assuming the graph is measurable, use Fubini-Tonelli to show that its measure is equal to an iterated integral:
-$$ m(G) = \int_{{\mathbb R}^n} \int_{{\mathbb R}} {\bf 1}_{\{f(x)\}}(y) dy dx = \int_{{\mathbb R}^n} 0 dx =0,$$
-where the second equality is due to the fact that the Lebesgue measure of the singleton $\{y:y=f(x)\}$ is zero for any $x$.
-Now for the measurability of $G$. It's a closed set. Why? Take $(x,y)$ not in $G$. Then $f(x)\ne y$. Therefore by continuity of $f$, there exist a neighborhood $I$ of $x$ and a neighborhood $J$ of $y$ such that for all $x \in I$, $f(x)\not\in J$. That is, $I\times J\subset G^c$.<|endoftext|>
-TITLE: Evaluating $\sum_{n=1}^{\infty} \frac{1}{n^2+1}$
-QUESTION [14 upvotes]: While I know that $$\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^{2}}{6}$$
-trying to evaluate this has left me stumped:
-$$\sum_{n=1}^{\infty} \frac{1}{n^2+1}$$
-I evaluated it through Wolfram Alpha; it gave me $\frac{1}{2}(\pi\coth(\pi)-1)$.
-What would be a good way to start evaluating this series?
-
-REPLY [3 votes]: The simplest way is using contour integration. With some experience you can do the entire computation completely in your head. I'll write down the solution doing all the computations as I write along. For a counterclockwise contour $C$ that doesn't contain any poles of a meromorphic function $g(z)$, we have for an analytic function $f(z)$:
-$$\oint_C \frac{f'(z)}{f(z)}g(z) dz = 2\pi i\sum_n g(\alpha_n) $$
-where the $\alpha_n$ are the zeros of $f(z)$ inside the contour $C$.
-If the contour does contain poles of $g(z)$, you just add the residues at these points to the summation.
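-(Before carrying out the computation, the target value is easy to check numerically; an illustrative sketch:)
-import math
-
-partial = sum(1.0 / (n * n + 1) for n in range(1, 10**6))
-closed = (math.pi / math.tanh(math.pi) - 1) / 2
-print(partial, closed)  # both are about 1.07667, matching to ~5 decimals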
To calculate the summation, we can take $f(z) = \sin(\pi z)$ as this function has its zeros at the integers. The summation can thus be obtained by taking $g(z) = \frac{1}{1+z^2}$ and integrating along a contour that encircles the positive real axis, such that the poles of $g(z)$ at $z = \pm i$ are left outside. -Consider the contour integral that starts just below the point $z = 1$, moves to a point just below $z = R$, crosses the real axis to just above $z = R$ and then moves just to the left of $z = 1$, crosses the real axis there and then we complete the contour. In the limit of $R\to\infty$ this will yield the desired summation. The integrand being an even function, we can also consider a similar contour that encircles the negative real axis, from just below $z=-R$ to just below $z = -1$, crossing the real axis to the right of $z = -1$ to just above the real axis and then we move just above the real axis to just above $z = -R$ and then we move back to the starting point. The sum of the two contour integrals is thus twice the desired summation. -Then consider cutting open the two contours near $z = \pm R$ and connecting the two parts of the two contours above the real axis and do the same below the real axis. In the limit of $R\to\infty$ the integrals along the parts of the contours joining the two parts can be made arbitrarily small as the modulus of the integrand goes to zero far enough for $z\to \infty$. -The resulting contour is now a clockwise contour that only contains the poles at $z = 0$ and at $z = \pm i$. The residue at the pole at $z = 0$ is $g(1) = 1$. The residue at $z = \pm i$ is: -$$\lim_{z\to\pm i} (z-\pm i)\frac{\pi\cot(\pi z)}{z^2+1} = -\frac{\pi}{2} \coth(\pi)$$ -So, the sum of the two contour integrals is equal to twice the summation we want to evaluate times $2\pi i$, but this is also equal to minus $2\pi i$ times $-\pi\coth(\pi)$ plus 1, therefore the desired summation is: -$$\sum_{n=1}^{\infty} \frac{1}{1+n^2} = \frac{1}{2}\left[\pi\coth(\pi) - 1\right]$$ -So, the only computations to get to the answer were just trivial applications of L'Hopital's rule, simple enough to do in your head.<|endoftext|> -TITLE: $a,b,c \in \mathbf{Z}$ such that $a^7+b^7+c^7=45$ -QUESTION [7 upvotes]: Do there exist integers $a,b,c$ such that $a^7+b^7+c^7=45$? -[I have an ugly argument for a negative answer, is it possible to give a "manual" solution?] - -REPLY [5 votes]: The seventh powers modulo $49$ are $0,\pm 1,\pm 18,\pm 19.$ There is no way to combine three of these to get $45$ modulo $49$.<|endoftext|> -TITLE: Intersection of field extensions -QUESTION [8 upvotes]: Let $F$ be a field and $K$ a field extension of $F$. Suppose $a,b\in K$ are algebraic over $F$ with degrees $m$ and $n$, where $m,n$ are relatively prime. Then $F(a) \cap F(b) = F$. -I see that the intersection on the LHS must contain $F$, but I don't see why $F$ contains the LHS. - -REPLY [9 votes]: Hint: $[F(a):F]=[F(a):F(a)\cap F(b)][F(a)\cap F(b):F]$ -$[F(b):F]=[F(a):F(a)\cap F(b)][F(a)\cap F(b):F]$ -Thus $[F(a)\cap F(b):F]$ divides $[F(a):F]$ and $[F(b):F]$ so $[F(a)\cap F(b):F]=1$ since $[F(a):F]$ and $[F(b):F]$ are relatively prime.<|endoftext|> -TITLE: The number of subspaces over a finite field -QUESTION [8 upvotes]: How to prove this conclusion - -If $V$ is a vector space of dimension $n$ and $F$ is a finite field with $q$ elements then number of subspaces of dim $k$ is - -REPLY [4 votes]: You want to find the number of subspaces of $\mathbb{F}_q^n$ of dimension $k$. 
-Consider the following problem: in how many ways can we choose the first $k$ columns of an $n \times n$ matrix over the finite field $\mathbb{F}_q$ so that these $k$ columns are linearly independent. The first column can be any nonzero vector, and so can be chosen in $q^n-1$ ways. The second column can be any vector in the vector space $\mathbb{F}_q^n$ except the $q$ elements in the subspace spanned by the first column (in order to have linear independence). Thus, the second column can be chosen in $q^n-q$ ways. Continuing in this manner, we see that the $k$th column can be any vector in $\mathbb{F}_q^n$ except for vectors in the $(k-1)$-dimensional subspace spanned by the previous columns. Thus, the $k$th column can be chosen in $q^n-q^{k-1}$ ways. This is the numerator in your expression, and it is the number of different bases $(b_1,\ldots,b_k)$ (where order matters) for $k$-dimensional subspaces of $\mathbb{F}_q^n$. -However, different bases can span the same subspace. The number of different bases for a given $k$-dimensional subspace can be obtained using the same approach mentioned in the previous paragraph and is equal to the denominator of your expression. Thus, the ratio is the number of distinct $k$-dimensional subspaces of $\mathbb{F}_q^n$.<|endoftext|> -TITLE: Variant of Nakayama's lemma -QUESTION [6 upvotes]: I am trying to prove that if $M$ is an $R$-module, with $R$ complete w.r.t. an ideal $\mathfrak{m}$, and $M$ is separated ($\cap_k \mathfrak{m}^k M=0$) and the images of $m_1,\dots,m_n$ generate $M/\mathfrak{m} M$, then $m_1,\dots,m_n$ generate $M$. - -This appears as Exercise 7.2 in Eisenbud's Commutative Algebra text. -I am pretty stuck and would appreciate some hints. - -REPLY [2 votes]: Hint. Set $N=\langle m_1,\dots,m_n\rangle$. We have $M=\mathfrak mM+N=\mathfrak m(\mathfrak mM+N)+N=\mathfrak m^2M+N$, and so on.<|endoftext|> -TITLE: How to check in GAP whether two groups are isomorphic -QUESTION [9 upvotes]: Let $G,H$ be two Groups. How to check with GAP whether they are isomorphic or not? -For example, GAP has IsNilpotentGroup to check whether the group $G$ is nilpotent. Is there a similar function named like AreIsomorphicGroups to check whether $G$ and $H$ are isomorphic or not? - -REPLY [16 votes]: There are two aspects here: how to search such information in GAP, and how to actually check the isomorphism of two groups. -First, the OP was quite close to guessing the name of the function, since most of the documented GAP functions follow these naming conventions. Functions that return true or false usually have the name of the form IsSomething (even in the cases when AreSomething could make sense). So one could expect that GAP has something named like IsIsomorph.... -Making such a guess, one could now use the help system to find all help entries starting with IsIsom. To do this, enter ?IsIsom in GAP (one could also try ??IsIsom to find all entries containing a substring IsIsom). This will produce a list of entries, and those which are relevant are -ANUPQ (not loaded): IsIsomorphicPGroup - -and -SONATA (not loaded): IsIsomorphicGroup - -(this also demonstrates a very useful feature of the help system - searching in the manuals of packages even when they are not loaded). -Now in more details about the actual isomorphism check. 
If one would open the documentation of SONATA's IsIsomorphicGroup, that would lead to the GAP function IsomorphismGroups which constructs and returns the actual isomorphism, if it is possible, or returns fail if the groups are non-isomorphic (see here). For example: -gap> G:=DihedralGroup(8); - -gap> H:=Group( (1,5)(2,3)(4,8)(6,7), (1,2)(3,8)(4,6)(5,7) ); -Group([ (1,5)(2,3)(4,8)(6,7), (1,2)(3,8)(4,6)(5,7) ]) -gap> IsomorphismGroups(H,G); -[ (1,5)(2,3)(4,8)(6,7), (1,2)(3,8)(4,6)(5,7) ] -> [ f1*f3, f1*f2 ] -gap> T:=Group([ (1,7,4,3)(2,8,6,5), (1,4)(2,6)(3,7)(5,8) ]); -Group([ (1,7,4,3)(2,8,6,5), (1,4)(2,6)(3,7)(5,8) ]) -gap> IsomorphismGroups(T,G); -fail - -One could inspect the source code of this function entering -PageSource(IsomorphismGroups); - -to see that before attempting to construct the isomorphism it does some checks of necessary conditions and will immediately return fail if e.g. the groups have different orders or different number of conjugacy classes. Also, for groups of the order that allows identification of the group in the GAP Small Groups Library (see IdGroup here) it checks that both groups have the same ID and returns fail if not. -This may be useful if one is only interested whether two groups are isomorphic, but not interested in the actual homomorphism: in this case G and H are isomorphic if and only if IdGroup(G)=IdGroup(H). In the example above, we have: -gap> IdGroup(G); -[ 8, 3 ] -gap> IdGroup(H); -[ 8, 3 ] -gap> IdGroup(T); -[ 4, 1 ] - -Identification using IdGroup is possible for all orders in the library except for the orders $512$ and $1536$ and except for the orders $p^5$, $p^6$ and $p^7$ above $2000$ (see here). Also note that ANUPQ package has an undocumented function IdStandardPresented512Group which provides identification for groups of order 512. -This approach is used by the SONATA package which provides a function IsIsomorphicGroup (see here) which returns true or false dependently on whether the two groups are isomorphic or not: -gap> LoadPackage("sonata"); -true -gap> IsIsomorphicGroup(G,H); -true -gap> IsIsomorphicGroup(G,T); -false - -To inspect the code of IsIsomorphicGroup, first call LoadPackage("sonata"); and then enter -PageSource(ApplicableMethod(IsIsomorphicGroup,[Group((1,2)),Group((2,3))])); - -Then one could also see that this checks some necessary conditions before doing actual work in the generic method which just checks that IsomorphismGroups( G, H ) <> fail. These necessary conditions could work fast for small groups but not work very efficiently for very large ones (e.g. comparing the number of elements of each order), so this should be used with care. -In case of a $p$-group, the ANUPQ package (requires compilation, not available in the GAP distribution for Windows) provides the function IsIsomorphicPGroup (see here) which is applicable only to groups of prime power order and may be more efficient in this case. See more details about the algorithm it uses here. -For very large groups, checking whether they are isomorphic or not may be quite time-consuming. One could try to calculate some other invariants to try to show that they are different, and/or try to find better representation of the group (e.g. convert a group given by generators and relators to a permutation group) so that GAP will operate with it faster. 
In case of any challenging examples, I suggest to post them in separate questions.<|endoftext|> -TITLE: Algebraic vector proof of Lagrange's identity -QUESTION [5 upvotes]: $$|v · w| -^2 + |v × w| -^2 = |v|^2 |w|^2 $$ -Edit -Despite doing it multiple times it seems I have made a meal of the expansion see Jean-Claude's answer for a great explanation -Using $v = (v_1,v_2,v_3)$ and $w = (w_1, w_2, w_3)$ i have expanded the LHS and gotten -$$(v_2)^2(w_3)^2 + (v_3)^2(w_2)^2 + (v_3)^2(w_1)^2 + (v_1)^2(w_3)^2 + (v_1)^2(w_2)^2 - + (v_2)^2(w_1)^2 +(v_1)^2(w_1)^2 + (v_2)^2(w_2)^2 + (v_3)^2(w_3)^2 -\mathbf{2(v_2 w_3 w_2 v_3 + v_3 w_1 v_1 w_3 + v_1 w_2 v_2 w_1)}$$ -Now this is RHS minus the bolded terms and I dont know how to get rid of the bolded terms. - -REPLY [2 votes]: As both sides of the equality are homogeneous, you may assume that $v$ and $w$ are unit vectors. The two sides are also invariant under rotation of coordinate frame. So, you may further assume that $v=(1,0,0)^T$. The equality then reduces to $w_1^2+(w_2^2+w_3^2)=1$, which is true because $w$ is a unit vector. -Edit. Alternatively, write $w=u+z$, where $u\parallel v$ and $z\perp v$. Then -$$ -|v\cdot w|^2+|v\times w|^2 -=|v\cdot u|^2+|v\times z|^2 -=|v|^2|u|^2+|v|^2|z|^2=|v|^2|w|^2. -$$<|endoftext|> -TITLE: Is Arens Square a Urysohn space? -QUESTION [8 upvotes]: Example 80 in Steen-Seebach1 is called Arens Square. It is defined in the book as follows: - -Let $S$ be the set of rational lattice points in the interior of the unit square except those whose $x$-coordinate is $\frac12$. - Define $X$ to be $S\cup\{(0,0)\}\cup\{(1,0)\} \cup \{(\frac12,\sqrt2r); r\in\mathbb Q, 0 -TITLE: Sum involving binomial coefficients. -QUESTION [7 upvotes]: Prove that $${^{404}\mathrm C_4}-{^4\mathrm C_1}\cdot{^{303}\mathrm C_4}+{^4\mathrm C_2}\cdot{^{202}\mathrm C_4}-{^4\mathrm C_3}\cdot{^{101}\mathrm C_4} =(101)^4$$ - -I tried writing $101=102-1$, but couldn't move forward. - -REPLY [6 votes]: The given expression here is the coefficient of $x^4$ in $P(x) := ((1+x)^{101}-1)^4$. You can see this by first expanding $(y-1)^4$, then putting $y = (1+x)^{101}$ and expanding it in terms of $x$. -But $P(x) =((1+101x+...)-1)^4 = (101x+...)^4 = 101^4\cdot x^4(1+...)^4$ where the dots represent higher powers of $x$. -Thus, the coefficient of $x^4$ in $P(x)$ is indeed $101^4$.<|endoftext|> -TITLE: Proof of the quotient map $\pi : X\to X/G$ is a covering map only if the action of $G$ is properly discontinuous. -QUESTION [5 upvotes]: The following is a theorem from Munkres' Topology and there's a part in the proof of the theorem that I don't understand. I've written the part in bold. $X/G$ is the orbit space obtained from $X$ by means of the equivalence relation $x \sim g(x)$ for all $x\in X$ and all $g \in G$, and $e$ is the identity element of $G$. -Theorem 81.5. Let $X$ be path connected and locally path connected; let $G$ be a subgroup of the group of homeomorphisms of $X$. The quotient map $\pi : X \to X/G$ is a covering map if and only if the action of $G$ is properly discontinuous. In this case, the covering map $\pi$ is regular and $G$ is its group of covering transformations. -Proof: We suppose now that $\pi$ is a covering map and show that the action of $G$ is properly discontinuous. Given $x\in X$, let $V$ be a neighborhood of $\pi(x)$ that is evenly covered by $\pi$. Partition $\pi^{-1}(V)$ into slices; let $U_\alpha$ be the slice containing $x$. 
Given $g\in G$, with $g\neq e$, the set $g(U_\alpha)$ must be disjoint from $U_\alpha$, for otherwise, two points of $U_\alpha$ would belong to the same orbit and the restriction of $\pi$ to $U_\alpha$ would not be injective. It follows that the action of $G$ is properly discontinuous. -I have trouble figuring out the bolded part. Letting $y\in U_\alpha \cap g(U_\alpha)$, I don't see how this leads to a problem in the case $g(y)=y$, since then we don't get two different points belonging to the same orbit. In the most extreme case, what problem do we have if $g$ is identity restricted to $U_\alpha$? I would greatly appreciate it if anyone could explain this line to me. - -REPLY [4 votes]: I actually just figured out the answer to this question. I post this as an answer for anyone curious to see. The key was in the next step of the proof, which is why I wasn't able to see it, I think Munkres just assumed it was quite obvious. -So clearly the only problem arises when there is a $g\in G$ that is not the identity, such that $g$ sends points in $U_\alpha$ identically. But this cannot be the case, because $G$ here becomes the group of covering transformations. Certainly any $g\in G$ is a covering transformation, for $\pi \circ g=\pi$ because the orbit of $g(x)$ equals the orbit of $x$. On the other hand, let $h$ be a covering transformation with $h(x_1)=x_2$, say. Because $\pi \circ h=\pi$, the points $x_1$ and $x_2$ map to the same point under $\pi$; therefore there is an element $g\in G$ such that $g(x_1)=x_2$. Now by the uniqueness of covering equivalences, $g=h$. -And by the same reasoning, if $g\in G$ sends a point of $U_\alpha$ to the same point, then it must be $e$. So we can ignore this case.<|endoftext|> -TITLE: Conditionally convergent power sums -QUESTION [12 upvotes]: I'm struggling on the following question: - -Let $S$ be a (possibly infinite) set of odd positive integers. Prove that - there exists a real sequence $(x_n)$ such that, for each positive integer $k$, the - series $\sum x_n^k$ converges iff $k \in S$. - -I'm completely lost on this one. How can we even form a sequence such that the series converges for $k = 3, 7$ but not $5$? The series are all conditionally convergent, perhaps some clever rearrangement of the alternating harmonic series could do it. - -REPLY [4 votes]: Lemma 1 For any finite set $S$ of odd positive integers there is a finite multiset $A$ or real numbers such that $\sum_{\alpha \in A} \alpha^k = 0$ iff $k\in S$. - -Proof For $S = \{k\}$ take $A_k = [1,1,(-2)^{1/k}]$. -Let $S = \{k_1,\dots, k_m\}$. Put $A = \big[\prod_{j=1}^m \alpha_j\mid \alpha_j\in A_{k_j} \text{ for } j=1,\dots,m\big]$. Then -$$ -\sum_{\alpha \in A} \alpha^k = \sum_{\alpha_1\in A_{k_1}}\dots \sum_{\alpha_m\in A_{k_m}}\prod_{j=1}^m \alpha_j^k = \prod_{j=1}^m \sum_{\alpha_j\in A_{k_j}}\alpha_j^k, -$$ -which is equal to zero iff one of multiplicands $\sum_{\alpha_j\in A_{k_j}}\alpha_j^k$ is zero, i.e. iff $k=k_j$ for some $j$. $\Box$ -Let first $S$ be finite, and $A$ be the multiset corresponding to $S$; $|A|= m$. -Take a positive sequence $a_n$ such that $\sum_{n=1}^\infty a^k_n = \infty$ for each $k\ge 1$ (e.g. $a_n = (\log n+1)^{-1}$) and define $x_{(n-1)m+1},\dots,x_{nm}$ to be equal to $\alpha a_n$, $\alpha\in A$, in some order. Obviously, this satisfies the requirement. 
- -Lemma 2 For any odd positive integer $m$ there is a sequence $\{x_n,n\ge 1\}$ or real numbers such that the sequence $\{z_1^m+\dots+z_n^m,n\ge 1\}$ is unbounded, while $\sup_{k\neq m,n\ge 1}|z_1^k+\dots+z_n^k|<\infty$. - -Proof Use the above construction with $S=\{1,2,\dots,m-1\}$, $a_n = n^{-1/m}$, to get $\{z_n,n\ge 1\}$. Then $\{z_1^k+\dots+z_n^k,n\ge 1\}$ is obviously bounded for $k m,n\ge 1}|z_1^k+\dots+z_n^k| \le C \sum_{n=1}^{\infty} n^{-(m+1)/m}<\infty, $$ -as required. $\Box$ -Now let $S$ be arbitrary set of odd positive integers, $T$ be its complement and $\{k_n,n\ge 1\}$ be a sequence of integers from $T$ such that each integer from $T$ appears in it infinitely often. -Using Lemma 2, for each $n\ge 1$, we can construct a sequence $\{y_j(n),j=1,\dots,m_n\}$ such that $\sum_{j=1}^{m_n} y_j(n)^{k_n}\ge 1$ and $\sup_{k\neq k_n,j=1,\dots,m_n}|y_1(n)^k+\dots+y_j(n)^k|<2^{-n}$. Setting \begin{align} -& x_i = y_i(1), &1\le i\le m_1,\\ & x_i = y_{i-m_1}(2), &m_1 -TITLE: Prove that the following are isomorphic as groups but not as rings -QUESTION [5 upvotes]: $\mathbb{Z}$ and $\mathbb{2Z}$ - -My solution: To prove they are isomorphic as groups, I take the mapping $f: \mathbb{Z} \rightarrow \mathbb{2Z}$ defined by $f(x)=2x$. I prove it's a homomorphism and surjective and I am done. -To prove they are not isomorphic as rings, I take the equation $x^2=1$. It has solutions $x=1,-1$ in $\mathbb{Z}$ but no solutions in $\mathbb{2Z}$ and are hence not isomorphic. - -$\mathbb{Z}[\sqrt2]$ and $\mathbb{Z}[\sqrt5]$ - -My solution: To prove they are isomorphic as groups, I take the mapping -$f: \mathbb{Z}[\sqrt2] \rightarrow \mathbb{Z}[\sqrt5]$ defined by $f(a+b\sqrt2)=a+b\sqrt5$. I prove it's a homomorphism and surjective and I am done. -Here, to prove they are not isomorphic as rings, I take the equation $x^2=2$, which has solutions $x=\sqrt2, -\sqrt2$ in $\mathbb{Z}[\sqrt2]$ but no solution in $\mathbb{Z}[\sqrt5]$. -Is this the correct approach to proving non-isomorphism as rings? That an equation has a solution in one ring but not in another? - -REPLY [4 votes]: By Definition, a ring homomorphism $f: R \rightarrow R'$ must preserve addition and multiplication and must map the multiplicative identity of $R$ to the multiplicative identity of $R'$. In your example, the ring $R'=2\mathbb{Z}$ does not have a multiplicative identity. So the two rings are not isomorphic (there is no isomorphism, or even a homomorphism from one ring to the other, for that matter). -To show that the rings $\mathbb{Z}[\sqrt{2}]$ and $\mathbb{Z}[\sqrt{5}]$ are not isomorphic, you can use your idea that $x^2=2$ has no solution in the latter ring. But you need to justify why this method works. Here is a proof. Suppose there is an isomorphism $f$ between these two rings that takes $a+b\sqrt{2}$ to $a'+b'\sqrt{5}$. Since $f$ must take the identity to the identity, $f$ takes 1 to 1' (here, 1' is the identity in the second ring, and actually equals the integer 1; the primes are just to make things clearer). Since $f$ preserves sums, $f$ must take $1+1$ to $1'+1'$. Now, $(0+1 \sqrt{2})(0+1\sqrt{2}) = 1+1$ in the first ring. We can apply $f$ to both sides. Since $f$ preserves sums and products, we get the equation $(x'+y'\sqrt{5})^2 = 2$, where $x'+y'\sqrt{5}$ is the image of $(0+1\sqrt{2})$ under $f$. This equation has no solutions, and so we get a contradiction. 
Thus, there does not exist an isomorphism $f$ from the first ring to the second.<|endoftext|> -TITLE: Can we permute the coefficients of a polynomial so that it has NO real roots? -QUESTION [21 upvotes]: Let $P(x)=a_{2n}x^{2n}+a_{2n-1}x^{2n-1}+\ldots+a_{0}$ be an even degree polynomial with positive coefficients. -Is it possible to permute the coefficients of $P(x)$ so that the resulting polynomial will have NO real roots. - -REPLY [35 votes]: Yes: put the $n+1$ largest coefficients on the even powers of $x$, and the $n$ smallest coefficients on the odd powers of $x$. -Clearly the polynomial will have no nonnegative roots regardless of the permutation. Changing $x$ to $-x$, it suffices to show: if $\min\{a_{2k}\} \ge \max\{a_{2k+1}\}$, then when $x>0$,$$a_{2n}x^{2n} - a_{2n-1}x^{2n-1} + \cdots + a_2x^2 -a_1x+a_0$$is always positive. - -If $x\ge1$, this follows from -$$ -(a_{2n}x^{2n} - a_{2n-1}x^{2n-1}) + \cdots + (a_2x^2 -a_1x) +a_0 \ge 0 + \cdots + 0 + a_0 > 0. -$$ -If $0 0. -\end{multline*}<|endoftext|> -TITLE: Prove that a symmetric matrix with a positive diagonal entry has at least one positive eigenvalue -QUESTION [12 upvotes]: Let $A$ be a symmetric martix $n \times n$ such that there is some $i$ such that $a_{ii}>0$. -Prove that $A$ has a positive eigenvalue. - -I have a hint which I don't how to use/check: "Check that $a_{ii}=e^t_i*A*e_i$. -Thanks, -Alan - -REPLY [13 votes]: By contradiction assume that all the eigenvalues $\lambda_1,\ldots,\lambda_n$ of $A$ are non positive and by spectral theorem let $(v_1,\ldots,v_n)$ an orthonormal basis of eigenvectors then using the hint let $e_i=\alpha_1v_1+\cdots+\alpha_nv_n$ and then -$$a_{ii}=e_i^tAe_i=\sum_{j=1}^n\lambda_j\alpha_j^2\le0$$ -which is a contradiction.<|endoftext|> -TITLE: Isogenous elliptic curves over finite fields have the same number of points -QUESTION [5 upvotes]: I'm stuck in this question, it is the first part of exercise 5.4 from Silverman - The arithmetic of elliptic curves. -Let $C,D$ be two isogenous elliptic curves over a finite field $\mathbb{F}_q$. Then -$$\#C(\mathbb{F}_q)=\#D(\mathbb{F}_q)$$ -Any idea would be appreciated. -I also wonder if the following is true. Suppose $C,D$ are 2-isogenous curves over $\mathbb{Q}$, and for any $p$ prime that does not divide the discriminant, the reduction of these curves modulo $p$ are such that 4 divides their orders. Is it true that the reduced curves are also 2-isogenous? - -REPLY [7 votes]: In the spirit of chapter 5 of Silverman: use that $f:C\to D$ to be an isogeny defined over $\mathbb F_q$ means that $f \circ \phi_C = \phi_D \circ f$, where $\phi_C$ and $\phi_D$ are the Frobenius morphisms on $C$ and $D$ respectively. -Then $$f \circ ( 1_C - \phi_C) = (1_D - \phi_D) \circ f.$$ -Take the degree of both sides, and use the fact that $\deg u\circ v = \deg u \cdot \deg v$, and $\deg u\not= 0$ if $u$ is an isogeny. Now, use that $E(\mathbb F_q) = \ker (1 -\phi)$, for any elliptic curve $E$ over $\mathbb F_q$, and that $1-\phi$ is separable. -To answer your second question - I think that you are asking whether the isogeny $f$ over the rationals extends to one (call it $f$ again) over the open set $S$ of $\mathop{\rm Spec} \mathbb Z$ where the two curves have good reduction? 
-According to lemma 6.2.1 of S's "Advanced Topics in the Arithmetic of Elliptic Curves," a rational map from a smooth scheme to a proper scheme over a dedekind domain only fails to be defined on a set of at worst (at least) codimension 2, "so $f$ extends," and does so uniquely, as implicit in the definitions is 'separated.' -For the extended $f$ to be a group homomorphism one needs that $f$ commute with addition; but that's a Zariski closed condition which holds generically over $S$, so it must hold identically over $S$ (the separated condition). The degree of $f$ doesn't change - use the above to extend the dual isogeny $\check f$, and the relation $f \circ \check f = [m]$, where $m$ is the degree of $f$. -I hope I haven't screwed this up! Even if I haven't, I am sure there are better arguments.<|endoftext|> -TITLE: Power series solution for ODE -QUESTION [6 upvotes]: The ODE I have is $$y'(x)+e^{y(x)}+\frac{e^x-e^{-x}}{4}=0, \hspace{0.2cm} y(0)=0$$ -I want to determine the first five terms (coefficients $a_0,\ldots, a_5$) of the power series solution $$y(x)=\sum_{k=0}^{\infty} a_kx^k$$ So far, I know that $$y'(x)=\sum_{k=1}^{\infty} a_kkx^{k-1}$$ -Now I plug these back into the equation and get: -$$\sum_{k=1}^{\infty} a_kkx^{k-1} + e^{\sum_{k=0}^{\infty} a_kx^k} + \frac{e^x-e^{-x}}{4}=0$$. Now I'm not sure how to continue with this. Please help. - -REPLY [2 votes]: The answers provided are excellent, but I'll offer what I think is the easiest solution: Just take derivatives of the equation, plug in $0$ and get as many $y^{(n)}(0)$'s as you want, then finally construct your taylor series: -$$y=\sum_{n=0}^\infty \frac{y^{(n)}(0)}{n!}x^n.$$ -$$y'+e^{y}+\frac{1}{2}\sinh(x)=0, \quad\quad y(0)=0$$ -Plugging in $y(0)=0$, we get $y'(0)+1=0$, thus $\boxed{y'(0)=-1}$. -Now differentiate the original equation as many times as needed to get the number of coefficients that you want: -$$y''+y'e^y+\frac{1}{2}\cosh(x)=0$$ -$$y'''+(y')^2e^y+y''e^y+\frac{1}{2}\sinh(x)=0$$ -etc... -Plugging in $x=0$ into the above equations gives: -$$y''(0)+y'(0)e^{y(0)}+\frac{1}{2}\cosh(0)=0 \quad \Rightarrow \quad \boxed{y''(0)=\frac{1}{2}}$$ -$$y'''(0)+(y'(0))^2e^y+y''(0)e^{y(0)}+\frac{1}{2}\sinh(0)=0 \quad \Rightarrow \quad \boxed{y'''(0)=-\frac{3}{2}}$$ -Thus: -$$ -\begin{aligned} -y&=y(0)+y'(0)x+\frac{1}{2!}y''(0)x^2+\frac{1}{3!}y'''(0)x^3+\cdots \\ -&=-x+\frac{1}{2}\frac{1}{2!}x^2-\frac{3}{2}\frac{1}{3!}x^3+\cdots \\ -&=-x+\frac{1}{4}x^2-\frac{1}{4}x^3+\cdots -\end{aligned} -$$ -This method is nice because you don't have to work out recursion relations, etc.<|endoftext|> -TITLE: Random Walk of a drunk man -QUESTION [5 upvotes]: Problem Statement: -From where he stands, one step toward the cliff would send the drunken man over the edge. He takes random steps, either toward or away from the cliff. At any step his probability of taking a step away is 2/3, of a step toward the cliff 1/3. What is his chance of escaping the cliff? -My take: -Say the probability that he dies from where he stands right now is p. -Then, -he could comfortably make one step left and end his life with probability 1/3 -Or he could take one step away and two step towards and boom...take two steps away and three steps toward...so on and so forth -Resulting in p= 1/3 + 2/3 * (1/3)^2 + (2/3)^2 * (1/3)^3 +.... -Summing this infinite sequence gives me probability of dying as 3/7 (around 43%). I was rather puzzled when i learnt that the correct probability is 1/2. 
Cant figure out what are the other 7% ways for my drunken man to die which I missed above? - -REPLY [4 votes]: Here is a proof that does not rely on solving recurrence equations. -Let $r$ denote the probability of hitting the cliff. Let $0,1,\dots$ denote the distance from the cliff. He starts at $1$. -If in the first step goes to $2$, then will hit the cliff if and only if will ever go back to $1$, and then will ever go to $0$. But probability of going back from $2$ to $1$ is the same as hitting cliff starting from $1$, that is $r$. Summarizing: -$$r= \frac 13 + \frac 23 r\times r,$$ -or (writing $r=\frac 13 r + \frac23 r$): -$$ \frac 13 (r-1) = \frac 23 (r^2 -r).$$ -That is, if $r\ne 1$, we have -$$ \frac 13 = \frac 23 r.$$ -Or $r=\frac 12$. -It remains to show that $r\ne 1$. I'l allow myself to be sloppy and lazy, and will rely on the law of large numbers, which tells as that the position of this walk at time $n$ is of order $(\frac 23 - \frac 13)n$. In particular, the position tends to $+\infty$. If $r=1$, then by iterating, the probability of ever getting to $-1$ is also $1$, and the same for $-2$, etc. In particular, the path is unbounded from below. This contradicts the conclusion of the law of large numbers.<|endoftext|> -TITLE: Prove that the Gaussian Integers are an integral domain -QUESTION [5 upvotes]: We have the following Theorem: A non-zero commutative ring is an integral domain if and only if for all $a$,$b$ $\neq 0$ $\implies ab \neq 0$. -Now, we need to prove that the Gaussian integers form an integral domain. -Proof: Let $\Bbb Z[i]$ denote the Gaussian Integers, which is a commutative ring. Take $z,w \in \Bbb Z[i]$ s.t: $z,w \neq 0$ and $z = a + ib$, $w = c + id$. -Then, $zw = (ac - bd) + (ad + bc)i \in \Bbb Z[i]$. Since the elements of $\Bbb Z[i]$ are non-zero $\implies zw \neq 0$. QED. -I am wondering if this is correct? Thanks. - -REPLY [2 votes]: Alternative; I'd use the fact that norm (or absolute value) of Gaussian numbers, $\mathbb Z[i]$, is multiplicative. -$$0 \not =a,b \in \mathbb Z[i] \implies |a|^2 \not = 0, |b|^2 \not=0$$ -$$ |a|^2|b|^2 = |ab|^2 \not= 0 \implies ab \not=0$$<|endoftext|> -TITLE: Roll two dice. What is the probability that one die shows exactly two more than the other die? -QUESTION [15 upvotes]: Two fair six-sided dice are rolled. What is the probability that one die shows exactly two more than the other die (for example, rolling a $1$ and $3$, or rolling a $6$ and a $4$)? - -I know how to calculate the probabilities of each event by itself, but I do not know how to proceed with this problem. - -REPLY [2 votes]: If the first die is 1, the other can only be 3, probability = 1/6 -If the first die is 2, the other can only be 4, probability = 1/6 -If the first die is 5, the other can only be 3, probability = 1/6 -If the first die is 6, the other can only be 4, probability = 1/6 -If the first die is 3, the other can only be 1 or 5, probability = 2/6 -If the first die is 4, the other can only be 2 or 6, probability = 2/6 -Total probability is (1+1+1+1+2+2)/(6+6+6+6+6+6) = 8/36 = 2/9<|endoftext|> -TITLE: Closed-form for $\int_0^\infty {\frac{{\ln \left( {1 + x} \right)}}{{1 + ax}}{e^{ - bx}}{x^n}{\rm{d}}x} $ -QUESTION [6 upvotes]: I am trying to find the integration of the following -$$\int_0^\infty {\frac{{\ln \left( {1 + x} \right)}}{{1 + ax}}{e^{ - bx}}{x^n}{\rm{d}}x} $$ -Here $a>0, b>0$, and $n$ is an integer. 
-I think if we get the Meijer-G representation of -$$\frac{{\ln \left( {1 + x} \right)}}{{1 + ax}}$$ -we can use Laplace transform to get the closed-form expression. -But I don't know how to express the above function as Meijer-G function. -Thanks. - -REPLY [2 votes]: Disclaimer: Not a full solution, but I've gotten as far as a system of differential equations. -First, it's enough to consider an easier integral: -$$I(a,b)=I_0(a,b)=\int_0^\infty {\frac{{\ln \left( {1 + x} \right)}}{{1 + ax}}{e^{ - bx}}{\rm{d}}x}$$ -It's obvious, that for $n \in \mathbb{N}$: -$$I_n(a,b)=\int_0^\infty {\frac{{\ln \left( {1 + x} \right)}}{{1 + ax}}{e^{ - bx}}x^n{\rm{d}}x}=(-1)^n \frac{\partial^n I_0}{\partial b^n}$$ - -The first option to convert the problem into a partial differential equation is to write $b=ac$ and: -$$I(a,ac)=J(a,c)=e^c \int_0^\infty {\frac{{\ln \left( {1 + x} \right)}}{{1 + ax}}{e^{ - c(ax+1)}}{\rm{d}}x}$$ -So if we take a $c$ derivative, we obtain a much more simple integral: -$$\frac{\partial J}{\partial c}=J-e^c \int_0^\infty \ln (1 + x) e^{-c(ax+1)}\mathrm{d} x=J-\int_0^\infty \ln (1 + x) e^{-cax}\mathrm{d} x$$ -The latter integral has a well known solution (which can be obtained integrating by parts and using the definition of exponential integral): -$$\int_0^\infty \ln (1 + x) e^{-cax}\mathrm{d} x=I(0,ac)= \\ =\frac{1}{ac}\int_0^\infty \frac{ e^{-acx}}{1+x}\mathrm{d} x=-\frac{e^{ac}}{ac} \text{Ei}(-ac)$$ -Here Ei is the exponential integral. -Finally, we obtain a differential equation: - -$$\frac{\partial J(a,c)}{\partial c}-J(a,c)-\frac{e^{ac}}{ac} \text{Ei}(-ac)=0$$ - - -To obtain another PDE we change the variable $t=ax$: -$$I(a,b)=\frac{1}{a} \int_0^\infty {\frac{{\ln \left( 1+\frac{t}{a} \right)}}{{1 + t}}{e^{ - \frac{b}{a}t}}{\rm{d}}t}$$ -Now we take the $a$ derivative: -$$a \frac{\partial I(a,b)}{\partial a}+I(a,b)=\frac{b}{a^2} \int_0^\infty {\frac{{\ln \left( 1+\frac{t}{a} \right)}}{{1 + t}}{t~e^{ - \frac{b}{a}t}}{\rm{d}}t}-\frac{1}{a^2} \int_0^\infty e^{-\frac{b}{a} t} \frac{t dt}{(1+t)(1+\frac{t}{a})}$$ -We can see by direct comparison that: -$$\frac{b}{a^2} \int_0^\infty {\frac{{\ln \left( 1+\frac{t}{a} \right)}}{{1 + t}}{t~e^{ - \frac{b}{a}t}}{\rm{d}}t}=-b \frac{\partial I(a,b)}{\partial b}$$ -As for the second part of the derivative we can use partial fractions to compute the integral: -$$\frac{1}{a}=\alpha,~~~~\frac{b}{a}=\beta$$ -$$\int_0^\infty e^{-\beta t} \frac{t dt}{(1+t)(1+\alpha t)}=\frac{1}{\alpha-1} \left(\int_0^\infty e^{-\beta t} \frac{dt}{1+t}-\int_0^\infty e^{-\beta t} \frac{dt}{1+\alpha t} \right)$$ -But we already know how to compute the two latter integrals (see above), so: -$$\int_0^\infty e^{-\beta t} \frac{dt}{1+t}=-e^{\beta} \text{Ei}(-\beta)$$ -$$\int_0^\infty e^{-\beta t} \frac{dt}{1+\alpha t}=-\frac{e^{\beta / \alpha}}{\alpha} \text{Ei}(-\beta / \alpha)$$ - -Finally we obtain the system of partial differential equations, which (I think) completely define the function $I(a,b)$: - -$$a \frac{\partial I}{\partial a}+b \frac{\partial I}{\partial b}+I+\frac{1}{a(a-1)} \left( e^{b / a} \text{Ei}(-b / a)-ae^b \text{Ei}(-b) \right)=0$$ -$$a \frac{\partial I}{\partial b}-I-\frac{e^{b}}{b} \text{Ei}(-b)=0$$ - - - -$$I(0,b)=-\frac{1}{b} e^b \text{Ei}(-b)$$ -$$I(a,0) \to \infty$$ -$$I(a,\infty) = 0$$ -$$I(\infty,b) = 0$$ - - -From the above two equations we can also obtain another with only $a$ derivative: - -$$a \frac{\partial I}{\partial a}+\left(1+\frac{b}{a} \right) I+\frac{1}{a(a-1)} \left( e^{b / a} \text{Ei}(-b / a)-e^b \text{Ei}(-b) \right)=0$$ - - 
-P.S. I would really appreciate if someone checks my post for mistakes. I'll be checking myself, but just to be sure.<|endoftext|> -TITLE: Why do extraneous solutions exist? -QUESTION [13 upvotes]: I am currently in a Pre Calculus class at my High School. I have come across the concept of extraneous solutions, particularly when solving absolute value equations, radical equations, and logarithmic equations. My question is, why do these solutions exist? -My teacher never explained this, which is understandable given that I am in a High School math class, and there isn't much time for the teacher to go into the actual derivations of everything. I am wondering because I plan to major in Mathematics, and having a conceptual understanding of this is important to me. -If anyone could explain the reason extraneous solutions exist for the three examples I noted, it would be of great help to me. - -REPLY [2 votes]: The reason extraneous solutions exist is because some operations produce 'extra' answers, and sometimes, these operations are a part of the path to solving the problem. -When we get these 'extra' answers, they usually don't work when we try to plug them back into the original problem. -Squaring is a common operation that produces multiple values. As Mariano Suárez-Alvarez♦ notes, -$$x=-1\tag1$$ -$$\implies x^2=(-1)^2=1$$ -$$\implies x^2-1=0$$ -$$\implies x=\pm1$$ -But obviously, we see $x\ne+1$, as you can see by $(1)$. -Reading your comments, I see you present an example: -$$a^n=b^n\implies a=b$$ -This is only partially true, for the full algebraic solution to this problem is given as -$$ae^{\frac{2}ni\pi x}=b$$ -Where $x=0,1,2,3,\dots$ -For $x=0$, this breaks down into $ae^0=a=b$, but that is not the full picture. -So if we have something like $x=a\implies x^n=a^n$, the latter equality produces many different results, whereas the original equality has only one result. -Extraneous solutions for general problems like $x+a=\sqrt{x+b}$ are actually quite interesting, but further understanding of things like branches and complex numbers must be understood to fully grasp the meaning of those extraneous solutions. (If you really want to, solve some of these square root problems using the quadratic formula, and note which of the two solutions the quadratic formula game came out right ($+$ or $-$?)) -Lastly, extraneous solutions when dealing with logarithms are simply due to your lack of understanding of how complex numbers play into logarithms. When you must use the definition that a logarithm is only defined for positive real input, then you will get extraneous solutions for the very reason that you have that parameter in place. Once you learn how to deal with complex logarithms, I don't believe you can have extraneous solutions in logarithms. -I can't remember doing any extraneous solutions for absolute values, but I'm sure there is an explanation for those.<|endoftext|> -TITLE: Left-continuity of a Lévy filtration -QUESTION [5 upvotes]: The natural filtration $(\mathcal{F}_t^X)_{t\geq 0}$ of a Lévy process $X$ is right-continuous, but what about left-continuity? A Lévy process is quasi left-continuous at time $t$ which says that -\begin{align*} -\lim_{s\nearrow t}X_s = X_t, \quad w.p.\; 1, -\end{align*} -so it is almost surely left-continuous at time $t$. It is known that a left-continuous process has a left-continuous natural filtration, so what is the difference between $\mathcal{F}_t$ and $\mathcal{F_{t^-}}$ for Lévy processes? 
-It seems to me that events like $\{X_t\in A\}$, where $A$ is a Borel set, belong to $\mathcal{F}_t$ but not $\mathcal{F_{t^-}}$, but at the same time, -\begin{align*} -\mathbb{E}(X_t \mid \mathcal{F}_t) = X_t = X_{t^-} = \mathbb{E}(X_t \mid \mathcal{F}_{t^{-}}), -\end{align*} -holds almost surely. -Is essentially $\mathcal{F}_t=\mathcal{F}_{t^-}+\text{null sets}$? Can we for a fixed $t_0$ find a modification $X^{(t_0)}$ of $X$, which is left-continuous at $t_0$, and whose natural filtration therefore satisfies $\mathcal{F}_{t_0^{-}}=\mathcal{F}_{t_0}$? - -REPLY [4 votes]: Is essentially $\mathcal{F}_t = \mathcal{F}_{t-}+$ null sets? - -Yes. Roughly speaking, $\mathcal{F}_t$ contains all the information in $\mathcal{F}_{t-}$ plus the information for which $\omega$ a jump occurs at time $t$ (and also the jump height). -As you already noted, we have $X_t = X_{t-}$ almost surely. Since $X_{t-}$ is $\mathcal{F}_{t-}$-measurable, we find that $X_t$ is measurable with respect to the completion of $\mathcal{F}_{t-}$. In particular, if we denote by $\mathcal{G}_t$ the completed canonical filtration, then $\mathcal{G}_{t-} = \mathcal{G}_t$. - -Can we for fixed $t_0$ find a modification $X^{(t_0)}$ of $X$ which is left-continuous at $t_0$ [...]? - -Obviously, we can simply remove the jump at this fixed time $t_0$, i.e. define -$$Y_t := \begin{cases} X_t, & t < t_0, \\ X_t-\Delta X_{t_0}, & t \geq t_0. \end{cases}$$ -Then $t \mapsto Y_t$ is (left-)continuous at $t= t_0$ and, since $\{\Delta X_{t_0} \neq 0\}$ is a null set, we have $\mathbb{P}(X_t = Y_t)=1$ for all $t \geq 0$.<|endoftext|> -TITLE: weird combinatorics, combinations question from cambridge challenge exercise -QUESTION [5 upvotes]: A child has $10$ identical blocks, each of which is to be painted with one of $4$ colours. -how many different ways can the $10$ blocks be painted? -Answer is $286$ but I have no idea how they got it. -From cambridge year ten book $2$ - -REPLY [12 votes]: A common way to explain it is known as "stars and bars". -I shall illustrate it by "dipping" identical balls (boxes) into distinct bins (of colours) numbered $1-4$, and depict the results obtained -One result could be $\;\;\bullet\bullet\bullet|\bullet\bullet\bullet\bullet|\bullet -|\bullet\bullet\;\to\;\; 3-4-1-2$ of each of the colours. -Make two notes: only $3$ dividers are needed to depict $4$ bins, and some bins could remain empty, e.g. $|\bullet\bullet\bullet\bullet\bullet\bullet\bullet|\bullet\bullet\bullet|$ depicting $\;\;0-7-3-0$ -So if there are $n$ balls and $k$ bins ($k-1$ dividers), the only choice you have is to place the dividers among the lot, thus -$\dbinom{n+k-1}{k-1}$ which works out to $\dbinom{10+3}{3} = 286$ for your particular example. -You can look here if you need a a more technical explanation<|endoftext|> -TITLE: group theory for non-mathematicians -QUESTION [19 upvotes]: A very smart non-mathematician friend is looking to learn about groups, and I was wondering if people might have suggestions (this is NOT a duplicate of this question, since a textbook is not what I am looking for, at least not at first). - -REPLY [2 votes]: I recommend Carter's Visual Group Theory. It makes heavy use of pictures and diagrams (hence the name) and I found it very clear.<|endoftext|> -TITLE: Anybody knows a proof of Uniqueness of the Reduced Echelon Form Theorem? -QUESTION [6 upvotes]: The book has no proof showing each matrix is row equivalent to one and only one reduced echelon matrix. Does anybody know how to prove this theorem? 
- -"Theorem Uniqueness of the Reduced Echelon Form - Each matrix is row equivalent to one and only one reduced echelon matrix" - Source: Linear Algebra and Its Applications, David, C. Lay. - -[EDIT I think the following can be a proof that each echelon matrix is reduced to only one reduced echelon matrix, but how to show a matrix that is not in echelon form is reduced to only one reduced echelon matrix?] -In a $m×n$ matrix in echelon form of a linear system for some positive integers m, n, let the leading entries $(■)$ have any nonzero value, and the starred entries $(☆)$ have any value including zero. -Leading entries $■$s in $R_1$ and $R_2$ in an echelon matrix can become leading 1 in a reduced echelon matrix through dividing them by $■$, and the entry ☆ in $R_1$ above $■$ in $R_2$ can be $0$ by subtracting a multiple of $■$. -So $R_1$ and $R_2$ in a matrix in echelon form becomes as follows: -$\begin{array}{rcl} -R_1\space & [■ ☆\cdots ☆☆☆☆]\\ -R_2\space & [0 ■\cdots ☆☆☆☆]\end{array} \qquad ~ -\begin{array}{rcl} R_1\space & [1 0\cdots ☆☆☆☆]\\R_2 &[0 1\cdots ☆☆☆☆] - \end{array}$ -For all integers k with $2≤ki_j$ and -$$ -u_p=\alpha_1u_{i_1}+\alpha_2u_{i_2}+\dots+\alpha_{i_j}u_{i_j} -$$ -Therefore -$$ -a_p=\alpha_1a_{i_1}+\alpha_2a_{i_2}+\dots+\alpha_{i_j}a_{i_j} -$$ -and this linear combination is unique, because the set of columns -$$ -\{a_{i_1},a_{i_2},\dots,a_{i_j}\} -$$ -is linearly independent. -Thus the position of the pivot columns in $U$ is uniquely determined by the columns of $A$ and the coefficients on the non-pivot columns are likewise determined by the linear relations between the columns of $A$.<|endoftext|> -TITLE: Two atlases on a manifold $M$ are equivalent if and only if they determine the same set of smooth functions $f:M\rightarrow\mathbb{R}$ -QUESTION [6 upvotes]: Suppose $\{\phi_\alpha\}_{\alpha\in\mathcal{A}}$ and $\{\phi_\beta\}_{\beta\in\mathcal{B}}$ are two smooth atlases on a topological manifold $M$. My definition of two such atlases being equivalent is that their union $\{\phi_\alpha\}_{\alpha\in\mathcal{A}\cup\mathcal{B}}$ is also a smooth atlas, that is, the transition maps between the charts of different atlases are smooth. -I have shown that if two atlases are equivalent, then they determine the same set of smooth functions $f:M\rightarrow\mathbb{R}$ - i.e. $f$ is smooth with respect to one chart if and only if it is smooth with respect to the other. But I do not know how to prove the converse statement. I would like to do something along the lines of $(f\circ\phi_\beta^{-1})^{-1}\circ(f\circ\phi_\alpha^{-1})=\phi_\beta\circ\phi_\alpha^{-1}$ and conclude from that that the RHS is smooth since both bracketed functions on the LHS are, but I know I can't take the inverse on the LHS like this,since there is no guarantee $f$ is invertible. Any help would be appreciated. - -REPLY [3 votes]: Suppose $\varphi_\beta\circ\varphi_\alpha^{-1}$ isn't smooth. Then one of $x_i\circ\varphi_\beta$ isn't smooth in $(\varphi_\alpha)_{\alpha\in A}$, while it's clearly smooth in $(\varphi_\beta)_{\beta\in B}$. Choose a sufficiently small compact neighborhood $K$ of a point in which it isn't smooth. Then $x_i\circ\varphi_\beta|_K$ can be smoothly extended to the whole $M$ and is the function you want.<|endoftext|> -TITLE: Noetherian rings have only finitely many minimal prime ideals. -QUESTION [8 upvotes]: We say that $p$ is minimal prime if It does not contain any other prime. -Assume that $A$ is Noetherian ring -Question: $A$ has only finitely many minimal primes. 
-any suggestions please - -REPLY [5 votes]: Here is an approach you might try: -A topological space $X$ is called Noetherian if it satisfies the ascending chain condition on open sets. Equivalently, every collection of open sets has a maximal element. A topological space $Y$ is said to be irreducible if it is not the union of two proper closed sets. -If $R$ is a ring with identity, then the set $\textrm{Spec } R$ of prime ideals of $R$ is a topological space whose closed sets are of the form $$V(I) := \{ \mathfrak p \in \textrm{Spec } R : I \subseteq \mathfrak p \}$$ for $I$ an ideal of $R$. -1 . A maximal irreducible subset of a topological space $X$ is called an irreducible component. They always exist by Zorn's lemma. Show that a Noetherian topological space has only finitely many irreducible components. -2 . Show that the mapping $I \mapsto V(I)$ gives an order reversing bijection between radical ideals of $R$ and closed sets of $\textrm{Spec } R$. Show that under this bijection, prime ideals are in bijective correspondence with irreducible closed sets. In particular, minimal prime ideals correspond to maximal irreducible sets. -3 . Show that if $R$ is a Noetherian ring, then $\textrm{Spec } R$ is a Noetherian topological space.<|endoftext|> -TITLE: Prove that $\sigma(n^2)=\sum_{d\mid n} 2^{\omega(d)}$ -QUESTION [5 upvotes]: Let $\omega(n)$ denote the number of distinct prime divisors of $n>1$, with $\omega(1)=0$. -(a) Show that $2^{\omega(n)}$ is a multiplicative function. -(b) Prove that $$\sigma(n^2)=\sum_{d\mid n} 2^{\omega(d)}.$$ -I have done the part (a) and I am stuck by (b). -First, I set $d=p_1^{e_1}\cdots p_k^{e_{k}}$ be a factor of $n$. Then I don't know what's the next step. - -REPLY [5 votes]: Suppose we seek to show that -$$\tau(n^2) = -\sum_{d|n} 2^{\omega(d)}.$$ -This can be done using Dirichlet series and Euler products. -We have for the RHS and -$$\sum_{n\ge 1} \frac{1}{n^s} 2^{\omega(n)}$$ -the Euler product -$$\prod_p -\left(1 + \frac{2}{p^s} + \frac{2}{p^{2s}} + \frac{2}{p^{3s}} -+\cdots\right).$$ -which is -$$\prod_p \left(-1 + 2\frac{1}{1-1/p^s}\right) -= \prod_p \frac{-1+1/p^s+2}{1-1/p^s} -\\ = \prod_p \frac{1+1/p^s}{1-1/p^s} -= \prod_p \frac{1-1/p^{2s}}{(1-1/p^s)^2} -= \frac{\zeta(s)^2}{\zeta(2s)}.$$ -Therefore -$$\sum_{n\ge 1} \frac{1}{n^s} \sum_{d|n} 2^{\omega(d)} -= \frac{\zeta(s)^3}{\zeta(2s)}.$$ -On the other hand we have -$$\sum_{n\ge 1} \frac{1}{n^s} \tau(n^2) -\\= \prod_p -\left(1 + (2+1) \frac{1}{p^s} -+ (4+1) \frac{1}{p^{2s}} -+ (6+1) \frac{1}{p^{3s}} -+ (8+1) \frac{1}{p^{4s}} -+ \cdots\right).$$ -This is -$$\prod_p \left(1+\frac{1/p^s}{1-1/p^s} -+ \sum_{k\ge 1} \frac{2k}{p^{ks}} -\right) -\\ = \prod_p \left(1+\frac{1/p^s}{1-1/p^s} -+ 2 \frac{1/p^s}{(1-1/p^s)^2} -\right).$$ -To aid in simplification we put $z=1/p^s$ to get -for the inner term -$$1 + \frac{z}{1-z} -+ \frac{2z}{(1-z)^2}$$ -This simplifies to -$$\frac{1+z}{(1-z)^2}.$$ -On the other hand -$$\frac{\zeta(s)^3}{\zeta(2s)} -= \prod_p \frac{1-z^2}{(1-z)^3} -= \prod_p \frac{1+z}{(1-z)^2}.$$ -We have equality, QED.<|endoftext|> -TITLE: Abundence of smooth curves on a normal variety? -QUESTION [5 upvotes]: If $X$ is a normal variety, and $p \in X$, is it true that there is a curve $C \subset X$ with $p \in C$ a smooth point (on C)? -It is obviously false if normality is dropped - take $X$ to be a singular curve. I have no specific reason beyond this to request normality, though the examples I am familiar with pass this test. 
I somewhat doubt that this could be true, so I am asking for a counter example. -(The vague motivation is that I am thinking about the valuative criterion for separatedness recently, and would like to understand the intuition that there are no curves $C$ with double points on a separated scheme - i.e with two centers for the same valuation on $C$. And I like DVRs, though I guess one can just take an arbitrary curve passing through the point and take its normalization to get a discrete valuation on the curves function field with some prescribed center. I am still curious about the geometric question anyway.) -The other side of this question: -Is there a (normal) variety $X$ with a singularity so "bad" that all curves passing through that point acquire that singularity? - -REPLY [2 votes]: Here's an example to show the answer to the first question is "no". -Let $S$ be a surface which is factorial but not smooth. These singularities do exist, but they are rather special --- see the answer of Victor Protsak to this MO question for some details. By definition of "factorial", every Weil divisor, in particular every curve, on $S$ is a Cartier divisor. -Now, it is a general fact that if $X$ is a variety with a singular point $p$, and $D$ an effective Cartier divisor on $X$ passing through $p$, then $D$ is also singular at $p$. The proof is easy: take an affine open set containing $p$ in which $D$ is principal, defined by some regular function $f$ say. Then the Jacobian matrix for the ideal defining $D$ is obtained from the matrix for the ideal defining $X$ by adding one row, so its rank is at most one more. -So let $p$ be a singular point of $S$. By factoriality, any curve $C$ on $S$ is a Cartier divisor, so if $C$ is a curve passing through $p$, then $C$ must be singular at $p$. -Note that this example works not because the singularity is especially "bad", but rather the opposite --- factorial surface singularities are mild. By contrast the surface ordinary double point $xy=z^2$ is not factorial, but here there are smooth curves passing through the singularity. The point is precisely that these smooth curves are Weil divisors that fail to be Cartier.<|endoftext|> -TITLE: How do I manipulate the sum of all natural numbers to make it converge to an arbitrary number? -QUESTION [10 upvotes]: I just found out that the Riemann Series Theorem lets us do the following: -$$\sum_{i=1}^\infty{i}=-\frac{1}{12}$$But it also says (at least according to the wikipedia page on the subject) that a conditionally convergent sum can be manipulated to make it look like it converges into any real number. My question is then: Is there a general algorithm for manipulating this series into an arbitrary number? -My knowledge about series and number theory is pretty limited so if I'm in over my head or if the answer is just too complicated I'd appreciate some tips on what to read up on! -Thanks! - -REPLY [4 votes]: The Riemann series theorem does not allow you to make the claim above because -$ \sum_{n=1}^{\infty} n$ - is not a conditionally convergent series. -Instead, an amazing but customary abuse of notation is used to write down this "identity". There is a function called the Riemann Zeta Function which is defined for every complex number except $s=1$. If $s$ has real part greater than $1$, the value of the Riemann zeta function equals -$$ -\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}. -$$ -The Riemann zeta function also has -$$ -\zeta(-1)=\frac{-1}{12}. 
-$$ -Filling in $s=-1$, we get -$$ -\zeta(-1)``=''\sum_{n=1}^\infty n, -$$ -but this should not be taken too literally because the sum only converges when $s$ has real part greater than $1$ and does not hold if $s=-1$.<|endoftext|> -TITLE: Non-geometric Proof of Pythagorean Theorem -QUESTION [11 upvotes]: Is there a purely algebraic proof for the Pythagorean theorem that doesn't rely on a geometric representation? Just algebra/calculus. I want to TRULY understand the WHY of how it is true. I know it works and I know the geometric proofs. - -REPLY [2 votes]: A "proof" of the Pythagorean Theorem depends on some kind of definitions of: - -right angle -length/area -stright line - -The axioms of Euclid are not completely formalized but we have other formal axiomatic systems that mimic euclidean axioms and definitions of these notions (for example the Hilbert's axioms) so that we can derive Pythagorean theorem there. A formal proof with these axiomatic systems wouldn't require any reference to pictures in principle. -Proving Pytagorean Theorem in completely different context such as analytic geometry (or"calculus") could be possibly trivial or meaningless depending on what definition of "right angle" we are going to consider. For example it would be trivial if you define a right angle with the scalar product and the distance with $\sqrt{(x_1-x_2)^2+(y_1-y_2)^2}$, but you could try with different ones and the proof of the theorem could get more and more complicated depending on which definition you want to take (you could want to define areas with Peano Jordan's measure for example).<|endoftext|> -TITLE: Geometric intuition of graph Laplacian matrices -QUESTION [12 upvotes]: I am reading about Laplacian matrices for the first time and struggling to gain intuition as to why they are so useful. Could anyone provide insight as to the geometric significance of the Laplacian of a graph? For example, why are the eigenvectors of a Laplacian matrix helpful in interpreting the corresponding graph? - -REPLY [2 votes]: This is to satisfy the request of @Surb which I quote as : - -Providing intuition on the construction and possible uses of the graph-Laplacian with layman terms would be great. Ideally the discussion is supported with a simple example. - -(Note : This usually disappears from the post following bounty expiration, hence I include it). - -Introduction -Let me first define the graph Laplacian : given a simple finite graph $G = (V = \{v_i\},E)$, let $A$ denote the adjacency matrix of $G$ and $D$ the degree matrix i.e. the diagonal matrix with the $i$th diagonal entry as $\deg(v_i)$. The matrix $L = D-A$ is called the Laplacian matrix of the graph $G$. -There are many variants of the Laplacian, but these can all be expressed in terms of $D$ and $A$ and play purposes that , at a high level, are analogous to that of $L$. -We begin with some preliminary observations about $L$ : it's a symmetric matrix, and furthermore , one can verify that $L = MM^T$ where $M$ is an $|E| \times |V|$ matrix given by $M_{(i,j),i} = 1$, $M_{(i,j),j} = -1$ and $M_{(i,j),k} = 0$ for $i,j,k \in V$ distinct. Hence, $L$ is in fact positive semidefinite, and hence admits only non-negative real eigenvalues. All row sums of $L$ are the same and equal to $0$, hence $L$ admits the eigenvector $[1,1,...,1]$ with eigenvalue $0$. -We will now talk about the geometric properties of $L$, and link these to the probabilistic aspect of $L$. 
- -The geometry of the graph, and $L$ -The simplest thing that one can find from $L$ is the number of connected components of the graph $G$. - -Result : The geometric multiplicity of $0$ as an eigenvalue of $L$ (which we know to be positive) equals the number of connected components of $G$. - -Proof : Suppose that $Lw = 0$. Then, $(D-A)w = 0$, so in particular $Dw = Aw$ i.e. for all $i$ we have $\deg(v_i)w_i = \sum_{v_j \sim v_i} w_j$. Suppose that $i^*$ satisfies $w_{i^*} = \max_{1 \leq i \leq n} w_i$, then $$ -\deg(v_{i^*})w_{i^*} = \sum_{v_j \sim v_{i^*}} w_{i^*} \geq \sum_{v_j \sim v_{i^*}} w_j -$$ -where the inequality will be strict unless $w_{j} = w_{i^*}$ for all $j$ such that $v_j \sim v_{i^*}$. From this, it follows that $w$ is constant on any connected component of $G$, and therefore $w$ is a linear combination of vectors of the form $1_C$ ,where $C$ is a connected component of $G$. This instantly furnishes the proof. $\blacksquare$ -Brief explanations -Let me now start with the layman explanations. When we say the "geometry" of a graph, what exactly are we trying to talk about? Let's try to write down some geometric characteristics. We have the diameter of the graph, the furthest distance between two vertices. We have , in some sense, a sparsity parameter, which tells you how quickly you can get from an average vertex to another vertex. Similarly, you'd consider the average degree to be a feature of the geometry of the graph. -What does the Laplacian do? Well, the Laplacian can do a lot of things, which I'll mention in brief and then go on to elaborate : - -Following some normalization, the Laplacian yields a transition kernel, which is a probability transition function that allows us to introduce a random walk on the graph. The random walk provides us with means to explore the graph : averaging the exploration then yields averages for critical quantities such as the average degree, average distance between points and so on, and using the variance of this random process, one can easily provide concentration bounds using Chebyshev's inequality as well. - -The eigenvectors of the Laplacian, when as results of certain minimax problems, yield information about the connectivity of the graph. The basic reason is that such functionals consist of direct comparison between adjacent vertices, as I'll explain later. - -Continuous surfaces such as manifolds can be discretely approximated by graph-like structures (think of a cycle in $n$ vertices approximating a circle for large $n$). This allows us to think about continuous geometric features, such as the curvature, the area/volume, and so on. It turns out that doing this, for a large class of manifolds, is actually a very important development in mathematics. - - - -Random walk -The (unweighted) random walk on a graph is created by a Markov chain on the vertices of the graph, and the transition is given by $L_{rw} = D^{-1}L - I$. You can check that $(L_{rw})_{ij} = \frac 1{\deg(v_i)}$ if $v_i \sim v_j$ and $0$ otherwise. What this basically means is : if I'm at a vertex, I look at its neighbours and give each of them an equal chance to be my next destination. Thus, I walk around the graph in this fashion. -The question is , what exactly can this random walk tell me? Let's think about a few scenarios, and ask ourselves what these scenarios occurring in particular, tell us about the graph. - -I'm finding that I'm visiting the same vertex , or set of vertices again and again. 
Perhaps the connectivity of this part of the graph is poor : it's an area which is difficult to get out of.
-
-I'm finding that I'm visiting a lot of new vertices : this is reflective of excellent connectivity in the region, that a lot of vertices in this area have a large degree and connect to each other often.
-
-
-The average degree of the graph is reflected in $L$ itself, since the trace of $L$ is the sum of degrees of the graph (the diagonal entries are the degrees!). Therefore, the trace of $L$, divided by the number of vertices, gives the average degree.
-However, the first scenario is what we would describe as a "bottleneck" in the graph. A heavily connected graph has no "bottlenecks", or subsets of the graph which a random walk that gets in will find very difficult to get out of, whereas a poorly connected graph will have such issues.
-It turns out that this "bottleneck" scenario is captured beautifully by Cheeger's inequality. Without going into too many details, Cheeger's constant is defined to capture the worst bottleneck in the graph : it is given by $$C = \min_{S \subset V} \frac{|E(S,S^c)|}{\min(\sum_{v \in S} \deg(v), \sum_{v \in S^c} \deg(v))} \quad ; \quad E(S,S^c) = \{(v,v') \in E, v \in S , v' \in S^c\}$$
-So the $S$ for which this minimum is attained is the worst one to get out of. It turns out that the smallest positive eigenvalue of the Laplacian can be used to provide tight bounds on $C$ (in fact, it turns out that higher eigenvalues will also come into play for even tighter bounds, but knowing this much is nice). The nice probabilistic way of proving this is to actually randomly walk through the graph, collect data on where you've been and what's been the hardest part to navigate, and use your findings to construct the bottleneck $S$. This is briefly explained here.
-In general, using a random walk to decipher properties of a graph is part of the more widely used "probabilistic method". The overarching principle of the probabilistic method is simple :
-
-Sometimes, the best or worst parameter optimizing a quantity is well-approximated by randomly choosing the parameter.
-
-Now, there's another thing that random walks can tell you. Let's say you're performing a random walk on a graph that looks like this :
-$$
-1- 2 -3
-$$
-You will very soon realize that as you walk randomly on this graph, you are likely to be at $2$ with probability $\frac 12$, and at $1$ or $3$ with probability $\frac 14$ each. Imagine I were blindfolded, walked $100$ steps randomly across this graph, and were then asked "where am I?"; I'd say the above probability distribution is a very accurate answer, right?
-Now, let me ask myself two questions :
-
-How did I get $\frac 14 , \frac 12 , \frac 14$? More precisely, what are the different proportions of time I spend at a vertex, while randomly walking across a graph?
-
-The mixing question : suppose I start from a given point, and walk randomly across the graph. How many steps do I need to take, to assume safely that the number of times I've visited each vertex looks very, very close to the different proportions I created earlier?
-
-
-The answer to both questions is given by the random walk Laplacian. Indeed, the answer to the first is the unique left eigenvector of the random walk Laplacian corresponding to the eigenvalue $1$ whose entries add up to $1$, i.e. it is the unique $w$ such that $wL_{rw} = w$ and $\sum_i w_i = 1$.
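-Before turning to the second question, here is a quick numerical check of that stationary distribution for the $1-2-3$ path above (a numpy sketch; since the plain walk on a path is periodic, I iterate the lazy walk, which has the same stationary distribution, and the $1000$-step budget is arbitrary):
-
-import numpy as np
-
-A = np.array([[0, 1, 0],
-              [1, 0, 1],
-              [0, 1, 0]], dtype=float)       # the path graph 1 - 2 - 3
-L_rw = np.diag(1.0 / A.sum(axis=1)) @ A      # transition kernel D^{-1} A
-
-w = np.array([1.0, 0.0, 0.0])                # start the walk at vertex 1
-for _ in range(1000):
-    w = 0.5 * (w + w @ L_rw)                 # lazy step: stay put with prob 1/2
-print(np.round(w, 4))                        # -> [0.25 0.5 0.25] = deg(v)/sum(deg)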
-
-The answer to the second question is given by a result which states that, at a rate depending upon the smallest positive eigenvalue, my confidence in the guess that my proportion of time spent thus far at each vertex is very close to the answer to the first question increases exponentially with the number of steps I take ("confidence" is used loosely here : accurately speaking, it is the distance between two probability distributions).
-This is a probabilistic representation of the fact that my exploration and understanding of a graphical structure's geometry gets exponentially better, with the exact rate given by a Laplacian eigenvalue. That should do the probabilistic method some justice.
-
-Understanding the geometry of $G$ using minimax characterizations of the eigenvalues
-It turns out that the special structure of $L$ means that the eigenvalues of (the degree-normalized version of) $L$ (ordered as $0 = \lambda_0 \leq \lambda_1 \leq ... \leq \lambda_{n-1}$, where $n = |V|$) admit minimax characterizations like so :
-$$
-\lambda_i = \inf_{\dim M=i+1} \max_{f \in M} \frac{\sum_{u \sim v} (f(u)-f(v))^2}{\sum_{x} f(x)^2\deg(x)}
-$$
-where $M$ varies over subspaces of $W = \{f : V \to \mathbb R\}$, a vector space of dimension $|V|$. Taking this for $\lambda_0$ and using the subspace of constant functions gives $0$. For $\lambda_1$, we can add to the constant functions any other non-constant function and get a subspace of dimension $2$. Since the constant part doesn't contribute, we are led to this characterization :
-$$
-\lambda_1 = \inf_{f \in W , \sum_{x\in V} f(x) = 0} \frac{\sum_{y \sim x} (f(y) - f(x))^2}{\sum_x f(x)^2 \deg(x)}
-$$
-Thus, $\lambda_1$ is determined by the function which reduces local variation the most, because we want $f(y)-f(x)$ to be small in absolute value for $y \sim x$ in order for this quotient to be minimized.
-This makes $\lambda_1$ the ideal candidate to study globalized properties of the graph : whether it's the worst bottleneck or the mixing time or anything that involves the graph being considered in its entirety, $\lambda_1$ gets involved. In particular, $\lambda_1$ is expected to be quite stable under local perturbations of the graph : if I pinch the graph at a few points or put up a few edges here and there at a few places, don't expect $\lambda_1$ to change all that much.
-A nice bound for the diameter of a graph, for example, is given by $D \geq \frac 1{\lambda_1 \sum_{x \in V} \deg(x)}$.
-On the other hand, as you go to $\lambda_2,\lambda_3$ and so on, the finer effects of the graph start getting captured, which are used for giving tighter bounds for quantities based on more local changes. For example, it is quite easy to see that $D$ equals the smallest $m$ for which the matrix $(I+L_{rw})^m$ has no zero off-diagonal entries. The elementary bound is proven using this fact, and can be used to sharpen it (a bound involving $\lambda_1$ and $\lambda_{n-1}$ is known).
-Thus, the geometry of the graph is reflected in the minimax characterization of the eigenvalues. Furthermore, note that the eigenvectors can be obtained from the minimax characterization and actually capture the variation that the corresponding eigenvalue induces. An eigenvector with somewhat controlled entries (all entries being close to each other) points to great global behaviour, while eigenvectors with wild entries point, depending upon the index of the eigenvalue, to high local variations in the geometry of the graph.
-
-Continuous approximation
-This one is actually pretty surprising.
Let's say that you have a continuous surface, which has some continuous characteristics associated to it, like a volume/area, a curvature at each point of the surface, and so on.
-A remarkable result somewhere in the 1990s showed that if one were to discretize this surface by creating a graph that sort of "meshes" or looks like it, and looked at the eigenvalues of the Laplacian of that graph, then as the mesh gets finer and finer and starts to resemble the surface more and more, the eigenvectors and eigenvalues converge to the eigenvalues and eigenvectors of the Laplacian of the surface.
-This rather astounding result means that we can talk about the geometry of certain kinds of approximating graphs by talking about the surface that they resemble! For example, a circle has constant curvature, and one can find the eigenvalues of the Laplacian of the cycle graph as shown here. One will be able to see the resemblance between these and the eigenfunctions of the Laplace-Beltrami operator on the circle : both are periodic, for example, and both carry a $\sin$-$\cos$ type signature.
-What this means is that, for a graph, an eigenvector could very well assist in approximating the curvature, diameter and area of the surface it attempts to approximate. This also lets us make informed guesses.
-For example, one can try to solve the Laplace-Beltrami equation on the unit circle. Now, the area of the disc this shape bounds is $\pi$, its diameter is $2$, and its curvature everywhere is some fixed constant number (I've actually forgotten what it is). The important thing, though, is that once you do this, you've got excellent estimates for the eigenvalues and the eigenvectors of a cycle graph for large $n$! The reverse also holds.
-So this is something very interesting : a sort of continuous-to-discrete connection. Some results include :
-
-Isoperimetry : If a graph is "isoperimetric", i.e. for all subsets $S \subset V$, the "perimeter" $|E(S,S^c)|$ of $S$ is at least a function comparable (this depends upon the dimension) to the size of $S$, which is $\sum_{x \in S}\deg(x)$, then the same holds for the shape it's approximating, i.e. for all subsets $A$ of that shape, the perimeter of $A$ compares to the area of $A$ in the same way.
-
-Curvature-type results : If a graph has heavily constrained eigenvectors at the higher eigenvalues (so local effects are extremely minimized) then the surface it's approximating will also have extremely uniform curvature. If you imagine something having a really high curvature though, it will eventually bend back onto itself quicker, and thus have a smaller area. Therefore, a corollary of this result is that a graph with heavily constrained eigenvectors at the higher eigenvalues helps bound the area of the shape it is approximating (and the diameter of the shape as well).
-
-
-But why is all this happening?
-Well, it's all happening because of Fourier. Or, to be more precise, his expansion.
-The point is that any function $f$ on a (not necessarily finite) graph's vertices, which is square-integrable with respect to the measure that assigns to each point its degree as a weight, can be expanded as a weighted (possibly infinite) sum of the eigenvectors of the Laplacian matrix. These weights determine how much each eigenvalue affects $f$, and therefore tell us how globally or locally affected $f$ is, as a function.
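-The cycle-graph case mentioned above is easy to check numerically (a sketch; the choice $n=64$ is arbitrary): the Laplacian eigenvalues of $C_n$ are $2-2\cos(2\pi k/n)\approx (2\pi k/n)^2$ for small $k$, a rescaled version of the spectrum $k^2$ of $-\frac{d^2}{d\theta^2}$ on the circle, and the corresponding eigenvectors are discrete sines and cosines.
-
-import numpy as np
-
-n = 64
-A = np.zeros((n, n))
-for i in range(n):                        # adjacency matrix of the cycle C_n
-    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1
-L = np.diag(A.sum(axis=1)) - A
-
-vals = np.linalg.eigvalsh(L)
-k = np.arange(n)
-theory = np.sort(2 - 2 * np.cos(2 * np.pi * k / n))
-print(np.allclose(vals, theory))          # True
-print(np.round(vals[:4] * (n / (2 * np.pi)) ** 2, 3))  # ~ 0, 1, 1, 4 = k^2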
-
-It turns out that whether it's the diameter, or the Cheeger constant, or anything geometric that you can possibly think of, it can be captured by some function $f$, and then studied using the Fourier expansion of that $f$. The inequalities are then derived using truncations of, or bounds on, the Fourier expansion. That, however, connects the geometry of the graph to the analytic properties of the eigenvectors of the graph.
-Furthermore, this also extends to surfaces, where you have square-integrable functions with respect to some measure, and a Fourier basis corresponding to this. The probabilistic method there is not really very different : instead of running between vertices of a graph, you're running across a surface and seeing where you're going. So the resemblance is remarkable, and this should hopefully have been a decent demonstration of it.<|endoftext|>
-TITLE: A function is convex if and only if its gradient is monotone.
-QUESTION [9 upvotes]: Let $U \subset \mathbb{R}^n$, $n \geq 2$, be a convex open set, with the usual inner product. A function $F: U \rightarrow \mathbb{R^n} $ is monotone if $ \langle F(x) - F(y), x-y \rangle \geq 0, \forall x,y \in U.$
-Let $f:U \rightarrow \mathbb{R}$ be differentiable. Show that $f$ is convex $\iff \nabla f:U \rightarrow \mathbb{R^n}$ is monotone.
-My attempt on the right implication: I already proved that if $f$ is convex and 2-differentiable then $f''(x) \geq 0$. But this exercise only says $f$ is 1-differentiable.
-Then I tried the following:
-$f$ is convex $\iff \forall x,y \in U $ the function $\varphi:[0,1] \rightarrow \mathbb{R}$, defined by $ \varphi(t) = f((1-t)x+ty)$ is convex. Then $\varphi'$ is non-decreasing, then $\nabla \varphi(x) \geq 0$... but I'm stuck here.
-My attempt on the left implication:
-$ |\nabla \varphi (x) - \nabla \varphi (y)|| x-y| \geq | \langle \nabla \varphi (x) - \nabla \varphi (y), x-y \rangle | \geq 0$
-And so $ |\nabla \varphi (x) - \nabla \varphi (y)| \geq 0 $, then $\nabla \varphi $ is non-increasing and then (by an already proved theorem) it is convex.
-Can someone please verify what I did and give me a hint?
-Thanks.
-
-REPLY [5 votes]: 1) If $f$ is convex, then $$ f(y)\geq f(x) + \nabla f(x)\cdot (y-x) $$
-and $$ f(x)\geq f(y) + \nabla f(y)\cdot (x-y) $$
-so that by adding $$ (y-x)\cdot( \nabla f(x) - \nabla f(y)) \leq 0
-$$
-2) Assume that $\nabla f$ is monotone : Define $A =\{ x \mid f(x)\leq a\}$. If $A$ is not convex, then there are $x,\ y\in \partial A$
-s.t. $$ \nabla f(x)\cdot (y-x),\ \nabla f(y)\cdot (x-y) >0 $$
-Hence $$ (\nabla f(x) -\nabla f(y))\cdot (y-x) >0 $$
-It is a contradiction.<|endoftext|>
-TITLE: Intuition behind Fourier and Hilbert transform
-QUESTION [17 upvotes]: In these days, I am studying a little bit of Fourier analysis and in particular Fourier series and Fourier/Hilbert transforms. Now, I am confident with the mathematical definitions and all the formalism, and (more or less) I know all the main theorems. What I don't really understand is why they are so important, why these concepts are defined in that precise way.
-Could you explain to me why all these concepts/tools are so significant and useful in (applied) mathematics? Could you give me some intuition behind them?
-I am not particularly interested in mathematical formulae. I would simply like to know what these definitions really mean.
-Pretend to be talking with someone smart, very curious but not very knowledgeable about mathematics.
-
-Of course, I encourage not only mathematicians but also engineers and physicists to reply. Having a truly physical interpretation of those concepts would be great!!
-Very last thing: I would really love to have some unconventional and "personal" interpretation/point of view.
-Thank you very much for any help!!
-
-REPLY [8 votes]: The Fourier series is a way of building up functions on $[-\pi,\pi]$ in terms of functions that diagonalize differentiation--namely $e^{inx}$. If $L=\frac{1}{i}\frac{d}{dx}$ then $Le^{inx}=ne^{inx}$. That is, $e^{inx}$ is an eigenfunction of $L$ with eigenvalue $n$. The fact that all square integrable functions on $[-\pi,\pi]$ can be expanded as $f = \sum_{n=-\infty}^{\infty}c_n e^{inx}$ is quite a nice thing. If you want to apply the derivative operator $L$ to $f$, you just get $Lf = \sum_{n=-\infty}^{\infty}nc_ne^{inx}$. More generally, if $f$ has $N$ square integrable derivatives, then the $N$-th derivative is
-$$\frac{1}{i^{N}}f^{(N)}=L^{N} f = \sum_{n=-\infty}^{\infty}n^{N}c_n e^{inx}.$$
-Diagonalizing an operator makes it easier to solve all kinds of equations involving that operator. The only issue is this: How do you find the correct coefficients $c_n$ so that you can expand a function $f$ in this way? For the ordinary Fourier series,
-$$
-c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)e^{-int}dt.
-$$
-On a finite interval, this is great. But what happens if you want to work on the entire real line? If you work on larger and larger intervals, then you get more and more terms. You need terms with larger and larger periods, and all multiples of those. In the limit of larger intervals, you need an integral to sum up all the terms, with every possible periodicity. That is, you can expand a square integrable $f$ as
-$$f(x) = \int_{-\infty}^{\infty}c(s)e^{isx}ds.
-$$
-As before, applying powers of $L=\frac{1}{i}\frac{d}{dx}$ is easier using this representation of $f$:
-$$
-\frac{1}{i^{N}}f^{(N)}(x)= L^{N}f = \int_{-\infty}^{\infty}s^{N}c(s)e^{isx}ds.
-$$
-You can see that the discrete and the continuous cases are remarkably similar. And, based on that, how might you expect to be able to find the coefficient function $c(s)$? As you might guess,
-$$
-c(s) = \frac{1}{2\pi}\int_{-\infty}^{\infty}f(x)e^{-isx}dx.
-$$
-The Fourier transform is a way to diagonalize the differentiation operator on $\mathbb{R}$.
-The reason that the discrete and continuous Fourier transforms are so important is that they diagonalize the differentiation operator. One way to view the effects of diagonalization is that you turn the operator into a multiplication operator. You can see how that makes solving differential equations a lot easier. In the coefficient space all you do is to divide in order to invert.
-It's the same way with a matrix: if you have a big matrix equation
-$$Ax = y,$$
-and if $A$ is symmetric, then you can find a basis $\{ e_1,e_2,\cdots,e_n \}$ where $Ae_k = \lambda_k e_k$. Now, if you can expand $x$ and $y$ in this basis as
-$$x = \sum_{k=1}^{n} c_k e_k \\y = \sum_{k=1}^{n} d_k e_k,$$
-then the equation is solved by division:
-$$Ax = y \\ \sum_{k=1}^{n} c_k \lambda_k e_k = \sum_{k=1}^{n} d_k e_k \\
-c_k = \frac{1}{\lambda_k} d_k.$$
-So if you know how to expand $y$ in the $e_k$ terms as $\sum_{k}d_k e_k$, then you can get the solution $x$ by division on the coefficients
-$$x = \sum_{k=1}^{n} \frac{1}{\lambda_k} d_k e_k$$
-(Assuming none of the $\lambda_k$ are $0$.)
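-Here is the matrix picture in code (a small numpy sketch; the random symmetric positive definite $A$ is my own choice, so that no $\lambda_k$ is $0$):
-
-import numpy as np
-
-rng = np.random.default_rng(0)
-M = rng.standard_normal((4, 4))
-A = M @ M.T + np.eye(4)        # symmetric positive definite: eigenvalues >= 1
-y = rng.standard_normal(4)
-
-lam, E = np.linalg.eigh(A)     # columns of E are the eigenvectors e_k
-d = E.T @ y                    # coefficients d_k of y in the eigenbasis
-x = E @ (d / lam)              # divide coefficient-wise, then resum
-print(np.allclose(A @ x, y))   # True: agrees with np.linalg.solve(A, y)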
-
-The discrete and continuous Fourier transforms are a way to diagonalize differentiation in an infinite-dimensional space. And that allows you to solve linear problems involving differentiation.
-Hilbert Transform: The Hilbert transform was developed by Hilbert to study the operation of finding the harmonic conjugate of a function. For example, the function $f(z) = z^2=(x+iy)^2=x^2-y^2+i(2xy)$ has harmonic real and imaginary parts. Hilbert was trying to find a way to go between these two components (in this case $x^2-y^2$ to $2xy$.) The setting of this transform is the upper half plane. If you start with a function $f(x)$, find the function $\tilde{f}(x,y)$ that is harmonic in the upper half plane, and then find $g(x,y)$ such that $\tilde{f}(x,y)+ig(x,y)$ is holomorphic in the upper half plane, then the Hilbert transform maps $f$ to $g$.
-Because $i(f+ig)=-g+if$ is also holomorphic, the transform maps $g$ to $-f$, which means that the square of the transform is $-I$. In this setting, the Hilbert transform turns out to be concisely expressed in terms of the Fourier transform if you work with square integrable functions.<|endoftext|>
-TITLE: On Ramanujan's curious equality for $\sqrt{2\,(1-3^{-2})(1-7^{-2})(1-11^{-2})\cdots} $
-QUESTION [62 upvotes]: In Ramanujan's Notebooks, Vol IV, p.20, there is the rather curious relation for primes of form $4n-1$,
-$$\sqrt{2\,\Big(1-\frac{1}{3^2}\Big) \Big(1-\frac{1}{7^2}\Big)\Big(1-\frac{1}{11^2}\Big)\Big(1-\frac{1}{19^2}\Big)} = \Big(1+\frac{1}{7}\Big)\Big(1+\frac{1}{11}\Big)\Big(1+\frac{1}{19}\Big)$$
-Berndt asks whether this is an isolated result, or whether there are others. After some poking with Mathematica, it turns out that, together with $p= 2$, we can use the primes of form $4n+1$,
-$$\sqrt{2\,\Big(1-\frac{1}{2^6}\Big) \Big(1-\frac{1}{5^2}\Big)\Big(1-\frac{1}{13^2}\Big)\Big(1-\frac{1}{17^2}\Big)} = \Big(1+\frac{1}{5}\Big)\Big(1+\frac{1}{13}\Big)\Big(1+\frac{1}{17}\Big)$$
-(Now why did Ramanujan miss this $4n+1$ counterpart?) More generally, given,
-$$\sqrt{m\,\Big(1-\frac{1}{n^2}\Big) \Big(1-\frac{1}{a^2}\Big)\Big(1-\frac{1}{b^2}\Big)\Big(1-\frac{1}{c^2}\Big)} = \Big(1+\frac{1}{a}\Big)\Big(1+\frac{1}{b}\Big)\Big(1+\frac{1}{c}\Big)$$
-
-Q: Let $p =a+b+c,\;q = a b + a c + b c,\;r =abc$. For the special case $m = 2$, are there infinitely many integers $1<a<b<c$ satisfying the relation?<|endoftext|>
-TITLE: Show that the eigenvalues of a unitary matrix have modulus $1$
-QUESTION [28 upvotes]: Show that the eigenvalues of a unitary matrix have modulus $1$.
-
-I know that a unitary matrix can be defined as a square complex matrix $A$, such that
-$$AA^*=A^*A=I$$
-where $A^*$ is the conjugate transpose of $A$, and $I$ is the identity matrix. Furthermore, for a square matrix $A$, the eigenvalue equation is expressed by $$Av=\lambda v$$
-If I use the relationship $(uv)^*=v^*u^*$ and take the conjugate transpose of this equation then
-$$v^*A^*=\lambda^*v^*$$
-But now I got stuck. Can someone help?
-
-REPLY [9 votes]: A unitary matrix $U$ preserves the inner product: $\langle Ux, Ux\rangle =\langle x,U^*Ux\rangle =\langle x,x\rangle $.
-Thus if $\lambda $ is an eigenvalue, $Ux=\lambda x$, we get $\vert\lambda \vert^2\langle x,x\rangle =\langle \lambda x,\lambda x\rangle =\langle Ux, Ux\rangle =\langle x,x\rangle $.
-So $\vert \lambda\vert^2=1\implies \vert \lambda\vert=1$.<|endoftext|>
-TITLE: Hartshorne Lemma V.1.3 meaning of exact sequence
-QUESTION [5 upvotes]: I've been trying to make sense of the exact sequence in Lemma 1.3 of chapter 5.
-
-The Lemma is the following:
-
-Let $C$ be a smooth irreducible curve on a smooth projective surface $X$, and let $D$ be any curve meeting $C$ transversally. Then $$\# (C \cap D) = \text{deg}_C(\mathcal{L}(D) \otimes \mathcal{O}_C)$$
-
-Hartshorne claims the result is deduced from the exact sequence $$0 \to \mathcal{L}(-D) \otimes \mathcal{O}_C \to \mathcal{O}_C \to \mathcal{O}_{C \cap D} \to 0.$$
-I'm trying to get the details down for how he gets this exact sequence. Let $i: D \to X$ be the inclusion map. Then we get the exact sequence $$0 \to \mathcal{L}(-D) \to \mathcal{O}_X \to i_*\mathcal{O}_D \to 0.$$
-Let $j: C \to X$ be another inclusion map. Then by tensoring with $j_*\mathcal{O}_C$, we get $$0 \to \mathcal{L}(-D) \otimes j_*\mathcal{O}_C \to j_*\mathcal{O}_C \to i_*\mathcal{O}_D \otimes j_*\mathcal{O}_C \to 0.$$
-The first map is injective since $C$ and $D$ intersect transversally, so this is indeed an exact sequence. In order to get the result, we need to apply $j^*$ to the exact sequence. Then we get $$0 \to j^*(\mathcal{L}(-D) \otimes j_*\mathcal{O}_C) \to \mathcal{O}_C \to j^*(i_*\mathcal{O}_D \otimes j_*\mathcal{O}_C) \to 0.$$
-We can look at the stalks to see that this is exact as well.
-If $p : C \times_X D \to C$ is the canonical map coming from the fiber product, how do I see that $j^*(i_*\mathcal{O}_D \otimes j_*\mathcal{O}_C) \cong p_*\mathcal{O}_{C \times_X D}$ and $j^*(\mathcal{L}(-D) \otimes j_*\mathcal{O}_C) \cong \mathcal{L}(-D) \otimes \mathcal{O}_C$? Does he mean that $\mathcal{L}(-D) \otimes \mathcal{O}_C = j^*\mathcal{L}(-D)$? If so, I see that the stalks are the same by just dealing with the presheaves, but is there some slick way to see this other than actually writing the maps down? Are there some universal property tricks that will yield the results?
-
-REPLY [2 votes]: Just take the exact sequence on $X$:
-$$0 \to \mathcal L(-D) \to \mathcal O_X \to i_* \mathcal O_D \to 0$$
-and pull it back along $j: C \hookrightarrow X$ to obtain (as you pointed out, exactness is preserved by assumption):
-$$0 \to j^*\mathcal L(-D) \to \mathcal O_C \to j^*i_* \mathcal O_D \to 0.$$
-Indeed we have $j^* \mathcal L(-D) = \mathcal L(-D) \otimes \mathcal O_C$.
-Furthermore, we have the following Cartesian diagram:
-\begin{array}{ccc}
- C \cap D& \xrightarrow{\hat j} & D \\[3pt]
- \downarrow {\hat i} & & \downarrow{i} \\
- C& \xrightarrow{j} & X
-\end{array}
-This shows that $j^*i_* \mathcal O_D = \hat i_*\hat j^*\mathcal O_D = \hat i_* \mathcal O_{C \cap D}$, i.e. we have the exact sequence on $C$:
-$$0 \to \mathcal L(-D) \otimes \mathcal O_C \to \mathcal O_C \to \hat i_* \mathcal O_{C \cap D} \to 0.$$
-As usual, the $\hat i_*$ is omitted, so it is written as
-$$0 \to \mathcal L(-D) \otimes \mathcal O_C \to \mathcal O_C \to \mathcal O_{C \cap D} \to 0.$$<|endoftext|>
-TITLE: Does $\lim_{(x,y)\to(0,0)}[x\sin (1/y)+y\sin (1/x)]$ exist?
-QUESTION [5 upvotes]: This is an exercise from my calculus class.
-The function is defined as $x\sin (1/y)+y\sin (1/x)$ if $x\neq0 $ and $y\neq0 $, and $0$ if $x=0 $ or $y=0$.
-I'm pretty confident the limit exists and should be $0$, because: $$\lim_{(x,y)\to(0,0)}[x\sin (1/y)+y\sin (1/x)]=\lim_{(x,y)\to(0,0)}[x\sin (1/y)]+\lim_{(x,y)\to(0,0)}[y\sin (1/x)]$$
-And: $-x\leq x\sin(1/y)\leq x$,
-so $\lim_{(x,y)\to(0,0)}[x\sin (1/y)]=0$ right?
-(The same can be said for $\lim_{(x,y)\to(0,0)}[y\sin (1/x)]$)
-However, I tried checking my answer, and according to Wolfram Alpha the limit doesn't exist.
-
-Is this because I'm wrong, or is it just because $x\sin (1/y)+y\sin (1/x)$ is undefined for $x=0$ or $y=0$?
-
-REPLY [10 votes]: For $x\ne0$ and $y\ne0$ we have
-$$
-\left|x\sin\frac{1}{y}+y\sin\frac{1}{x}\right|\le
-\left|x\sin\frac{1}{y}\right|+\left|y\sin\frac{1}{x}\right|\le|x|+|y|
-$$
-So, for
-$$
-f(x,y)=\begin{cases}
-x\sin\dfrac{1}{y}+y\sin\dfrac{1}{x} & \text{if $x\ne0$ and $y\ne0$} \\
-0 & \text{if $x=0$ or $y=0$}
-\end{cases}
-$$
-we have
-$$
-|f(x,y)|\le |x|+|y|
-$$
-for all $(x,y)$. Therefore
-$$
-\lim_{(x,y)\to(0,0)}f(x,y)=0
-$$
-by the squeeze theorem.
-Be careful that $x\sin(1/y)\le x$ is not true in general, but you just need the absolute value and $|x\sin(1/y)|\le|x|$ is true (provided $y\ne0$, of course).
-WolframAlpha is a great resource, but it doesn't always tell the truth. ;-)<|endoftext|>
-TITLE: Intuition for volume of a simplex being 1/n!
-QUESTION [11 upvotes]: Consider the simplex determined by the origin, and $n$ unit basis vectors. The volume of this simplex is $\frac{1}{n!}$, but I am intuitively struggling to see why. I have seen proofs for this and am convinced, but I can't help but think there must be a slicker or more intuitive argument for why this is so than what I have already seen. Any help would be appreciated!
-
-REPLY [8 votes]: Denote the volume of this simplex by $\sigma_n$. Foliating the simplex with "horizontal" hyperplanes $x_n=z$ $(0\leq z\leq 1)$ and applying Fubini's theorem we obtain
-$$\sigma_n=\int_0^1(1-z)^{n-1}\sigma_{n-1}\>dz={1\over n}\sigma_{n-1}\ .$$<|endoftext|>
-TITLE: Which are integral domains? Fields?
-QUESTION [6 upvotes]: Which of the following rings are integral domains? Which ones are fields?
-(a) $\mathbb{Z}[x]/(x^2 + 2x +3)$
-(b) $\mathbb{F}_5[x]/(x^2+x+1)$
-(c) $\mathbb{R}[x]/(x^4+2x^3 +x^2 +5x+2)$
-For (a), $p(x) = x^2 + 2x + 3$ has no zero in $\mathbb{Z}$, so it is irreducible. This means $p(x)$ is maximal, and then $\mathbb{Z}[x]/p(x)$ is a field, also an integral domain.
-Similarly in (b), $x^2+x+1$ has no zero in $\mathbb{F}_5$, so $\mathbb{F}_5[x]/(x^2+x+1)$ is a field, also an integral domain.
-For part (c), I think $x^4 + 2x^3 +x^2 +5x+2$ is irreducible, but I don't know how to prove it.
-Also, I am not sure whether the way I proved the first two parts is correct, so could you please help me to figure it out? Thank you!!!
-
-REPLY [4 votes]: (a) In a UFD (like $\mathbb Z[x]$) the irreducible elements are prime (see here), and prime elements generate prime ideals. In your case, $p(x)=x^2+2x+3$ is irreducible in $\mathbb Z[x]$, so it generates a prime ideal. This shows that the factor ring $\mathbb Z[x]/(x^2+2x+3)$ is an integral domain. However, it's not a field since the ideal $(x^2+2x+3)$ is not maximal: we have $$(x^2+2x+3)\subsetneq(x^2+2x+3,5)\subsetneq\mathbb Z[x].$$
-(b) $\mathbb F_5[x]/(x^2+x+1)$ is an integral domain for similar reasons. But $\mathbb F_5[x]$ is a PID (which is not the case with $\mathbb Z[x]$), and in a PID a non-zero prime ideal is maximal; see here.
-(c) $\mathbb{R}[x]/(x^4+2x^3 +x^2 +5x+2)$ can't be an integral domain because the polynomial $x^4+2x^3 +x^2 +5x+2$ is reducible over $\mathbb R$ (why?), so the ideal it generates is not prime.<|endoftext|>
-TITLE: Show $\mathbb {R}[x,y]/(y^2-x, y-x)$ is not an integral domain
-QUESTION [6 votes]: Let $\mathbb{R}[x,y]$ denote the polynomial ring in two variables $x$, $y$ over $\mathbb{R}$, and let $I = (y^2-x,y-x)$ be the ideal generated by $y^2-x$ and $y-x$. Show that $$\mathbb{R}[x,y]/I$$ is not an integral domain.
-
-To be honest, I have no idea how to solve this question. I am thinking that if $\mathbb{R}[x,y]/I$ is not an integral domain, then $I$ is neither a prime ideal nor a maximal ideal. But I don't know how to prove that. Could you please help me? Thank you very much!
-
-REPLY [5 votes]: The golden rule. In a factor ring $R/I$ we have $a=0\bmod I$ iff $a\in I$ (where $a\in R$).
-
-In your example $y^2=x\bmod I$ and $y=x\bmod I$. This gives us $y^2=y\bmod I$, so $y(y-1)=0\bmod I$, and this suggests that $y\bmod I$ and $y-1\bmod I$ are zero divisors in $R/I$. (Here $R=\mathbb R[x,y]$.)
-However, we have to check that they are not zero. For instance, suppose $y=0\bmod I$. Then, from the golden rule, $y\in I$, that is, $y\in (y^2-x,y-x)$. Now write $$y=(y^2-x)f(x,y)+(y-x)g(x,y),$$ and for $x=y=1$ we get $1=0$, a contradiction. (Do the same for $y-1\bmod I$.)<|endoftext|>
-TITLE: Why is cardinality of set of even numbers = set of whole numbers?
-QUESTION [11 upvotes]: I recently watched a YouTube video on the Banach-Tarski theorem (or, paradox). In it, the presenter builds the proof of the theorem on the basis of a non-intuitive assertion that there are as many even numbers as there are whole numbers, which he 'proves' by showing a 1:1 mapping between the two sets.
-But would that constitute a valid proof?
-To me, the number-density (per unit length of the number-line) for whole numbers is clearly more than that for even numbers. And this, I'm sure, can also be trivially proved by mathematical induction.
-Later, in the same video, it is shown how the interval [0,1] contains as many real numbers as there are in the real number line in its entirety. Once again, using the common-sense and intuitive concept of 'number-density', there would be clearly (infinitely) more real numbers in the entire number line than in a puny little section of it.
-It seems the underlying mindset in all of this is: Just "because we cannot enumerate the reals in either set, we'll claim both sets to be equal in cardinality." In the earlier case of even and whole numbers, just "because both are infinite sets, we'll claim both sets to be equal in cardinality." And all this when modern mathematics accepts the concept of a hierarchy even among infinities! (First proposed by Georg Cantor?)
-Is there a good, semi-technical book on this subject that I can use to wrap my head around this theme, generally? I have only a pre-college level of knowledge of mathematics, with the rest all forgotten.
-
-REPLY [5 votes]: When we pass from finite to infinite sets, many aspects of our intuition break down and need to be updated. We define cardinality by the existence or not of a bijection. If there is a bijection between two sets they have the same cardinality. If not, the one that can be injected into the other is smaller. When you do this, all infinite subsets of the naturals have the same cardinality, as do the rationals. The reals are strictly greater; Cantor's diagonal proof shows that. We do not say that all sets greater than the naturals have the same cardinality. Cantor's diagonal proof can be used to show that the number of subsets of any set is greater than the number of elements of the set, so the number of sets of reals is greater than the number of reals. Then the sets of sets of reals are greater yet. It is a tower that goes on unimaginably far, but for most of mathematics we don't need very many of them.
For a semi-technical introduction, I like Rudy Rucker, Infinity and the Mind.<|endoftext|>
-TITLE: What is the range of $f :R → R$, and $f(x) = x^2 + 6x − 8$
-QUESTION [5 upvotes]: I have this discrete math question. I have done completing the square but am not sure how to continue. May I get some guidance? Thanks!
-What is the range of $f :R → R$, and $f(x) = x^2 + 6x − 8$?
-$f(x)=x^2+6x-8$
-$f(x)=(x^2+6x+9)-8-9$
-$f(x)=(x+3)^2-17$
-
-REPLY [2 votes]: \begin{align}
-f(x) & = x^2 + 6x - 8 \\
- & = (x^2 + 6x + 9) - 8 - 9 \\
- & = (x + 3)^2 - 17 \\
-\end{align}
-Thus, the range is $[-17, \infty)$, which follows immediately from the fact that $(x + 3)^2 \ge 0$ and that $f(x)$ is not bounded from above.
-While this is the general approach to finding ranges of quadratic functions, consider this instead if you do not get the idea above:
-Suppose that $f(x) = k$ where $x, k \in \Bbb R$, then the range of $f(x)$ is just the range of $k$.
-\begin{align}
-f(x) = k & \Leftrightarrow f(x) - k = 0 \\
- & \Leftrightarrow x^2 + 6x - (8 + k) = 0 \\
-\end{align}
-Since $x \in \Bbb R$, that is, $x^2 + 6x- (8 + k) = 0$ has real roots, which is equivalent to
-\begin{align}
-\Delta & = b^2 - 4ac \\
- & = 6^2 + 4 \cdot 1 \cdot (8 + k) \\
- & = 68 + 4k \\
- & \ge 0 \\
-\end{align}
-Thus,
-$$f(x) = k \ge -17$$
-and $[-17, \infty)$ is the range of $f(x)$.<|endoftext|>
-TITLE: Analytical proof for the convergence of a sequence
-QUESTION [8 upvotes]: Consider the following sequence
-$\Xi_N=N\sum\limits_{i=0}^{N-1} {N-1 \choose i} (-1)^{(i+1)} \log\left(i+1\right)$.
-I numerically computed the asymptotic behavior of the sequence, and it turns out that the sequence approaches a non-zero value as $N$ goes to infinity. Now, I want to analytically prove that this sequence converges to a non-zero value as $N$ goes to infinity.
-Also, it can be proved that the sequence has another form as follows
-$\Xi_N=\sum\limits_{i=1}^{N} {N \choose i} (-1)^{(i)} i \log\left(i\right)$.
-Moreover, using
-$\int_{0}^{1} \sum_{m=1}^{i} \frac{1}{x+m} dx=\log(i+1)$
-we get
-$\Xi_N=N\sum_{m=1}^{N-1}{N-1 \choose m-1} (-1)^{m-1}\int_{0}^{1} \frac{1}{x+m} dx $
-Could you give me some advice?
-Thanks
-
-REPLY [3 votes]: By Frullani's theorem and the binomial theorem:
-$$\Xi_{N+1} = \int_{0}^{+\infty}\sum_{k=0}^{N}\binom{N}{k}(-1)^{k+1}\left(e^{-x}-e^{-(k+1)x}\right)\frac{dx}{x}=\int_{0}^{+\infty}\frac{(1-e^{-x})^N}{x e^{x}}\,dx$$
-hence your limit is simply $\color{red}{\large 0}$ by the dominated convergence theorem, since
-$$ f_N(x) = (1-e^{-x})^N $$
-is bounded between $0$ and $1$ and behaves like $x^N$ in a right neighbourhood of the origin.<|endoftext|>
-TITLE: A curious property of $\operatorname{frac}(e\cdot k)$
-QUESTION [7 upvotes]: Let $\alpha > 0$ be a real number and let us consider the set $S(\alpha)$ of those natural numbers $n$ such that the fractional part of $\alpha \cdot n$ "begins" with the representation of $n$ (in base $10$). Formally,
-$$
-S(\alpha) = \{k\in \mathbb{N}:k=\lfloor \operatorname{frac}(\alpha k)\cdot 10^{1+\lfloor\log_{10} k \rfloor}\rfloor\}
-$$
-where $\operatorname{frac}(x) = x-\lfloor x\rfloor$, for $x>0$, denotes the fractional part of $x$.
-For example, $57211\in S(\sqrt{2})$, since $57211\sqrt{2} = 80908.\underline{57211}692\cdots$.
-If $\alpha$ is an irrational number, we know that $\operatorname{frac}(\alpha\cdot k)$ is uniformly distributed in $(0,1)$, so, using a rough heuristic argument based on the fact that $\sum\frac{1}{k}$ diverges, we may expect $S(\alpha)$ to be a sparse but infinite set.
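-Such sets can be searched for directly (a brute-force sketch using mpmath; the 50-digit working precision and the $10^5$ cutoff are arbitrary choices of mine, but some extended precision is needed, since float64 would silently lose the fractional digits for large $k$):
-
-from mpmath import mp, mpf, floor, log10, frac, sqrt, pi
-
-mp.dps = 50   # plenty of digits, so frac(alpha*k) can be trusted far out
-
-def in_S(alpha, k):
-    # check whether the decimals of alpha*k start with the digits of k
-    digits = int(floor(log10(k))) + 1          # number of digits of k
-    return int(floor(frac(alpha * k) * mpf(10) ** digits)) == k
-
-print([k for k in range(1, 10**5) if in_S(pi, k)])       # 1, 2, 38, 76, 946, 24996
-print([k for k in range(1, 10**5) if in_S(sqrt(2), k)])  # 772, 9792, 57211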
-
-A few computations relative to well-known irrational constants support this intuition. For example, we have
-$$
-S(\pi) = \{ 1,2,38,76,946,24996,3595182,61864425177,\dots\}\,,
-$$
-and
-$$
-S(\sqrt{2})=\{ 772,9792,57211,535090,6101272,65645433,9169209625,16835518309,\dots\}\,,
-$$
-but what really baffles me is
-$$
-S(e)=\{ 5,191,\\ 1100,1210,1320,1430,1540,1650,1760,1870,1980,2090,2200,2310,2420,2530,2640,
-2750,\\ 2860,2970,3080,3190,3300,3410,3520,3630,3740,3850,3960,4070,4180,4290,4400,4510,\\
-4620,4730,4840,4950,5060,5170,5280,5390,5500,5610,5720,5830,5940,6050,6160,6270,\\ 6380,
-6490,6600,6710,6820,6930,7040,7150,7260,7370,7480,7590,7700,7810,7920,8030,\\ 8140,8250,
-8360,8470,8580,8690,8800,8910,9020,9130,9240,9350,9460,9570,9680,\\ 1865037,5422244075, \dots\}\,.
-$$
-As you can see there are $79$ terms between $10^3$ and $10^4$, all divisible by 10.
-My question is: is the behavior of $S(e)$ just a coincidence, or does the nature of $e$ have something to do with it?
-
-REPLY [4 votes]: Note that in order to have $k\in S(e)$, $k\cdot e$ must have the digits of $k$ at the start of its decimals. For example, $4180e=11362.\underline{4180}4...$
-Also observe that there is a gap of $110$ between consecutive $k$. It turns out that $110e=299.01100113...\approx 299+110\cdot 10^{-4}$, and its digits after the $1100$ block are small; and it just so happens that $1100\in S(e)$.
-Then $(1100+110m)e\approx n+(1100+110m)\cdot 10^{-4}$, where $n$ is the integral part of the number, and it follows that
-$$(1100+110m)e -\lfloor (1100+110m)e\rfloor \approx 0.(1100+110m)$$
-due to the fact that $110e$ has a small value ($113$) after the $1100$ sequence in its digits. (And $1100$ being in $S(e)$ is kind of a coincidence.)
-Edit: what I meant by "the digits after $1100$ are small" is that they won't affect the higher digits until about the $100$th multiple of $110e$. For example, $9790$ doesn't work because $79\cdot 0.00000113\approx 0.0001$, causing a change of digit in the fourth decimal place.<|endoftext|>
-TITLE: Find all functions $f: \mathbb N \rightarrow \mathbb N$ such that $f(n!)=f(n)!$
-QUESTION [20 upvotes]: Find all functions $f: \mathbb N \rightarrow \mathbb N$ (where $\mathbb N$ is the set of positive integers) such that $f(n!)=f(n)!$ for all positive integers $n$ and such that $m-n$ divides $f(m)-f(n)$ for all distinct positive integers $m,n$.
-My work so far:
-$f(1!)=f(1)=f(1)!$ and $f(2!)=f(2)=f(2)!$. Then $f(1)=1$ or $f(1)=2$, and $f(2)=1$ or $f(2)=2$.
-Case 1: $f(1)=f(2)=1$. I proved $f(n) \equiv 1$
-Case 2: $f(1)=f(2)=2$. I proved $f(n) \equiv 2$
-Case 3: $f(1)=2$ and $f(2)=1$. I proved that this case is impossible.
-Case 4: $f(1)=1$ and $f(2)=2$. I need help here.
-
-REPLY [8 votes]: Here is a short proof, motivated by @PatrickStevens' argument. (Here the novel part is Lemma 2, and the other parts are simplifications of Patrick's argument.)
-Assume that $f(1) = 1$ and $f(2) = 2$.
-
-Lemma 1. $f(3) = 3$.
-
-Proof. $4 \mid (f(6) - 2)$ and $f(6) = f(3)!$ implies that $f(3) = 2$ or $3$. But $2 \mid (f(3) - 1)$ implies that $f(3)$ is odd. Therefore $f(3) = 3$.
-
-Lemma 2. For any $n$, we have $n \mid f(n)$.
-
-Proof. Given $n$, choose $k$ so that $b := 3!^{\circ k} \geq n$, where $!^{\circ k}$ denotes the $k$-fold factorial. Then $n \mid b! = f(b!)$. Hence writing $f(n) - f(b!) = N(n - b!)$ for some integer $N$, we have
-$$ f(n) \equiv f(n) - f(b!) = N(n - b!) \equiv 0 \pmod{n}. $$
-
-Lemma 3. $f(n) = n$ for all $n$.
-
-Proof. Assume otherwise and let $n$ be the smallest positive integer satisfying $f(n) \neq n$. (In particular, $n \geq 4$.)
By Lemma 2, $f(n) \geq 2n$ and hence $(2n)! \mid f(n)!$. Also, by the minimality of $n$, we have $f(n-1) = n-1$. Then with $N := (n-1)(n-1)!$, we have
-$$ (n-1)! = f((n-1)!) - f(n!) + f(n)! \equiv 0 + 0 \equiv 0 \pmod{N}. $$
-This contradicts $0 < (n-1)! < N$. Therefore no such $n$ with $f(n) \neq n$ exists.
-
-Addendum. I guess it is worth explaining how I came up with Lemma 2. In fact, I made a significant detour. Let $p$ be an arbitrary prime. Then with the usual $p$-adic norm, the assumption says that
-$$ |f(m) - f(n)|_p \leq |m - n|_p, \quad m, n \in \Bbb{N}. \tag{*} $$
-That is, the function $f$ is Lipschitz on the dense subset $\Bbb{N}$ of $\Bbb{Z}_p$. So $f$ uniquely extends to a continuous function $f_p : \Bbb{Z}_p \to \Bbb{Z}_p$ which also satisfies $\text{(*)}$. Now by Lemma 1, we can choose $x_j \in \Bbb{N}$ such that $|x_j|_p \to 0$ and $|f(x_j)|_p \to 0$ as $j \to \infty$. By continuity, this implies that $f_p(0) = 0$. Plugging this into $\text{(*)}$, we have
-$$ |f_p(n)|_p \leq |n|_p, \quad n \in \Bbb{Z}_p. $$
-Since this is true for all $n \in \Bbb{N}$ and for all primes $p$, this implies $n \mid f(n)$.
-Now decoding this analytic proof in terms of congruences gives the proof of Lemma 2.<|endoftext|>
-TITLE: Is the regularization of a Fourier transform unique?
-QUESTION [8 upvotes]: The Fourier transform of the Coulomb potential $1/\vert \mathbf r \vert$ of an electric charge doesn't converge because one obtains
-$$F(k)=\frac {4\pi}{k} \int_0^\infty \sin(kr) dr.$$
-The standard way to obtain a sensible value is to multiply the integrand by $f(\alpha,r)=e^{-\alpha r}$ and, after doing the integral, take the limit $\alpha\to 0$ (which has a nice physical reason). So one gets
-$$F(k)=\frac{4\pi}{k^2}.$$
-Would any other function $f(\alpha,r)$ that makes the integral converge and that satisfies $\lim_{\alpha\to\alpha_0}f(\alpha,r)=1$ give the same result? For example
-$$F(k)=\lim_{\alpha\to 0}\frac {4\pi}{k} \int_0^\infty \frac{\sin(kr)}{\Gamma(\alpha r)} dr\stackrel{?}{=}\frac{4\pi}{k^2}.$$
-In this case, Cesàro integration gives the same result. What would be the sufficient condition for uniqueness of regularization (maybe the theory of tempered distributions can answer this)?
-
-REPLY [2 votes]: Probably not the answer you're looking for but here it is anyway:
-You didn't specify the dimension of the space, so let's say the dimension is 3. I'll also assume that you're familiar with distribution theory.
-We note $q(x)=1/\|x\|_2$; in $\mathbb R^3$ this function is locally integrable and bounded outside of the (compact) unit ball. So the function $q$ can be considered as a tempered distribution. This is a good thing because we can define $\mathcal F(q)$ without ambiguity: for any function $\varphi$ in the Schwartz class $\mathcal S(\mathbb R^3)$ we have $\langle\mathcal F(q)|\varphi\rangle=\langle q | \mathcal F (\varphi)\rangle=\int q\mathcal F(\varphi)d \lambda$ (so $\mathcal F (q)$ is also a tempered distribution).
-Now suppose we have a sequence $(q_n)_n$ of tempered distributions. We have
-$$\langle\mathcal F(q_n)|\varphi\rangle=\langle q_n | \mathcal F (\varphi)\rangle$$ and we want $$\lim_n \langle\mathcal F(q_n)|\varphi\rangle=\lim_n\langle q_n | \mathcal F (\varphi)\rangle=\langle\mathcal F(q)|\varphi\rangle=\langle q | \mathcal F (\varphi)\rangle$$
-for all $\varphi \in \mathcal S(\mathbb R^3)$. In other terms we want $\mathcal F (q_n)$ to converge to $\mathcal F (q)$ in the sense of $S'(\mathbb R^3)$.
Since $\mathcal F$ is an automorphism of $\mathcal S(\mathbb R^3)$ this is equivalent to
-$$\lim_n \langle q_n|\varphi\rangle= \langle q|\varphi\rangle \;\;\forall \varphi \in \mathcal S(\mathbb R^3) $$
-which is the definition of $q_n\to q$ in $\mathcal S'(\mathbb R^3)$ (convergence in the sense of tempered distributions). And by definition $q_n \to q\in \mathcal S'(\mathbb R^3)$ if and only if $\langle q_n |\varphi\rangle\to\langle q |\varphi\rangle$ $\forall \varphi \in S(\mathbb R^3) $.
-Now if we assume moreover that the $q_n$ are $L^1(\mathbb R^3)$ functions (like in your examples) then we have $\langle q_n |\varphi\rangle=\int q_n \varphi d \lambda$ and thus you can show the convergence in $S'(\mathbb R^3)$ using the dominated convergence theorem, for example.
-Now I believe that what you really wanted was the pointwise convergence, but this question is probably a bit more tricky.<|endoftext|>
-TITLE: A simple binomial identity
-QUESTION [6 upvotes]: Is there a simple way of showing that a prime $p$ must divide the binomial coefficient $p^n\choose{k}$ for all $n\geq 1$ and $1\leq k\leq p^n-1$?
-
-REPLY [5 votes]: Just a quick remark after the fact: If you accept that $$ (a +b )^{p} \equiv a^p +b^p\pmod p ,$$ for $a$ and $b$ indeterminates,
-then
-$$(a+b)^{p^n} = \left(\ (a + b )^p\ \right)^{p^{n-1}}\equiv \left(\ a^p + b^p\ \right)^{p^{n-1}}\equiv a^{p^n}+ b^{p^n}\pmod p,$$
-which also gives the result.<|endoftext|>
-TITLE: Prove $u\left( x\right)=W\left( x\right)+W'\left( x\right)+W''\left( x\right)+ \cdots \ge 0.$
-QUESTION [5 upvotes]: Let $W\left( x\right) \ge 0$ for $x \in \mathbb{R}$ be a polynomial.
-Prove $$u\left( x\right)=W\left( x\right)+W'\left( x\right)+W''\left( x\right)+ \cdots \ge 0.$$ Is there a simple way?
-
-REPLY [4 votes]: Suppose for a contradiction $u(x_0)<0$ for some point $x_0\in \mathbb{R}$. Then, because $u$ is a polynomial of even degree with positive leading coefficient (because $W$, being non-negative, has these properties, and $u$ has the same leading term as $W$), there must be an interval $[a,b]\ni x_0$ such that $u(x)<0$ for all $x\in ]a,b[$ and $u(a)=0=u(b)$. Then there must be some $c\in ]a,b[$ such that $u'(c)=0$ by Rolle's theorem. Hence
-$$0=u'(c)=W'(c)+W''(c)+\dots =u(c)-W(c),$$and thus
-$$0\leq W(c)=u(c)<0,$$
-which is a contradiction.<|endoftext|>
-TITLE: Prove that the Pfaffian satisfies $\text{Pf}(MAM^T)=\det(M)\text{Pf}(A)$
-QUESTION [9 upvotes]: Show that $$\text{Pf}\, MAM^T = \det M \cdot \text{Pf}\, A$$ for any matrix $M$ and antisymmetric $A$.
-
-Attempt: $$\text{Pf}\, MAM^T = \frac{1}{2^N N!} \epsilon_{\alpha_1 \dots \alpha_{2N}} (MAM^T)_{\alpha_1 \alpha_2} \dots (MAM^T)_{\alpha_{2N-1} \alpha_{2N}} = \frac{1}{2^N N!}\epsilon_{\alpha_1 \dots \alpha_{2N}} M_{\alpha_1 \sigma_1} A_{\sigma_1 \delta_1} (M^T)_{\delta_1 \alpha_2} \dots M_{\alpha_{2N-1} \sigma_{2N-1}} A_{\sigma_{2N-1} \delta_{2N-1}} (M^T)_{\delta_{2N-1} \alpha_{2N}} $$ while $$\det M = \epsilon_{\beta_1 \dots \beta_{2N}} M_{1, \beta_1} \dots M_{2N, \beta_{2N}}$$ and $$\text{Pf}\,A = \frac{1}{2^N N!} \epsilon_{\gamma_1 \dots \gamma_{2N}} (A)_{\gamma_1 \gamma_2} \dots (A)_{\gamma_{2N-1} \gamma_{2N}}$$
-Working with the terms on the r.h.s I see that $$\text{Pf}\,A \cdot \det M = \frac{1}{2^N N!} \epsilon_{\beta_1 \dots \beta_{2N}} \epsilon_{\gamma_1 \dots \gamma_{2N}} M_{1, \beta_1} \dots M_{2N, \beta_{2N}}(A)_{\gamma_1 \gamma_2} \dots (A)_{\gamma_{2N-1} \gamma_{2N}}$$ I don't see a way to proceed - is there perhaps another definition of $\det$ I should use, or can I argue based on these diagrammatic forms below?
-
-REPLY [3 votes]: Here is an approach using (possibly complex) Grassmann variables and Berezin integration$^1$.
-
-Define the Pfaffian of a (possibly complex) antisymmetric matrix $A^{ij}=-A^{ji}$ (in $n$ dimensions$^2$) as
-$$ \begin{align}{\rm Pf}(A)~:=~&\int \!d\theta_n \ldots d\theta_1~ e^{\frac{1}{2}\theta_i A^{ij}\theta_j}\cr
-~\stackrel{(5)}{=}~&\frac{\partial}{\partial \theta_n} \ldots \frac{\partial}{\partial \theta_1} e^{\frac{1}{2}\theta_i A^{ij}\theta_j}\cr
-~=~&\frac{1}{n!}\epsilon_{i_1\ldots i_n} \frac{\partial}{\partial \theta_{i_n}} \ldots \frac{\partial}{\partial \theta_{i_1}} e^{\frac{1}{2}\theta_i A^{ij}\theta_j}.\end{align} \tag{1}$$
-If we make a change of coordinates
-$$ \theta^{\prime}_j~=~\theta_i M^i{}_j,\tag{2} $$
-the chain rule becomes
-$$ \frac{\partial}{\partial \theta_i}~=~M^i{}_j\frac{\partial}{\partial \theta^{\prime}_j} .\tag{3} $$
-Therefore OP's first equation follows from
-$$\begin{align}
-{\rm Pf}(MAM^T)
-&~~\stackrel{(1)}{=}~\frac{1}{n!}\epsilon_{i_1\ldots i_n} \frac{\partial}{\partial \theta_{i_n}} \ldots \frac{\partial}{\partial \theta_{i_1}} e^{\frac{1}{2}\theta_i M^i{}_k A^{k\ell}M^j{}_{\ell}\theta_j}\cr
-&\stackrel{(2)+(3)}{=}~\frac{1}{n!}\epsilon_{i_1\ldots i_n}M^{i_1}{}_{j_1}\ldots M^{i_n}{}_{j_n} \frac{\partial}{\partial \theta^{\prime}_{j_n}} \ldots \frac{\partial}{\partial \theta^{\prime}_{j_1}} e^{\frac{1}{2}\theta^{\prime}_i A^{ij}\theta^{\prime}_j} \cr
-&~~=~\epsilon_{i_1\ldots i_n}M^{i_1}{}_{1}\ldots M^{i_n}{}_{n} ~\frac{\partial}{\partial \theta^{\prime}_{n}} \ldots \frac{\partial}{\partial \theta^{\prime}_{1}} e^{\frac{1}{2}\theta^{\prime}_i A^{ij}\theta^{\prime}_j} \cr
-&~~\stackrel{(1)}{=}~{\rm Det}(M)~{\rm Pf}(A).\end{align}\tag{4} $$
-$\Box$
-
---
-$^1$ We use the sign convention that Berezin integration $$\int d\theta_i~\equiv~\frac{\partial}{\partial \theta_i}\tag{5} $$ is the same as differentiation wrt. $\theta_i$ acting from the left. See e.g. this Phys.SE post.
-$^2$ One may show that the Pfaffian vanishes in odd dimensions.<|endoftext|>
-TITLE: Finding limit of $ \lim \limits_{x,y \to 0,0}{(1 + x^2 y^2)}^{-\frac{1}{x^2 + y^2}}$
-QUESTION [5 upvotes]: Here is my limit:
-$$ \lim \limits_{x,y \to 0,0}{(1 + x^2 y^2)}^{-\frac{1}{x^2 + y^2}}$$
-I have learned two methods. One where we replace $y$ with, for example, $y = kx$ (because $y = y_0 + k(x - x_0)$ and $y_0 = 0, x_0 = 0$). Or with $x = r\cos(\phi)$ and $y = r\sin(\phi)$ where $r \to 0$.
-Neither seems to help me at the moment (or at least when I tried solving with both I didn't get a good answer).
-It kind of seems like I could use $ \lim \limits_{x \to \infty}{(1 + \frac{1}{x})}^{x} = e$, but I tried and also couldn't get a decent answer.
-Any ideas?
-
-REPLY [7 votes]: We can use your idea of setting $x= r \cos \theta$, $y=r \sin \theta$ and the limit becomes
-$$\lim_{r \rightarrow 0} \left(1+\frac{r^4 \sin^2 2\theta}{4} \right)^{-\frac{1}{r^2}}$$
-For a given $r$, the maximum and minimum values of this function are
-$1$ and $\left( 1+\frac{r^4}{4} \right)^{-\frac{1}{r^2}}$ obtained by setting $\theta =0$ and $\theta = \frac{\pi}{4}$ respectively.
-
-The second limit as $r\rightarrow 0$ is
-$\lim_{r\rightarrow 0}\left( 1+\frac{r^4}{4} \right)^{-\frac{1}{r^2}} = \lim_{x\rightarrow \infty} \left( 1+\frac{1}{x^2} \right)^{-\frac{x}{2}} = 1$
-where we have taken $x = \frac{2}{r^2}$.
-Because this is the minimum value the function can take on the circle, we can say the following:
-For any $\epsilon >0$ there exists $r$ such that $x^2 + y^2 < r^2 \Rightarrow |(1+x^2y^2)^{-\frac{1}{x^2+y^2}} -1| < \epsilon$ so the limit is $1$.<|endoftext|>
-TITLE: What is the spectral radius of $PBD$ with $P$ projection, $\|B\|_\infty=1$, and $D$ diagonal with $\|D\|_2<1$?
-QUESTION [6 upvotes]: Assume we have the matrix product:
-$$A=PBD$$
-where $P$ is a projection matrix (i.e., $P=P^2$, $P=P^\top$, and $\|P\|_2=1$), $B$ is a matrix whose infinity norm is equal to one ($\|B\|_\infty=1$), and $D$ is a diagonal matrix whose $\ell_2$-norm is less than one ($\|D\|_2<1$).
-Is it correct to say that the spectral radius of $A$ is less than $1$ ($\rho(A)< 1$)? If yes, how to prove it?
-
-REPLY [5 votes]: Edit: the old answer is wrong. Here is a correction.
-No. Counterexample: let $0<c<1$ and take $\|D\|_2=c$; one can choose $P$, $B$ and $D$ with the stated properties so that $\rho(A)>1$ when $c$ is close to $1$.<|endoftext|>
-TITLE: Definition of basis in infinite-dimensional vector space
-QUESTION [5 upvotes]: I am struggling to understand the definition of a basis in an infinite dimensional vector space. Specifically, the definition I know says: A subset $B$ of a vector space $V$ is a basis for $V$ if every element of $V$ can be written in a unique way as a finite linear combination of elements from $B$.
-However, for any non-empty subset $X$ of a vector space $V$, the zero element of the space can be written in more than one way as a finite linear combination of elements from $X$. For example, $0 = 0v = 0w$, where $v \neq w$ are from $X$. So therefore, no subset $X$ of a vector space $V$ could be a basis for $V$.
-What am I missing? What exactly does the definition mean?
-
-REPLY [2 votes]: The language used is a bit sloppy, but it's common not to be too fussy in these definitions.
-Let $S$ be a subset (finite or infinite, it doesn't matter) of $V$. A choice of coefficients for $S$ is a function $f\colon S\to F$ (where $F$ is the base field) such that $\sigma(f)=\{x\in S:f(x)\ne0\}$ is finite.
-We observe that, if $T$ is a finite subset of $S$ such that $\sigma(f)\subseteq T$, then
-$$
-\sum_{x\in\sigma(f)}f(x)x=\sum_{x\in T}f(x)x
-$$
-with the convention that
-$$
-\sum_{x\in\emptyset}f(x)x=0
-$$
-Since the summation doesn't depend on the finite subset we choose, we set, for a choice of coefficients $f$,
-$$
-\sum_{x\in S}f(x)x=\sum_{x\in\sigma(f)}f(x)x
-$$
-and call this vector a linear combination of $S$. Note that the summation is actually finite and we can use whatever subset $T$ we want, instead of $\sigma(f)$, provided $\sigma(f)\subseteq T$ and $T$ is finite. This is mostly useful for doing computations with choices of coefficients, when other properties are being investigated.
-Then we call $S$ a basis for $V$ if
-
-For every $v\in V$, there exists a choice of coefficients $f$ for $S$ such that
-$$
-\sum_{x\in S}f(x)x=v
-$$
-(we can abbreviate this condition by saying that $S$ is a spanning set for $V$).
-For every $v$, if $f$ and $g$ are choices of coefficients for $S$ and
-$$
-\sum_{x\in S}f(x)x=\sum_{x\in S}g(x)x
-$$
-then $f=g$ (we can abbreviate this condition by saying that $S$ is linearly independent).
-
-Note that condition 2 can be rewritten as “If $f$ is a choice of coefficients for $S$ and
-$$
-\sum_{x\in S}f(x)x=0
-$$
-then $f(x)=0$, for every $x\in S$”.<|endoftext|>
-TITLE: Good book that contains stochastic integration, martingales and Lévy-processes?
-QUESTION [7 upvotes]: Does anyone know about any good and easy introductory books which contain information about martingales, stochastic integration and Lévy-processes?
-I have tried reading: http://www.cambridge.org/us/academic/subjects/statistics-probability/probability-theory-and-stochastic-processes/levy-processes-and-stochastic-calculus-2nd-edition and it is very hard, I am not really able to get much out of it. Do you know about any lower level texts you can recommend, please, which contain stochastic calculus and the theory of Lévy processes?
-I would like it to introduce stochastic calculus from scratch, where the only prerequisites are real analysis and measure theory.
-
-REPLY [7 votes]: Since Lévy processes are used a lot in finance, there are several books on this topic (that is, Lévy processes and their applications in finance). For example "Stochastic Calculus for Finance II" by S. Shreve introduces stochastic integration and contains some material on Lévy processes. However, if you are less interested in applications, but more in the theory behind it, then this might not be your first choice.
-For an introduction to Lévy processes I recommend "Stochastic Processes" by Barndorff-Nielsen & Sato. On $\approx$ 60 pages they present the most important results on Lévy processes and the book is quite readable, I would say.
-There are plenty of books on stochastic integration. If you are new to stochastic integration, it might be a good idea to start with stochastic integration with respect to Brownian motion and then have a look at the general theory afterwards. For example, in "Brownian Motion - An Introduction to Stochastic Processes" by Schilling & Partzsch, stochastic integration (with respect to Brownian motion) is introduced (and proved) in such a way that it can be generalized to stochastic integration with respect to martingales without difficulties.<|endoftext|>
-TITLE: Proving a curious formula of $\pi$?
-QUESTION [13 upvotes]: I have recently come across this statement without proof.
-$$ \pi = 128 \arctan\frac{1}{40} -4\arctan\frac{1}{239} -16\arctan\frac{1}{515} -32\arctan\frac{1}{4030} -64\arctan\frac{1}{32060}$$
-I'd put down my approach but, to be frank, I've gotten nowhere with this. How should I go about this?
-
-REPLY [4 votes]: Use the fact that $\arctan(1/a)$ is the argument of $a+i$, and that the arguments of complex numbers add up when you multiply them.
-According to WolframAlpha,
-we have
-$$
-\begin{aligned}
-&\frac{(40+i)^{128}}{(239+i)^4 (515+i)^{16} (4030+i)^{32} (32060+i)^{64}} =
-\\
-&-1
-/
-37403944359352749280528518983232679702
-\\ &
-01985315749502348525466597837636105197
-\\ &
-87830439618227322115549670041854205583
-\\ &
-78215314658650047572142913167759891935
-\\ &
-23573829633433227264657819301199042671
-\\ &
-95356826263444502459300305177919563475
-\\ &
-022474784673838736016384
-.
-\end{aligned}
-$$
-Since this is a negative number, it has argument $\pi$, and this must agree with your sum, except that they might differ by $2\pi n$ for some integer $n$, since arguments are not uniquely defined.
-But just by numerical evaluation one should be convinced that your sum is at least close enough to $\pi$ to rule out all the options except $n=0$. Q.E.D.
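-For the record, that numerical check is a one-liner (a sketch with mpmath; the 50-digit working precision is an arbitrary but comfortable choice):
-
-from mpmath import mp, mpf, atan, pi
-
-mp.dps = 50
-s = (128 * atan(mpf(1) / 40)    - 4 * atan(mpf(1) / 239)
-     - 16 * atan(mpf(1) / 515)  - 32 * atan(mpf(1) / 4030)
-     - 64 * atan(mpf(1) / 32060))
-print(s - pi)   # ~ 1e-50: the combination agrees with pi, so n = 0 above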
-(There is probably some better argument which estimates the terms in your sum without resorting to numerics, but I'm a bit lazy here...)<|endoftext|>
-TITLE: Prove that the equation $3^k = m^2 + n^2 + 1$ has infinitely many solutions in positive integers.
-QUESTION [8 upvotes]: Prove that the equation $3^k = m^2 + n^2 + 1$ has infinitely many solutions in positive integers.
-
-I have found that this is true for the first $k$'s from 1 to 7 except 3 and 6.
-I have tried algebraic manipulation and induction too and it doesn't seem to work. I believe induction won't work since there are exceptions.
-If I am correct, the numbers $m^2$ and $n^2$ can only be of the form $3a+1$.
-Do you have any ideas about how I should proceed with this? I would love a few hints. Thanks.
-
-REPLY [2 votes]: Hint: Prove that if $a^2-1$ is a sum of two squares, then $a^{4}-1$ is a sum of two squares.
-Thus if $3^{2k}-1$ is the sum of two squares for some $k$, then so is $3^{2^nk}-1$ for any $n$. (You can also prove that if $3^{2k}-1$ is the sum of two squares, then so is $3^{k}-1$.)
-The hint given by MXYMXY is just the case $k=1$.
-The case $k=5$ also has this property, because $$3^{10}-1=(3^5-1)(3^5+1)=8 \cdot 11^2\cdot 61=(11\cdot 22)^2 + 22^2$$ is the sum of two squares. So $3^{5\cdot 2^n}-1$ is always the sum of two squares.<|endoftext|>
-TITLE: Commutative addition on the ordinals
-QUESTION [9 upvotes]: It is well known that ordinal addition is not commutative (for example $\omega+1\neq 1+\omega$), but it is associative. My question regards a new kind of addition defined as:
-$$a\oplus b = \text{max}\{a+b,b+a\}$$
-This addition is obviously commutative, but is it associative? I can't find a counterexample, but I also can't prove that it is.
-Thank you very much.
-
-REPLY [9 votes]: This is not associative.
-$$\begin{align}a\oplus(b\oplus c)&=\max\{a+(b\oplus c),(b\oplus c)+a\}\\
-&=\max\{a+\max\{b+c,c+b\},\max\{b+c,c+b\}+a\}\\
-&=\max\{a+b+c,a+c+b,b+c+a,c+b+a\} \end{align}$$
-and
-$$\begin{align}(a\oplus b)\oplus c&=\max\{(a\oplus b)+c,c+(a\oplus b)\}\\
-&=\max\{\max\{a+b,b+a\}+c,c+\max\{a+b,b+a\}\}\\
-&=\max\{a+b+c,b+a+c,c+a+b,c+b+a\} \end{align}$$
-Thus for example
-$$ \omega^2\oplus(1\oplus\omega)=\omega^2+\omega+1$$
-and
-$$ (\omega^2\oplus1)\oplus\omega=\omega^2+\omega$$<|endoftext|>
-TITLE: Is there a way to solve explicitly the following functional equation?
-QUESTION [5 upvotes]: I want to find an unknown function (actually a CDF) $F(p)$ which solves
-$1 - \lambda F(\frac{q_L}{q_H}p) - (1-\lambda)F(p-[q_H-q_L]) - \frac{K}{p-c_H} = 0$,
-where $0<\lambda<1$, $q_H > q_L > 0$, $q_H > c_H > 0$, $K>0$, and $p \in (c_H, q_H]$.
-Unfortunately, I don't really have an idea how to proceed, apart from randomly guessing functional forms (I'm not even sure about which tags to choose for this problem). So any suggestions would be greatly appreciated. Thanks!
-
-REPLY [5 votes]: You wish to solve
-$$1 - \lambda F(\frac{q_L}{q_H}p) - (1-\lambda)F(p-[q_H-q_L]) - \frac{K}{p-c_H} = 0$$
-Let $S$ be a scaling operator $S F(p)=F(s p)$ with $s=\frac{q_L}{q_H}$. Let $T$ be a translation operator with $T F(p)=F(p-t)$ with $t=q_H-q_L$. Then your equation becomes
-$$ \lambda S F + (1-\lambda)T F = -\frac{K}{p-c_H}+1$$
-$$ \left( \lambda S + (1-\lambda)T \right) F = -\frac{K}{p-c_H}+1$$
-$$ (1-\lambda)\left( \frac{\lambda}{1-\lambda} S + T \right) F = -\frac{K}{p-c_H}+1$$
-Let us assume that $\epsilon=\frac{\lambda}{1-\lambda}$ is small.
-$$ (1-\lambda)\left( T + \epsilon S\right) F = -\frac{K}{p-c_H}+1$$ -$$ (1-\lambda) F = -\left( T + \epsilon S\right)^{-1}\frac{K}{p-c_H}+\left( T + \epsilon S\right)^{-1}1$$ -For constant functions, such as $1$, both $T$ and $S$ reduce to the identity operator -$$ (1-\lambda) F = -\left(\left( T + \epsilon S\right)^{-1}\frac{K}{p-c_H}\right)+\frac{1}{1 + \epsilon}$$ -What remains to be done is calculating -$$ \left( T + \epsilon S\right)^{-1}\frac{K}{p-c_H}$$ -We can hopefully use a series expansion -$$ \left( \sum_{i=0}^{\infty} (-\epsilon)^i(T^{-1} S)^i T^{-1}\right)\frac{K}{p-c_H}$$ -Now - -$T^{-1}f(x)=f(x+t)$ -$T^{-1} S T^{-1}f(x)=f(s(x+t)+t)$ -$T^{-1} S T^{-1} S T^{-1}f(x)=f(s(s(x+t)+t)+t)$ -$(T^{-1} S)^i T^{-1}f(x)=f(s^i x+t \sum_{j=0}^i s^j)=f\left(s^i x+t \frac{s^{i+1}-1}{s-1}\right)$ - -thus -$$ \left( \sum_{i=0}^{\infty} (-\epsilon)^i(T^{-1} S)^i T^{-1}\right)\frac{K}{p-c_H} = K \sum_{i=0}^{\infty} \frac{(-\epsilon)^i}{s^i p+t \frac{s^{i+1}-1}{s-1}-c_H} $$ -$$ \left( \sum_{i=0}^{\infty} (-\epsilon)^i(T^{-1} S)^i T^{-1}\right)\frac{K}{p-c_H} = K \sum_{i=0}^{\infty} \frac{(s-1)(-\epsilon)^i}{(s-1)(s^i p-c_H)+t (s^{i+1}-1)} $$ -Now the question becomes, can this sum be evaluated in a closed form? Wolfram alpha is of little help, though it happily calculates this related sum. -Summary: under various convergence conditions, $F$ can be written as follows -$$ F = 1 -\frac{K}{1-\lambda} \sum_{i=0}^{\infty} \frac{\left(-\frac{\lambda}{1-\lambda}\right)^i}{s^i p+t \frac{s^{i+1}-1}{s-1}-c_H}$$ -We can verify that this is correct in the $\lambda=0$ case -$$ F(p) = 1 -K \frac{1}{ p+t-c_H}$$ -indeed obeys -$$1 - F(p-t) - \frac{K}{p-c_H} = 0$$ -Three more points: - -The ratio test will tell you that the sum we derived here will only converge for $\epsilon<1$ i.e. $\lambda<1/2$. To get the other half of the solution (the $\lambda$ near 1 case), write instead $$ \lambda \left( S + \frac{1-\lambda}{\lambda}T \right) F = -\frac{K}{p-c_H}+1$$ -and expand in the new small parameter $\frac{1-\lambda}{\lambda}$. -This function has an infinite number of exponentially-spaced singularities on the real axis, the largest of which is at $p=c_H-t$. They do not occur within your region of interest $p\in ]c_H,q_H]$, which is good, but they make it implausible that a simple closed-form expression exists. - - - -This function is a valid CDF between its highest zero and $\infty$, i.e. it is nondecreasing and approaches 1. However its highest zero does not equal $c_H$, and may be $>c_H$ (this happens when $K$ is large), in which case it is not a valid CDF within the region $p\in ]c_H,q_H]$.<|endoftext|> -TITLE: Does the Pell-like equation $X^2-dY^2=k$ have a simple recursion like $X^2-dY^2=1$? -QUESTION [11 upvotes]: If $d \ne 0$ is a non-square integer, and $(u,v)$ is an integer solution to the Pell equation -$$ - X^2 - dY^2 = 1, \tag{$\star$} -$$ -then each solution $(x_i,y_i)$ can be recursively calculated using the formulas -\begin{align} -x_{n+1} &= ux_n + dvy_n, \\ -y_{n+1} &= vx_n + uy_n\tag1 -\end{align} -n.b. If $(u,v)$ is not the fundamental solution to ($\star$), the recursion still works, though you will instead get $(x_{n+m},y_{n+m})$ for some integer $m$ determined by which solution $(u,v)$ actually is. 
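-(As a quick sanity check of recursion $(1)$, here is a small Python sketch, with the illustrative choices $d=2$ and fundamental solution $(u,v)=(3,2)$; the invariant $x^2-dy^2$ stays equal to $1$ at every step:
-
-d, u, v = 2, 3, 2                  # 3^2 - 2*2^2 = 1
-x, y = u, v
-for _ in range(4):
-    x, y = u*x + d*v*y, v*x + u*y  # recursion (1)
-    print(x, y, x*x - d*y*y)       # (17, 12, 1), (99, 70, 1), ...
-
-Taking $(u,v)=(x_n,y_n)$ in the update line turns it into recursion $(2)$.)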
Thus you can always determine a larger solution to ($\star$), though not necessarily the next largest solution, using only a single solution $(x_n,y_n)$ and the recursion -\begin{align} -x_{n+1} &= x_n^2 + dy_n^2, \\ -y_{n+1} &= 2x_ny_n\tag2 -\end{align} -QUESTION: Considering the equation -$$ - X^2 - dY^2 = k, \qquad k \ne 1, -$$ -is there a similar simple recursion to determine $(x_{n+1},y_{n+1})$ knowing only $(x_n,y_n)$ [and possibly, though not necessarily, one other solution $(u,v)$]? -With $d=6$ and $k=3$, I tried applying the recursion for $X^2-6Y^2=1$ to the fundamental solution $(3,1)$ of the equation $X^2-6Y^2=3$, and ended up with a solution to the equation $X^2-6Y^2=9$. Since $9=3^2=k^2$, I feel like there might be just a small adjustment to be made to the recursion, to compensate for $k \ne 1$, but I haven't found it. - -REPLY [2 votes]: I thought the idea for naming some "fundamental" solutions, from yesterday, was pretty good. I wrote a program to do that. I wanted to show what can happen if the target number is not squarefree. In the following output, $x^2 - 5 y^2 = 121,$ one out of three $(x,y)$ is just $11$ times a pair that solves $x^2 - 5 y^2 = 1.$ -jagy@phobeusjunior:~$ ./Pell_Target_Fundamental - - x^2 - 5 y^2 = 121 - -x: 11 y: 0 ratio: 0 fundamental -x: 21 y: 8 ratio: 0.380952 fundamental -x: 29 y: 12 ratio: 0.413793 fundamental -x: 99 y: 44 ratio: 0.444444 -x: 349 y: 156 ratio: 0.446991 -x: 501 y: 224 ratio: 0.447106 -x: 1771 y: 792 ratio: 0.447205 -x: 6261 y: 2800 ratio: 0.447213 -x: 8989 y: 4020 ratio: 0.447213 -x: 31779 y: 14212 ratio: 0.447214 -x: 112349 y: 50244 ratio: 0.447214 -x: 161301 y: 72136 ratio: 0.447214 -x: 570251 y: 255024 ratio: 0.447214 -x: 2016021 y: 901592 ratio: 0.447214 -x: 2894429 y: 1294428 ratio: 0.447214 -x: 10232739 y: 4576220 ratio: 0.447214 - - - x^2 - 5 y^2 = 121 - -jagy@phobeusjunior:~$ - -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= -Why not, here is $x^2 - 5 y^2 = -121.$ -jagy@phobeusjunior:~$ ./Pell_Target_Fundamental - - x^2 - 5 y^2 = -121 - -x: 2 y: 5 ratio: 2.5 fundamental -x: 22 y: 11 ratio: 0.5 fundamental -x: 82 y: 37 ratio: 0.45122 fundamental -x: 118 y: 53 ratio: 0.449153 -x: 418 y: 187 ratio: 0.447368 -x: 1478 y: 661 ratio: 0.447226 -x: 2122 y: 949 ratio: 0.44722 -x: 7502 y: 3355 ratio: 0.447214 -x: 26522 y: 11861 ratio: 0.447214 -x: 38078 y: 17029 ratio: 0.447214 -x: 134618 y: 60203 ratio: 0.447214 -x: 475918 y: 212837 ratio: 0.447214 -x: 683282 y: 305573 ratio: 0.447214 -x: 2415622 y: 1080299 ratio: 0.447214 -x: 8540002 y: 3819205 ratio: 0.447214 -x: 12260998 y: 5483285 ratio: 0.447214 - - - x^2 - 5 y^2 = -121 - -jagy@phobeusjunior:~$ - -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= -Here is a good pair, $x^2 - 11 y^2 = 14$ and then $x^2 - 11 y^2 = 350 = 14 \cdot 25.$ -jagy@phobeusjunior:~$ ./Pell_Target_Fundamental - - x^2 - 11 y^2 = 14 - -Wed Mar 30 11:32:36 PDT 2016 - -x: 5 y: 1 ratio: 0.2 fundamental -x: 17 y: 5 ratio: 0.294118 fundamental -x: 83 y: 25 ratio: 0.301205 -x: 335 y: 101 ratio: 0.301493 -x: 1655 y: 499 ratio: 0.301511 -x: 6683 y: 2015 ratio: 0.301511 -x: 33017 y: 9955 ratio: 0.301511 -x: 133325 y: 40199 ratio: 0.301511 -x: 658685 y: 198601 ratio: 0.301511 -x: 2659817 y: 801965 ratio: 0.301511 -x: 13140683 y: 3962065 ratio: 0.301511 - -Wed Mar 30 11:32:56 PDT 2016 - - x^2 - 11 y^2 = 14 - -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= -jagy@phobeusjunior:~$ ./Pell_Target_Fundamental - - x^2 - 11 y^2 = 350 - -Wed Mar 30 11:29:54 PDT 2016 - -x: 19 
y: 1 ratio: 0.0526316 fundamental
-x: 25 y: 5 ratio: 0.2 fundamental
-x: 41 y: 11 ratio: 0.268293 fundamental
-x: 47 y: 13 ratio: 0.276596 fundamental
-x: 85 y: 25 ratio: 0.294118 fundamental
-x: 157 y: 47 ratio: 0.299363 fundamental
-x: 223 y: 67 ratio: 0.300448
-x: 415 y: 125 ratio: 0.301205
-x: 773 y: 233 ratio: 0.301423
-x: 899 y: 271 ratio: 0.301446
-x: 1675 y: 505 ratio: 0.301493
-x: 3121 y: 941 ratio: 0.301506
-x: 4441 y: 1339 ratio: 0.301509
-x: 8275 y: 2495 ratio: 0.301511
-x: 15419 y: 4649 ratio: 0.301511
-x: 17933 y: 5407 ratio: 0.301511
-x: 33415 y: 10075 ratio: 0.301511
-x: 62263 y: 18773 ratio: 0.301511
-x: 88597 y: 26713 ratio: 0.301511
-x: 165085 y: 49775 ratio: 0.301511
-x: 307607 y: 92747 ratio: 0.301511
-x: 357761 y: 107869 ratio: 0.301511
-x: 666625 y: 200995 ratio: 0.301511
-x: 1242139 y: 374519 ratio: 0.301511
-
-Wed Mar 30 11:29:55 PDT 2016
-
- x^2 - 11 y^2 = 350
-
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=<|endoftext|>
-TITLE: How to calculate the indefinite integral $\int\frac{1}{x^3+x+1}dx$
-QUESTION [5 upvotes]: How do I calculate the following indefinite integral?
-$$\int\frac{1}{x^3+x+1}dx$$
-
-
-Approach:
-$x^3+x+1=(x-a)(x^2+ax+c)$ where
-$a:$ real solution of the equation $a^3+a+1=0$
-$c:$ real solution of the equation $c^3-c^2+1=0$
-Then $$\int\frac{1}{x^3+x+1}dx=\int\frac{1}{(x-a)(x^2+ax+c)}dx=\int\frac{A}{(x-a)}dx+\int\frac{Bx+C}{(x^2+ax+c)}dx$$
-
-REPLY [16 votes]: Starting with the root from my previous comment,
-$$a=-\frac2{\sqrt3}\sinh\left(\frac13\sinh^{-1}\frac{3\sqrt3}2\right)$$
-We can factor the denominator as $x^3+x+1=(x-a)(x^2+ax+a^2+1)$. Then the partial fractions expansion reads
-$$\frac1{x^3+x+1}=\frac{A}{x-a}+\frac{Bx+C}{x^2+ax+a^2+1}$$
-We can find $A$ by multiplying both sides by $(x-a)$ and taking the limit:
-$$A=\lim_{x\rightarrow a}\frac{x-a}{x^3+x+1}=\frac1{3a^2+1}$$
-by L'Hopital's rule. If we observe that
-$$\begin{align}\left(3a^2+1\right)\left(6a^2-9a+4\right) & =\left(18a-27\right)\left(a^3+a+1\right)+31 \\
- & =31\end{align}$$
-It follows that
-$$A=\frac{6a^2-9a+4}{31}$$
-Then if we multiply through by $(x^3+x+1)$ and compare coefficients of like powers of $x$ we find that
-$$\begin{align}B & =\frac{-6a^2+9a-4}{31} \\
-C & =\frac{18a^2+4a+12}{31}\end{align}$$
-So
-$$\begin{align}\frac1{x^3+x+1} & =\frac1{31}\left\{\frac{6a^2-9a+4}{x-a}+\frac{-\left(6a^2-9a+4\right)x+18a^2+4a+12}{x^2+ax+a^2+1}\right\} \\
- & =\frac1{31}\left\{\frac{6a^2-9a+4}{x-a}+\frac{\left(-3a^2+\frac92a-2\right)(2x+a)+\frac{27}2a^2+3a+9}{x^2+ax+a^2+1}\right\}\end{align}$$
-Now all our integrals are elementary and we find
-$$\int\frac1{x^3+x+1}dx=\frac1{31}\left\{\left(6a^2-9a+4\right)\ln|x-a|-\left(3a^2-\frac92a+2\right)\ln\left(x^2+ax+a^2+1\right)+\frac{\left(27a^2+6a+18\right)}{\sqrt{3a^2+4}}\tan^{-1}\left(\frac{2x+a}{\sqrt{3a^2+4}}\right)\right\}+C$$
-Numerical integration confirms this result.<|endoftext|>
-TITLE: How many base $10$ numbers are there with $n$ digits and an even number of zeros?
-QUESTION [5 upvotes]: How many base $10$ numbers are there with
- $n$ digits and an even number of zeros?
-
-Solution:
-Let's call this number $a_n$.
-This is the number of $(n-1)$-digit numbers that have an even number of zeros
-times $9$ possibilities for the $n$th digit,
-plus the number of $(n-1)$-digit numbers
-that have an odd number of zeros, with a zero for the $n$th digit.
-
-$a_n = 9a_{n-1} + (10^{n-1} - a_{n-1})$
-$a_n = 8a_{n-1} + 10^{n-1}$
-We define
-$a_0 = 1$
-$a_1 = 9$
-The generating function is
-$G(x) = 1 + 9x + 82x^2 + 756x^3 + \cdots $
-$G(x) = \sum_{n=0}^{\infty}a_n x^n$
-$$\begin{align}
-G(x) - 1 & = \sum_{n=1}^{\infty} ([8a_{n-1} + 10^{n-1}] x^n)\\
- & = \sum_{n=1}^{\infty} 8a_{n-1}x^n +
- \sum_{n=1}^{\infty}10^{n-1}x^n\\
- & = 8x\sum_{n=1}^{\infty} a_{n-1}x^{n-1} +
- x\sum_{n=1}^{\infty}10^{n-1}x^{n-1}\\
- & = 8x\sum_{n=0}^{\infty} a_{n}x^{n} +
- x\sum_{n=0}^{\infty}10^{n}x^{n}\\
- & = 8x G(x) +
- x\left(\frac{1}{1-10x}\right)
-\end{align}
-$$
-$(1-8x)G(x) = x\left(\frac{1}{1-10x}\right) + 1$
-$G(x) = \frac{1-9x}{(1-8x)(1-10x)}$
-$G(x) = \frac{1/2}{1-8x} + \frac{1/2}{1-10x}$
-$\therefore$ $a_n=\frac{1}{2}(8^n+10^n)$
-Is this solution/method valid?
-Note that the way I have set up the solution, and defined $a_0$ and $a_1$, there are supposed to be $82$ numbers in $a_2$. I am including the $0$ numbers of zeros, i.e. there are $9 \times 9 = 81$ numbers with $0$ zeros and $1$ number $00$.
-
-REPLY [2 votes]: Here's another solution: clearly the number of $n$-digit numbers with $k$ zeroes is $f(n,k) = {n \choose k} 9^{n-k}$ (choose the $k$ positions of the zeros; each of the remaining $n-k$ digits has $9$ choices). Then we have
-$$f(n,0) + f(n,1) + \cdots + f(n,n) = \sum_{k=0}^n {n \choose k} 9^{n-k}$$
-and by the binomial theorem this is $(9+1)^n = 10^n$. On the other hand,
-$$f(n,0) - f(n,1) + \cdots + (-1)^n f(n,n) = \sum_{k=0}^n {n \choose k} 9^{n-k} (-1)^k$$
-and this is, again by the binomial theorem, $(9-1)^n = 8^n$. Adding the two equations together, we get
-$$2 f(n,0) + 2 f(n,2) + \cdots + 2 f(n, n) = 8^n + 10^n$$
-if $n$ is even, and
-$$2 f(n,0) + 2 f(n,2) + \cdots + 2 f(n, n-1) = 8^n + 10^n$$
-if $n$ is odd. Dividing through by 2 gives the result.
-To be fair, this solution is not the first one that springs to mind unless you know the answer in advance.<|endoftext|>
-TITLE: Can proof by contradiction 'fail'?
-QUESTION [41 upvotes]: I am familiar with the mechanism of proof by contradiction: we want to prove $P$, so we assume $¬P$ and prove that this is false; hence $P$ must be true.
-I have the following devil's advocate question, which might seem to be more philosophy than mathematics, but I would prefer answers from a mathematician's point of view:
-When we prove that $¬P$ is "false", what we are really showing is that it is inconsistent with our underlying set of axioms. Could there ever be a case where, for some $P$ and some set of axioms, $P$ and $¬P$ are both inconsistent with those axioms (or both consistent, for that matter)?
-
-REPLY [52 votes]: The situation you ask about, where $P$ is inconsistent with our axioms and $\neg P$ is also inconsistent with our axioms, would mean that the axioms themselves are inconsistent. Specifically, the inconsistency of $P$ with the axioms would mean that $\neg P$ is provable from those axioms. If, in addition, $\neg P$ is inconsistent with the axioms, then the axioms themselves are inconsistent --- they imply $\neg P$ and then they contradict that. (I have phrased this answer so that it remains correct even if the underlying logic of the axiom system is intuitionistic rather than classical.)
-
-REPLY [38 votes]: It is possible for both $P$ and $ \neg P $ to be consistent with a set of axioms. If this is the case, then $P$ is called independent. There are a few things known to be independent, such as the Continuum Hypothesis being independent of ZFC.
-It is also possible for both $P$ and $ \neg P $ to be inconsistent with a set of axioms. In this case the axioms are considered inconsistent.
Inconsistent axioms result in systems which don't work in a way that is useful for engaging in mathematics.
-Proof by contradiction depends on the law of the excluded middle. Constructivist mathematics, which uses intuitionistic logic, rejects the use of the law of the excluded middle, and this results in a different type of mathematics. However, this doesn't protect them from the problems resulting from inconsistent axioms.
-There are logical systems called paraconsistent logic which can withstand inconsistent axioms. However, they are more difficult to work with than standard logic and are not as widely studied.<|endoftext|>
-TITLE: Can I choose $k+1$ hypersurfaces to avoid a fiber of dimension $k$ in projective space?
-QUESTION [6 upvotes]: Let $X$ be a closed subscheme of dimension $k$ in $\mathbb{P}^n_A$, where $A$ is a Noetherian ring. In Exercise 11.3.C of Ravi Vakil's notes, it is shown that one may choose $k+1$ hypersurfaces such that the intersection of these hypersurfaces avoids $X$. This uses Krull's Principal Ideal Theorem, which is why $A$ must be Noetherian.
-Let $\pi: X\rightarrow \text{Spec}(A)$ be the structure morphism.
-My question is this: If $p$ is a point of $\text{Spec}(A)$, and $\pi^{-1}(p) \subset X$ is the fiber of $p$ in $X$ of dimension $r$, can I still find $r+1$ hypersurfaces whose intersection avoids the fiber? If $p$ is closed this would be immediate as then the fiber is closed, but in general this will not be true.
-Note that my question is inspired by this question: Upper semicontinuity of fibre dimension on the target, where showing one can avoid a fiber of a given dimension is crucial to showing upper semicontinuity of fiber dimension.
-
-REPLY [3 votes]: This is more or less a copy of part of my answer here, which I copied over rather than just linked to since that answer contains a lot of material irrelevant to this question. I also don't think that either question is a duplicate of the other; this question is interesting even without applying it to the linked one.
-Since $\pi: X \rightarrow \operatorname{Spec}(A)$ factors through $\mathbb{P}_A^n$, the map $\pi^{-1}(p)\rightarrow \operatorname{Spec}(\kappa(p))$ factors through $\mathbb{P}_{\kappa(p)}^n$, and since $X \rightarrow \mathbb{P}_A^n$ is a closed embedding, so is $\pi^{-1}(p)\rightarrow \mathbb{P}_{\kappa(p)}^n$:
-$$
-\require{AMScd} \begin{CD}
-    \pi^{-1}(p) @>>>\mathbb{P}_{\kappa(p)}^n@>>>\operatorname{Spec}(\kappa(p))
-    \\ @VVV @VVV @VVV\\
-    X @>>> \mathbb{P}_A^n @>>> \operatorname{Spec}(A)
-  \end{CD}
-$$
-But now $\pi^{-1}(p)$ is a closed subscheme of $\mathbb{P}_{\kappa(p)}^n$, of dimension $\leq r$, and so we may find $r+1$ hypersurfaces $H'_i$ in $\mathbb{P}_{\kappa(p)}^n$ such that $\pi^{-1}(p)\cap\bigcap_i H'_i = \emptyset$. We can now take hypersurfaces $H_i \subset \mathbb{P}_A^n$ that have pre-image $H'_i$ in $\mathbb{P}_{\kappa(p)}^n$ (we can choose some lift of the defining equations of the hypersurfaces from polynomials over $\kappa(p)$ to polynomials over $A$). Then it must be the case that $\pi^{-1}(p)\cap\bigcap_i H_i = \emptyset$.<|endoftext|>
-TITLE: Is $ \sum_{x=1}^{\infty} \frac{1}{2^x} = 1 $ an abuse of notation?
-QUESTION [5 upvotes]: Is $ \sum_{x=1}^{\infty} \frac{1}{2^x} = 1 $ correct, or is it just a shortened way to say $ \lim_{y \to \infty } \sum_{x=1}^{y+1} \frac{1}{2^x} = 1 $?
-In a softer way of asking this question: if we could add up all the numbers of the sequence, would it equal 1, or are we treating the limit of the partial sums as the sum as a matter of convenience?
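-A small numerical sketch of those partial sums (Python, with exact rational arithmetic so no rounding is involved):
-
-from fractions import Fraction
-
-partial = Fraction(0)
-for x in range(1, 11):
-    partial += Fraction(1, 2**x)
-    print(x, partial)  # 1/2, 3/4, 7/8, ...: the x-th partial sum is 1 - 2**(-x)
-
-Every partial sum falls short of 1; it is their limit that equals 1.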
-
-It was something I was wondering about when I was recalling the classic joke "$\aleph_0$ mathematicians walk into a bar; the first orders half a glass of beer..." and thinking a little philosophically about whether the sum was "really" the sum of the cups or something else.
-
-REPLY [15 votes]: Recall that $\sum\limits_{k=1}^{\infty}x_k$ is, by definition, $\lim\limits_{n \rightarrow \infty} s_n$, where, for every $n\geqslant1$, $s_n=x_1+\cdots+x_n$. There is no abuse of notation. There may be, perhaps, a misinterpretation of notation, taking the notation to mean more than it describes, but that is on the reader's behalf.<|endoftext|>
-TITLE: Twin Primes (continued research)
-QUESTION [15 upvotes]: This has become increasingly crowded, so at the outset, let me state this:
-My question is, is there some reason this is so linear that I'm not seeing? The only thing it seems to indicate to me is that there truly must be infinitely many twin primes.
-I've previously posted a method that might have potential toward proving the twin prime conjecture:
-If each prime were a bucket filled with at least one unique twin prime, infinite primes (proven) would imply infinite twin primes (conjectured only). Bucket twin primes as follows:
-$(3pn-4, 3pn-2)$ where $p$ is a prime, and $n$ is some odd number less than $p$. Not only does each $p$ within the first 4,000 generate at least one twin prime, but the quantity of twin primes created follows a very linear pattern!
-This pattern appears more linear when considering primes of sufficiently large size. Also, rather than curving toward $0$, it actually appears to curve upward, lending credence to my proposition that there is no limit to the twin primes this pattern can create! Infinite twin primes created with finite steps at each iteration!
-Here's the Mathematica notebook for your exploration as well as an image of what it does:
-https://dl.dropboxusercontent.com/u/76769933/Twinprimeplotting.nb
-
-My question is, is there some reason this is so linear that I'm not seeing? The only thing it seems to indicate to me is that there truly must be infinitely many twin primes.
-Edit: A quick explanation of the graph: {x,y} points are created with {n, Length[twin]}. The x-axis then is "$n$", or the ordered number of primes. On this graph, displayed are the primes from 400 to 4000. The y-axis is the number of twin primes generated using $(3pn−4, 3pn−2)$ where $p$ is a prime, and $n$ is some odd number less than $p$. Thus each prime trends toward generating a greater number of twin primes, also with greater variability. Sorry for the lack of clarity.
-Also, here's a zoomed-in graph to see detail better, and a table of data points to consider:
-
-REPLY [9 votes]: I will add some extended comments. I do not know how to answer this question:
-OP. My question is, is there some reason this is so linear that I'm not seeing? The only thing it seems to indicate to me is that there truly must be infinitely many twin primes.
-In the above, I see one question, and two statements. The first statement is that the plot is "so linear". I am not convinced that it is indeed linear, and I comment on this below. (I admit it looked linear to me at first.) If the plot is not linear then formally the question is void, but I do find the plot interesting, and I provide some further support (in the form of more plots) that it is indeed more or less linear, if not in the sense of a "straight line" then at least in the sense of a "thin curve" (with a gradually decreasing slope, where perhaps the slope remains positive all the time, but who knows). I do not know if there is a reason for that (but on the other hand, there is a reason for everything :). From the plots below it seems that taking primes $p$ only, in $(3pn-4,3pn-2)$
-(as opposed to taking odd numbers $m$ in general, in $(3mn-4,3mn-2)$) does seem to contribute to the "thinness" of the curve. Finally, concerning the statement that there truly must be infinitely many twin primes, it doesn't look like use of the word "truly" on its own constitutes a proof ;)
-So, first I was confused as to what exactly was plotted, and eventually it was cleared up (to me, after guessing incorrectly twice) in the comments. The OP also edited the question to supply a clarification, but I find it confusing to use the same letter $n$ in two inconsistent ways, on one hand $p=p_n$, the $n$-th prime, and, on the other hand, odd $np_k$. This uniqueness argument fails if $p_k$ is replaced by a general odd number $m$, but I am not concerned about this. Indeed $3 m n -4\ge 3 m -4$, and the numbers $3 m -4$ go to infinity as $m$ goes to infinity, so if there are infinitely many odd $m$ for each of which there is at least one good odd $n$…<|endoftext|>
-TITLE: Why do these two surfaces have one end?
-QUESTION [6 upvotes]: I want to prove that the infinite-holed torus and the infinite-jail cell window have one end but the doubly infinite-holed torus doesn't; my definition of one end is the following:
-
-A locally compact (not compact) space $X$ has one end if for every compact $C \subset X$ there is a compact $K$ such that $C \subset K \subset X$ and $K^{c}$ is connected.
-
-But the thing is that I don't know how to wave my hands here. Can someone help me with this issue?
-
-The pictures (not reproduced here):
-
-Thanks a lot in advance.
-
-REPLY [2 votes]: Follow Andrew's suggestion: First, embed $X$ inside $\mathbb{R}^3$. Then, any compact $C \subset X$ is bounded in $\mathbb{R}^3$, so that you may find a large closed ball $B$ containing $C$. (For the infinite torus you also want to enlarge the ball so that it contains the entire "end" as well as $C$.) Finally take $K = B \cap X \supset C$.
-The problem with the doubly infinite torus is that $K^c$ will always be disconnected for any compact $C$ such that $C^c$ is disconnected.<|endoftext|>
-TITLE: What is the difference between a supremum and maximum; and also between the infimum and minimum?
-QUESTION [18 upvotes]: What is the difference between a supremum and maximum; and similarly the infimum and minimum? Also, how does one tell if they exist?
-Here is an example:
-
-$$x_n = \frac{n}{2n-1}$$
-Determine whether the maximum, the minimum, the supremum, and the
-  infimum of the sequence $x_n$, $n=1$ to $n=+\infty$, exist.
-
-My understanding:
-
-The limit is 1/2.
-$x_n$ is decreasing.
-The supremum exists as $n$ goes to $+\infty$.
-The infimum does not exist as limited by $n=1$, minimum = 1/2.
-The maximum and supremum exist, both equal to 1.
-
-REPLY [2 votes]: Apparently, you know how to obtain the limits (here: upper bound 1, lower bound 1/2). That in any case is then the supremum (or infimum at the lower end) - or the set is unlimited, e.g. the naturals on top, the reals both ways: Nothing exists then.
-Two cases: -(1) The limit is IN the set, e.g. actually attained by the sequence. -Then it is maximum AND supremum (or minimum AND infimum), e.g. 1. -(2) The limit is OUTSIDE the set, not attained by the sequence (though only barely missed). That is your case at the lower end, 1/2. -Then it is only supremum (or infimum) AND there is no maximum (minimum). -You might have some doubts coming in via the index n (your 3rd suggestion $n\to\infty$ and supremum)? -$n\to\infty$ has nothing to do with supremum ("large $n$"?). Only the $x_n$ themselves do. -Take-home bits: - -No bound - no ..mum, nothing. -Minimum = Infimum OR no Minimum. (ditto max=sup or no max). In particular, these can not differ!<|endoftext|> -TITLE: Computing the residue of $\frac{\cot(\pi z)}{z^2}$ at pole $z=0$ -QUESTION [6 upvotes]: To find the residue I used the residue theorem that states: -$$Res(f,z_0)=\frac{1}{(m-1)!}\lim_{z\to0}\frac{\mathrm{d}^{m-1}}{\mathrm{d}z^{m-1}}(z-z_0)^m f(z)$$ where $m$ is the order -Computing the Residue of $\dfrac{\cot(\pi z)}{z^2}$ -This is what I thought I was supposed to do for the singularity at $z_0=0$ of order $3$. -$$Res(f;0)=\frac{1}{2!}\lim_{z\to 0} \frac{\mathrm{d}^2}{\mathrm{d}z^2}z^2 \sin(\pi z)f(z)=-\frac{1}{2}\lim_{z \to 0} \pi^2 \cos(\pi z) =-\frac{\pi^2}{2}$$ -However the answer is $-\dfrac{\pi}{3}$ -I'd really appreciate any guidance in where I went wrong. - -REPLY [2 votes]: There is not quite enough shown of your work to figure out where the error is. However, I have used the same approach below, so perhaps you can find the step that differs. -$$ -\begin{align} -&\frac12\frac{\mathrm{d}^2}{\mathrm{d}z^2}\left(z^3\frac{\cot(\pi z)}{z^2}\right)\\ -&=\frac12\frac{\mathrm{d}^2}{\mathrm{d}z^2}\left(\frac{z\cos(\pi z)}{\sin(\pi z)}\right)\tag1\\ -&=\frac12\frac{\mathrm{d}}{\mathrm{d}z}\left(\frac{\sin(\pi z)\cos(\pi z)-\pi z}{\sin^2(\pi z)}\right)\tag2\\ -&=\pi\left(\frac{\pi z\cos(\pi z)-\sin(\pi z)}{\sin^3(\pi z)}\right)\tag3\\ -&=\pi\left(\frac{\pi z\cos^2(\pi z)-\sin(\pi z)\cos(\pi z)}{\sin(\pi z)\cos(\pi z)\left(1-\cos^2(\pi z)\right)}\right)\tag4\\ -&=\pi\left(\frac{\pi z\left(\cos^2(\pi z)\color{#C00}{-1}\right)+\pi z(\color{#C00}{1}-\color{#090}{\cos(\pi z)})-(\sin(\pi z)-\color{#090}{\pi z})\cos(\pi z)}{\sin(\pi z)\cos(\pi z)\left(1-\cos^2(\pi z)\right)}\right)\tag5\\ -&=\pi\left(-\frac{\pi z}{\sin(\pi z)\cos(\pi z)}+\frac{\pi z}{\sin(\pi z)\cos(\pi z)(1+\cos(\pi z))}+\frac{\pi z-\sin(\pi z)}{\sin^3(\pi z)}\right)\tag6\\ -&\to\pi\left(-1+\frac12+\frac16\right)\tag7\\[6pt] -&=-\frac\pi3\tag8 -\end{align} -$$ -Explanation: -$(1)$: simplify -$(2)$: take the derivative -$(3)$: take another derivative -$(4)$: apply $\sin^2(\pi z)=1-\cos^2(\pi z)$ and multiply by $\frac{\cos(\pi z)}{\cos(\pi z)}$ -$(5)$: add and subtract the red and green terms -$(6)$: separate and simplify the summands in the numerator -$(7)$: evaluate the limits using $\lim\limits_{x\to0}\frac{x}{\sin(x)}=1$ from this answer -$\phantom{\text{(7):}}$ and $\lim\limits_{x\to0}\frac{x-\sin(x)}{x^3}=\frac16$ from this answer<|endoftext|> -TITLE: Do all continuous real-valued functions determine the topology? -QUESTION [35 upvotes]: Let $X$ be a topological space. If I know all the continuous functions from $X$ to $\mathbb R$, will the topology on $X$ be determined? -I know the $\mathbb R$ here is somewhat artificial. So if this is wrong, will it be right if $X$ is a topological manifold? 
- -REPLY [24 votes]: Conifold's answer is essentially correct, but you should be careful what you mean by "determine the topology". Given any collection $I$ of real-valued functions on a set $X$, there is a natural topology you can impose on $X$, namely the coarsest topology that makes each element of $I$ continuous. Explicitly, this topology on $X$ is the collection of unions of finite intersections of sets of the form $f^{-1}(U)$ where $U\subseteq\mathbb{R}$ is open and $f\in I$. -Completely regular spaces are exactly the spaces $X$ such that their topology coincides with this natural topology induced by the set $I$ of all continuous real-valued functions on $X$. In particular, if you know a space is completely regular, then you can canonically recover its topology from the set of all continuous real-valued functions on it (so in this sense they "determine the topology"). However, this does not mean that its topology is the only possible topology on the set with the same collection of continuous real-valued functions. -For instance, let $T$ be the usual topology on $[0,1]$, and let $T'$ be the topology on $[0,1]$ consisting of sets of the form $U\setminus A$ where $U\in T$ and $A\subseteq\{1,1/2,1/3,\dots\}$ (intuitively, think of $T'$ as the usual topology modified so that the sequence $(1/n)$ no longer converges to $0$). Then a function $[0,1]\to\mathbb{R}$ is continuous with respect to $T$ iff it is continuous with respect to $T'$. Indeed, suppose $f:[0,1]\to\mathbb{R}$ is continuous with respect to $T'$. Then for any open $V\subseteq\mathbb{R}$ and any point $x\in f^{-1}(V)$ there is a neighborhood $W$ of $x$ such that $\overline{W}\subseteq f^{-1}(V)$, because $V$ contains a closed neighborhood of $f(x)$. But it is not hard to show that an open set $U\in T'$ contains such a $W$ around each of its points iff actually $U\in T$. It follows that $f$ is continuous with respect to $T$, not just with respect to $T'$. (The other implication is trivial, since $T\subset T'$.) -Thus in this example, even though the topology $T$ is completely regular, there is still another topology $T'$ with the same continuous real-valued functions. All that complete regularity guarantees you is that for any such topology $T'$, $T\subseteq T'$. (It also guarantees you that $T'$ cannot also be completely regular unless $T=T'$, so $T'$ must be somewhat pathological.)<|endoftext|> -TITLE: Uniqueness or non uniqueness of a pair of natural numbers -QUESTION [8 upvotes]: Let $1 -TITLE: Is there a math function to find an element in a vector? -QUESTION [16 upvotes]: I would like to write mathematically, if possible, the following statement: - -Given a vector $x=[1,4,5,3]$ and an integer $j=3$, find the position of $j$ in $x$? - -How to write this mathematically? -If I am looking for the position of the minimum value in $x$, I would achieve this by $\arg\min x$. -I guess $j^*=\operatorname{arg\,find} (x=j)$ but $\LaTeX$ does not recognize this. - -REPLY [2 votes]: Vector dot products do this. If you want the second element of a vector $X$, then $X \cdot \langle0, 1, 0, 0, 0\rangle $will give you that component if $X$ is a five-dimensional vector.<|endoftext|> -TITLE: Show that the intersection of any two intervals is an interval -QUESTION [6 upvotes]: So i've come across this question, with a follow up question of showing that the union of any two intervals need not be an interval. -I don't see how this could possibly be the case. 
The general structure of my proof would be to consider the case where: - -The intersection yields an empty set; -The intersection yields a set with 1 element; -The intersection yields a set with 2 or more elements; - -and then consider each case and show that each is an interval. -But a union can only yield one of those three possibilities too. To elaborate: If any of the three possibilities were not an interval, then an intersection is not necessarily an interval, so each of them must be an interval. But each of these possibilities for the intersection are also the only possibilities for a union, meaning a union of intervals must be an interval too, which is not true. -I'm obviously wrong, but why? - -REPLY [7 votes]: By definition, a set $A\subset{\mathbb R}$ is an interval if -$$\forall x, \ y\in A,\quad\forall t\in{\mathbb R}:\qquad x\leq t\leq y\quad\Rightarrow\quad t\in A\ .\tag{1}$$ -It is then obvious (on logical grounds, no case distinctions needed) that the intersection of two intervals is an interval. Of course it is allowed to go through the motions anyway: -Let $A$ and $B$ be intervals, let $x$, $y\in A\cap B$, and assume $x\leq t\leq y$. Then $t\in A$ as well as $t\in B$, hence $t\in A\cap B$. This shows that $A\cap B$ passes the test $(1)$. -Note that the claim would not be true if we would not accept the empty set as an interval.<|endoftext|> -TITLE: How can a probability density function (pdf) be greater than $1$? -QUESTION [21 upvotes]: The PDF describes the probability of a random variable to take on a given value: -$f(x)=P(X=x)$ -My question is whether this value can become greater than $1$? -Quote from wikipedia: -"Unlike a probability, a probability density function can take on values greater than one; for example, the uniform distribution on the interval $[0, \frac12]$ has probability density $f(x) = 2$ for $0 \leq x \leq \frac12$ and $f(x) = 0$ elsewhere." -This wasn't clear to me, unfortunately. The question has been asked/answered here before, yet used the same example. Would anyone be able to explain it in a simple manner (using a real-life example, etc)? -Original question: -"$X$ is a continuous random variable with probability density function $f$. Answer with either True or False. - -$f(x)$ can never exceed $1$." - -Thank you! -EDIT: Resolved. - -REPLY [2 votes]: Probability density functions are not probabilities, but , if $f(x)$ is a probability density function, then $P=\int_{x_0}^{x_1} f(x) dx$ is a probability and thus $\int_{x_0}^{x_1} f(x) dx \leq 1$ for all $x_0,x_1$ ($x_0\leq x_1$).<|endoftext|> -TITLE: Proof of the Arzelà–Ascoli Theorem -QUESTION [5 upvotes]: I'm stuck on a particular line of the proof of The Arzelà–Ascoli Theorem. -In lectures, we have: -$1.$ Defined equicontinuous as: - -Let $X$ be a metric space, $C(X) = \{f: X \rightarrow \mathbb{R}\text{ continuous} \}$ the space of continuous functions, $S \subset C(X)$. -Let $x \in X$ be a point. Then $S$ is equicontinuous at $x$ if $\forall \varepsilon > 0$, $\exists \delta > 0$ such that $y \in B(x, \delta)$, $f \in S$ $\implies$ $|f(x) - f(y)| < \varepsilon$. - -And obviously $S$ is equicontinuous if it is equicontinuous at all points of $X$. -$2.$ Stated The Arzelà–Ascoli Theorem as: - -Suppose that $X$ is a compact metric space and $S \subset C(X)$ is a subspace. -Then, $S$ is compact $\iff$ $S$ is closed, bounded, and equicontinuous. - -In the proof of the forward direction: -Closed and bounded are clear, so it remains to show that $S$ is equicontinuous. 
We already know that $S$ is totally bounded, so let $\varepsilon > 0$ and fix $x \in X$. Then $\exists F \subset S$ finite such that $S \subset \bigcup_{f \in F}B(f, \frac{\varepsilon}{3})$.
-Since $F$ is equicontinuous...
-This is the line that I get stuck at: why is $F$ equicontinuous?
-
-REPLY [4 votes]: Recall that for all $f \in C(X)$, $f$ is uniformly continuous because $X$ is compact.
-Given $F \subset C(X)$ a finite set, we show that it is equicontinuous. Fix $\varepsilon > 0$. We need to find some $\delta > 0$ such that some condition is satisfied.
-For all $f \in F$, by uniform continuity of $f$, there exists $\delta_f$ such that etcetera. Your required $\delta$ is $\delta = \min_{f \in F} \delta_f$.
-In particular, using the same argument, you can see that a finite union of equicontinuous sets is equicontinuous.<|endoftext|>
-TITLE: Homotopy equivalence of a space with the sphere
-QUESTION [13 upvotes]: I have some trouble with the following problem.
-A space $X$ is obtained by gluing two $2$-cells to a circle $S^1$ using maps winding $2$-times and $3$-times around $S^1$. Show that $X$ is homotopy equivalent to $S^2$.
-
-REPLY [2 votes]: Denote the $2$-cells of $X$ by $D_1$ and $D_2$. There are gluing maps $\partial D_1\to S^1,z\mapsto z^2$ and $\partial D_2\to S^1,z\mapsto z^3$.
-Consider the cellular decomposition of $S^2$ with two $2$-cells $D_1'$ and $D_2'$.
-Let $f_1:{D_1'}\to D_1$ be $z\mapsto z^3$, and $f_2:{D_2'}\to D_2$ be $z\mapsto z^2$. As is easy to see, the restrictions of $f_1$ and $f_2$ to the equator coincide, so we have a well-defined map $f:S^2\to X$. This map is a homology equivalence and $X$ is $1$-connected, therefore $f$ is a homotopy equivalence.<|endoftext|>
-TITLE: How is the existence of several different morphisms between two objects generalized in terms of the axioms of category theory?
-QUESTION [5 upvotes]: TL;DR I was confused: I viewed commutative diagrams in terms of objects, while in reality they express relationships between morphisms.
-According to the axioms of category theory, all we need to define a category are objects and morphisms.
-My question is, in these terms, shouldn't all morphisms with the same domain and codomain be identical?
-More formally, let A and B be objects in any category and let f and g be morphisms that convert A to B.
- f: A -> B
- g: A -> B
-Shouldn't the diagram that displays these objects and morphisms commute for all A, B, x and y?
-What is the reason it doesn't, and how is all this expressed in category theory terms?
-If we are operating in the category of sets, f and g aren't equal because the mapping between individual elements of the set A to elements of the set B in the two cases is not equivalent. But how is all this expressed in category theory terms?
-
-REPLY [5 votes]: For any two objects $A$ and $B$ in your category, you are given a set $Hom(A,B)$ of morphisms between $A$ and $B$. There is absolutely no reason why two morphisms with the same domain and codomain should be equal.
-For instance, in the category of sets, $Hom(A,B)$ is the set of functions between $A$ and $B$, and of course except in very special cases, there are many many functions between two sets.
-Actually, there is a name for a category where morphisms are determined by their domain and codomain : it's called a preorder.
-From the point of view of category theory, the fact that in the category of sets morphisms are functions and can be evaluated at elements is irrelevant (that's not completely true but it's true enough).
For a category theorist, two functions are not equal iff they give the same image for every element ; they are equal if... they are equal. -Let me be more precise : as I mentioned before, for any objects you are given a set of morphisms $Hom(A,B)$. Like in any set, you can decide equality of elements : if $f,g\in Hom(A,B)$ it makes sense to say that $f=g$ or $f\neq g$. And that is part of the category structure. Equality of morphisms is hardcoded in the structure, if you want (but this is true for elements of any set). -Now if $f,f':A\to B$ and $g,g':B\to C$ are morphisms, you may or may not have $g\circ f = g'\circ f'$. The fact that this equality holds or not is a judgement that has to be taken in the habitual sense of equality of objects. Like you don't have to wonder too hard what it means that $2\neq 3$ : they are just two different objects in the set $\mathbb{N}$. -Category theory is really combinatorics : arrows don't have to make sense as "functions" or whatever, even though in most usual cases they do. That's why we sometimes call them "arrows". A category is like an oriented graph with a little extra structure (composition), and equality of morphisms has to be interpreted as equality of two edges in a graph. It doesn't have to mean anything : two edges may or may not be the same, and that's it.<|endoftext|> -TITLE: How to interpret the cotangent bundle of a complex manifold? -QUESTION [11 upvotes]: Let $X$ be a complex manifold. I am not sure what people mean when they talk about the cotangent bundle $T^*X$ of $X$. I have two interpretations: - -At each point $x\in X$, $T_x^*X$ is the complex vector space dual to the complex vector space $T_xX$, i.e. $T_x^*X$ is the space of all complex-linear maps $T_xX\to\Bbb C$. -At each point $x\in X$, $T_x^*X$ is the dual space to the real vector space $T_xX$, i.e. the space of all real-linear maps $T_xX\to\Bbb R$. - - -Which one is the right interpretation? - - -Thinking: -I was first thinking that there is a natural isomorphism between the two, but it doesn't seem like so. If I try to get a real isomorphism -$$(T_xX)_{\Bbb R}^*\to (T_xX)^*_{\Bbb C},$$ -where the first space are the real-linear maps $T_xX\to\Bbb R$ and the second one are the complex linear maps $T_xX\to\Bbb C$ (but viewed as a real vector space), then the isomorphism is always basis dependent. -Note that -$$\dim_{\Bbb R}(T_xX)^*_{\Bbb R}=\dim_{\Bbb R} T_xX=2\dim_{\Bbb C}T_xX=2\dim_{\Bbb C}(T_xX)^*_{\Bbb C}=\dim_{\Bbb R}(T_xX)^*_{\Bbb C},$$ -so the two spaces are indeed isomorphic. Although I cannot find a basis-independent isomorphism. - -REPLY [2 votes]: If $V$ is a vector space over $\mathbb{C}$, there is a natural $\mathbb{R}$-isomorphism $\varphi\colon V_{\mathbb{R}}^*\to V_{\mathbb{C}}^*$ defined by -$$ -\varphi(f)(v) \,=\, f(v) + i\,f(-iv). -$$ -for $f\in V_{\mathbb{R}}^*$ and $v\in V$, with inverse $\varphi^{-1}\colon V_{\mathbb{C}}^*\to V_{\mathbb{R}}^*$ defined by -$$ -\varphi^{-1}(g)(v) \,=\, \mathrm{Re}\bigl(g(v)\bigr). 
-$$ -Note that if $f\colon V \to \mathbb{R}$ is $\mathbb{R}$-linear, then $\varphi(f)$ is indeed $\mathbb{C}$-linear, since -$$ -\varphi(f)(iv) \,=\, f(iv) + i f(v) \,=\, if(v) - f(-iv) \,=\, i\bigl(f(v) + if(-iv)\bigr) \,=\, i\,\varphi(f)(v) -$$ -for any $v\in V$.<|endoftext|> -TITLE: Integral of product of two normal distribution densities -QUESTION [8 upvotes]: I want to compute the integral: - -$\displaystyle \int^{\infty} _{-\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{(y-x)^2}{2}} \frac{1}{\sqrt{2\pi}ab} e^{-\frac{x^2}{2(ab)^2}} dx$ - -Maybe we can use that for a normal distribution with mean $\mu$ and variance $\sigma^2$ we have -$\displaystyle \int^{\infty} _{-\infty} \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{(x-\mu)^2}{2 \sigma^2}} dx = 1$ -In an effort to write the integral in this form, I tried to take the exponents together. This gives: -$\displaystyle -\frac{(y-x)^2}{2} - \frac{x^2}{2(ab)^2} = \frac{-[(ab)^2 (y-x)^2 + x^2]}{2(ab)^2} = \frac{-[(ab)^2 (y^2 -2xy +x^2) + x^2]}{2(ab)^2}$ -But this leads to nowhere. Any suggestions? - -REPLY [5 votes]: Here are general formulas for multivariate Gaussian distribution in $\mathbb{R}^D$ (derivation): -$$\rho_{\mu, \Sigma}(x):= \frac{1}{\sqrt{|2\pi\Sigma|}} -e^{-\frac 12 (x-\mu)^T\Sigma^{-1} (x-\mu)}$$ -Integral of product of Gaussian distributions with covariance matrix $\Sigma$ and $\Gamma$, shifted by $\mu$ vector: -$$\int_{\mathbb{R}^D} \rho_{\mu, \Sigma}(x)\cdot\rho_{\mathbf{0},\Gamma}(x)\,dx=\frac{\exp\left(-\frac 12 (\mu^T\Sigma^{-1}(\Sigma^{-1}+\Gamma^{-1})^{-1} \Gamma^{-1}\mu) \right)} -{\sqrt{(2\pi)^D |\Sigma||\Gamma||\Sigma^{-1}+\Gamma^{-1}|}}$$ -For spherically symmetric $\Sigma=\sigma^2 \mathbf{I}$, $\Gamma=\gamma^2 \mathbf{I}$ shifted by any length $l$ vector it becomes: -$$\frac{\exp \left(-\frac 12 \frac {l^2}{\sigma^2+\gamma^2} \right)} -{\sqrt{2\pi \left(\sigma^{2}+\gamma^{2}\right)}^D}$$<|endoftext|> -TITLE: Hartshorne 4.1.6 Gonality of a curve -QUESTION [5 upvotes]: I have a question about the following exercise from Hartshorne's book 'Algebraic geometry': -Let $X$ be a curve of genus $g$. Show that there is a finite morphism $f:X\rightarrow \mathbb P^1$ with degree $\leq g+1$. -My idea is the following: We choose $g+1$ points $P_i$ in $X$. This gives us by a previous exercise (4.1.2) a rational function $r=\frac g h$ with poles at the $P_i$ and nowhere else. Now we define the map on closed points to be $x \mapsto [h(x):g(x)]$. As this map is non-constant, it is finite. -The fibre of $f^{-1}([1:0] )$ contains exactly the $P_i$ and hence the degree of $f$ is smaller than g+1. What obstructs us from choosing less than g+1 points in the beginning? -Sincerely -slin0 - -REPLY [3 votes]: Let $P$ be a point on $X$. Consider the divisor $D = (g+1)[P]$ on $X$. Let's compute a lower bound for the dimension of $\mathrm{H}^0(X,D)$. -By Riemann-Roch, $$\dim \mathrm{H}^0(X,D) = (g+1)+ 1- g+ \dim \mathrm{H}^1(X,D) \geq 2 + \dim \mathrm{H}^1(X,D) \geq 2.$$ Thus, there exists a non-constant $f$ in $\mathrm{H}^0(X,D)$. -Any non-constant $f$ in $\mathrm{H}^0(X,D)$ gives a finite morphism $f:X\to \mathbb P^1$ of degree at most the degree of $D$. 
Thus, as $\deg(D) = g+1$, there is a finite morphism $X\to\mathbb P^1$ of degree at most $g+1$.<|endoftext|>
-TITLE: Proving that Levi-Civita connection is preserved by isometries
-QUESTION [6 upvotes]: I am trying to prove that given two Riemannian submanifolds $S,S'$ with Levi-Civita connections $\nabla , \nabla'$ and an isometry $f$, then
-$$
-Df(\nabla_XY)=\nabla'_{X'}Y'
-$$
-where $X',Y'=Df(X),Df(Y)$.
-The argument is that if $\nabla''_XY=D f^{-1}(\nabla '_{X'}Y')$ is torsion-free and metric, then it is the Levi-Civita connection on $S$. I was trying to prove this is metric and this is what I get:
-$$
-g(\nabla''_XY,Z)+g(Y,\nabla''_XZ)=g(D f^{-1}(\nabla '_{X'}Y'),Z)+g(Y,D f^{-1}(\nabla '_{X'}Z'))
-$$
-$$
-=g'(\nabla '_{X'}Y',Z)+g'(Y',\nabla '_{X'}Z')
-$$
-as $f$ is an isometry, and as $\nabla'$ is Levi-Civita,
-$$
-=X'g'(Y',Z')=X'g(Y,Z)
-$$
-which should have been $Xg(Y,Z)$... Can someone explain what I am doing wrong? Also, is there a more efficient way to show what I want to prove instead of checking that $\nabla''$ is metric and torsion-free (and also that it is a connection)?
-This is not a duplicate: I showed my working and I want to know why my working does not work.
-
-REPLY [4 votes]: If $f:(S,g)\to (S',g')$ is an isometry, then define
-$\nabla_{X'}Y':=df\ \nabla_XY$
-Show that this is the LC-connection:
-(1) Compatibility condition: First show that $$ X'(Y',Z')=X(Y,Z)$$
-Proof: If $\frac{d}{dt}p(t)=X,\ p(0)=p$ then
-$$ df_p X(df_p Y, df_p Z) =\frac{d}{dt} (df Y, df Z)_{f(p(t))} =
-\frac{d}{dt} (Y,Z)_{p(t)}
-$$ since $f$ is an isometry, and $\frac{d}{dt} (Y,Z)_{p(t)}= X(Y,Z)$
-So $$ (\nabla_{X'}Y',Z')+(Y',\nabla_{X'}Z')=f^\ast g'( \nabla_XY,Z)
-+ f^\ast g' (Y,\nabla_XZ) = X(Y,Z) =X'(Y',Z') $$
-(2) Symmetry condition: $$ \nabla_{X'}Y' -\nabla_{Y'}X'=df
-(\nabla_XY-\nabla_YX)=df[X,Y]=[X',Y']$$<|endoftext|>
-TITLE: One of any consecutive integers is coprime to the rest
-QUESTION [23 upvotes]: After reading this question, I conjectured a generalization of it.
-
-Conjecture: Fix $k\in \mathbb N$. Then, for all $n\in \mathbb N$, one of $n+1,\ldots,n+k$ is coprime to the rest.
-
-I tried some elementary ways, but wasn't successful.
-Observation: One of the consequences of this conjecture is that there are infinitely many primes!
-
-REPLY [36 votes]: Surprisingly, the statement is false once $k\ge17$, and the shortest counterexample is the sequence of length $17$ beginning with $2184$. This was the result of a line of work beginning with Pillai, and finally wrapped up by Brauer. See S.S. Pillai on Consecutive integers research paper?.
-
-Pillai showed that it holds for $k<17$, but can fail for all $k$ between $17$ and $430$ - infinitely often, in fact!
-In a sequence of results, this was improved until eventually Scott showed that there are infinitely many counterexamples for $17\le k\le 2491906561$ . . .
-. . . and then Brauer showed that there are infinitely many counterexamples for any $k\ge 17$.<|endoftext|>
-TITLE: How to prove that field of rational functions is a *proper* subset of field of formal Laurent series?
-QUESTION [8 upvotes]: Now, if $F$ is a field, I can prove easily that $F(x)\subseteq F((x))$ but I'm having problems showing this is a proper inclusion.
-
-If for example $F=\Bbb R$ or $\Bbb C$, I can take a well-known function, say $\cos x$, and use its power series to show proper inclusion, because if
-$$\left(F((x))\ni\right)\;\cos x=1-\frac{x^2}2+\frac{x^4}{24}-\cdots=\frac{f(x)}{g(x)}\in F(x)$$
-then, since $f(x),g(x)\in F[x]$ (polynomials), we'd get that $\cos x$ has a finite number of zeros, which is absurd, and thus $\cos x\in F((x))\setminus F(x)$.
-My problem now is: what to do if the field $F$ is not one of the usual, infinite ones? For example, if $F$ has positive characteristic?
-Any input will be duly appreciated.
-
-REPLY [8 votes]: An easy argument : $1-x$ has a square root in $F((x))$ but not in $F(x)$.
-Indeed, in $F((x))$, take the usual Taylor expansion of $\sqrt{1-x}$ at $0$. But in $F(x)$, looking at the decomposition into irreducible factors easily shows that such a square root can't exist.
-If the characteristic of $F$ is $2$, this won't work; you will have to take a cubic root.
-Maybe I can comment a little on the argument : what I'm using is the fact that $F((x))$ is henselian, whereas $F(x)$ is not. This is a quite different argument from the one by @user26857 : they use the fact that some elements of $F((x))$ are not solutions of algebraic equations (over $F(x)$), I use the fact that some elements of $F((x))$ are solutions of such equations. This is really a different argument because there are henselian extensions of $F(x)$ which are algebraic over $F$ (namely the henselianization).<|endoftext|>
-TITLE: Is there no difference between upper triangular matrix and echelon matrix (row echelon matrix)?
-QUESTION [8 upvotes]: Source: Linear Algebra with Applications, Gareth Williams
-I see no difference between an upper triangular matrix and an echelon matrix (row echelon matrix). Then are they the same?
-
-Source: Linear Algebra with Applications, David C. Lay
-
-REPLY [13 votes]: To summarize the comments into an answer:
-The matrix
-$$\begin{pmatrix}1&2&3\\0&4&5\end{pmatrix} $$
-is echelon, but not triangular (because not square).
-The matrix
-$$\begin{pmatrix}1&2&3\\0&0&4\\0&0&5\end{pmatrix} $$
-is triangular, but not echelon (because the leading entry $5$ is not to the right of the leading entry $4$).
-However, for non-singular square matrices, "row echelon" and "upper triangular" are equivalent.<|endoftext|>
-TITLE: Combinations of four consecutive primes in the form $10n+1,10n+3,10n+7,10n+9$
-QUESTION [7 upvotes]: Here $n$ is some natural number. For example, among the primes $< 1000$ I found four such combinations:
-$$\begin{array}{cccc} 11 & 13 & 17 & 19 \\ 101 & 103 & 107 & 109 \\ 191 & 193 & 197 & 199 \\ 821 & 823 & 827 & 829 \end{array}$$
-Using Mathematica I was able to move further, so the sequence $n_k$ starts with:
-$$\{n_k\}=\{1,10,19,82,148,187,208,325,346,565,943,\dots\}$$
-A question already exists on this topic; however, there is not a lot of information there.
-I would like to know if the sequence $n_k$ has been studied before, and what we can tell about the distribution of $n_k$ among the natural numbers.
-Distances $n_{k+1}-n_k$ seem to grow on average, but 'close' quadruples still exist even for large $n_k$, for example:
-$$n_{872}=960055,~~~n_{873}=960058$$
-The plot of all the distances for $n_k<10^6$ is provided below (there are $898$ of them):
-
-As the author of the linked question stated, every $n_k$ has the form $3m+1$, so the distances are all divisible by $3$.
-So, the main thing I ask is some reference on the topic, or additional information about this sequence.
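-For readers without Mathematica, a rough Python equivalent of the search (a sketch using sympy's primality test; note that for $n\ge 1$ the four numbers are automatically consecutive primes once all four are prime, because the only other odd candidate between them, $10n+5$, is divisible by $5$):
-
-from sympy import isprime
-
-n_k = [n for n in range(1, 1000)
-       if all(isprime(10*n + r) for r in (1, 3, 7, 9))]
-print(n_k)  # should reproduce 1, 10, 19, 82, 148, 187, 208, 325, 346, 565, 943
-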
- -Found OEIS A007811 with some information - -REPLY [4 votes]: It is conjectured that the number of such $n_k$ up to $X$ is of the size (ignoring leading constants) -$$ \frac{X}{\log^4 X},$$ -and similarly that the number of $k$-tuples up to $X$ in fixed, admissible configurations is of the size -$$ \frac{X}{\log^k X}.$$ -Your tuple is $(10n+1, 10n+3, 10n+7, 10n+9)$. If you were to consider the $8$-tuple $(10n+1, 10n+3, 10n+7, 10n+9, 10n+91, 10n+93, 10n+97, 10n+99)$, (which I think is admissible but I didn't actually check), then it is conjectured that the number of such $8$-tuples up to $X$ is of the size -$$ \frac{X}{\log^8 X}.$$ -Notice that this is actually two of your $4$-tuples separated by $90$. So conjecturally we believe there should be infinitely "smallish" gaps between $4$-tuples of your shape. -More distribution-style statements can be made along these lines. You'll get very far by looking up the prime $k$-tuple conjecture and studying its progress and results.<|endoftext|> -TITLE: LU Decomposition vs. QR Decomposition for similar problems -QUESTION [6 upvotes]: Suppose I want to solve the 2D Poisson equation with Neumann boundary conditions. The solution is non-unique up to an additive constant. -I have previously asked a related question here for the 1D case, which may provide some context for this question: -Numerically Solving a Poisson Equation with Neumann Boundary Conditions -There are two problems, which I'll use different notation for: - -The "Original Equation" $A x = b$, where $A$ is $m \times m$ and has rank $m-1$. This equation is singular because its solution is unique only up to an additive constant, which this equation can not resolve. -The "Modified Equation" $C y = d$, where $C$ is $(m+1) \times m$ and has rank $m$. This equation adds uniqueness constraint to the original equation, making $C$ full-rank and the solution unique. - -In both cases, $x$ and $y$ should be identical, to machine precision. -This problem can be uniquely solved by specifying a uniqueness constraint. This is done differently for each approach (MATLAB notation): -% Generate A as an m-by-m matrix - -% Generate b as an m-by-1 column vector - - - -%% Original Equation -% Solve A*x==b for x -xp = A \ b; % "Primary" solution for x - % xp isn't unique, however: - % The uniqueness constraint must - % be applied before x==y. - -err_x = norm( A*xp - b, 2 ) - -% Impose the uniqueness constraint x(4) == 3.14159 -x = xp - xp(4) + 3.14159; % Now, x should equal y (to be calculated) - - -%% Modified Equation -% Add the constraint x(4) == 3.14159 -extraRow = zeros(1,m); -extraRow(4) = 1.0; -C = [A; extraRow]; % Add to the matrix A -d = [b; 3.14159]; % Add to the RHS vector, b - -% Solve C*y == d for y -y = C \ d; - -err_y = norm( C*y - d, 2 ) - -I have tried to solve these in MATLAB using the backslash operator (\ or mldivide()) which evaluates the matrix to be solved, then chooses an optimal algorithm to solve it. -In my own tests, MATLAB uses LU decomposition to solve the Original Equation and QR decomposition to solve the Modified Equation. -Test Calculation -I performed the above calculation for an example 2D problem to solve for $\Phi(x,z)$, with a problem size of $Nx=Nz=150$. 
-2D PDE: -$\nabla^2 \Phi = C_1 \frac{\partial f(x)}{\partial{x}}$ -Boundary conditions for x-boundaries: -$\frac{\partial \Phi}{\partial x} = C_1 f(x)$ -Boundary conditions for z-boundaries: -$\frac{\partial \Phi}{\partial z} = 0$ -Given the form of the source terms, the problem has an analytic, 1D solution for particular $f(x)$: $\frac{\partial \Phi}{\partial x} = C_1 f(x)$, or $\Phi(x,z) = C_1 \int f(x) dx$. -Error -I was surprised to find that the LU decomposition approach yielded far less error than the QR decomposition! (Specifically, err_x $\sim 10^{-11}$ was several orders of magnitude less than err_y $\sim 10^{-8}$.) -Speed -For a problem sizes of order $Nx = Nz \sim 100-400$, the modified approach (y = C\d), using QR decomposition, takes roughly twice as long as the original approach (xp = A\b), which uses LU decomposition? -My Question -Why? -What's going on for each approach? Is there a compelling reason that LU decomposition out-performs QR decomposition for this type of problem? If not, under what conditions would LU decomposition out-perform QR decomposition, or vice-versa? -(I'm curious how Gaussian Elimination with/without partial pivoting would compare, but that doesn't need to be part of this discussion.) -This question is definitely relevant (but not identical): https://scicomp.stackexchange.com/questions/1026/when-do-orthogonal-transformations-outperform-gaussian-elimination - -REPLY [9 votes]: I recognize that I'm probably far too late for my answer to be of much use to you, but I'll add an answer here for posterity. -First, regarding the choice of method: MATLAB has a very clear methodology by which it selects an particular algorithm to solve this type of equation, as is noted in the documentation. The choice of algorithm is essentially based on how much structure your matrix has that can be exploited to achieve better performance. In your case, the boundary conditions your problem imposes prevent your matrix from being e.g. symmetric, which would allow you to use a faster method. As a result, you end up using LU decomposition, which is one of the slower methods for solving $Ax = b$ when $A$ is square. In the Modified equation case, the imposition of the additional constraint makes your matrix non-square. The majority of common solution methods for systems of linear equations (including LU factorization) do not work for such matrices; in MATLAB, the fallback solution for these types of equations is the QR decomposition. This is why the two different problems you pose end up being solved using different methods. -Second, the speed issue is due precisely to the fact that LU factorization, despite being one of the slower methods available for square matrices, is still faster than QR factorization. The question you linked to actually explains this as well, but what it boils down to is that the number of operations for LU factorization is proportional to $\frac{2}{3}m^3$ (where m is the size of the matrix), whereas for QR factorization the operation count is $~\frac{4}{3}m^3$. These operation counts can be derived by walking through the algorithms for performing these factorizations and seeing how many computations are required for a matrix of size $m$. If you're interested in more detail, you can find more detailed explanations in e.g. Trefethen and Bau's Numerical Linear Algebra. -On the final topic of the error, I'm a little less certain what is going on. 
-On the final topic of the error, I'm a little less certain what is going on. The potential culprit that immediately stands out is the fact that in the first case you are applying the constraint after solving, whereas in the second case you do so before solving. This makes it not quite an apples-to-apples comparison. It would not surprise me if the matrix multiplication involved in solving for y after computing the QR decomposition led to more error propagation. While it is possible that it has to do with the condition numbers of the matrices, I think it's unlikely that this alone would lead to such a drastic difference between the LU and the QR results. The main reason I think this argument might have legs is that in QR factorization you need to compute an orthogonal matrix, and numerically it is very easy for that orthogonality to be slightly violated in ways that can affect the rest of your solution.<|endoftext|>
-TITLE: Prove that $a-b=b-a\Rightarrow a=b$ without using properties of multiplication.
-QUESTION [5 upvotes]: Yesterday my Honors Calculus professor introduced four basic postulates regarding (real) numbers and the operation $+$:
-
-(P1) $(a+b)+c=a+(b+c), \forall a,b,c.$
-(P2) $\exists 0:a+0=0+a=a, \forall a.$
-(P3) $\forall a,\exists (-a): a+(-a)=(-a)+a=0.$
-(P4) $a+b=b+a, \forall a,b.$
-
-And of course, we can write $a + (-b) = a-b$. Then he proposed a challenge, which was to prove that $$a-b=b-a\iff a=b$$ using only these four basic properties. The $(\Leftarrow)$ direction is extremely easy and we can prove it using only (P3), but I'm struggling to prove $(\Rightarrow)$, and I'm starting to think that it is not possible at all.
-My question is how to prove $(\Rightarrow)$, or how to prove that proving $(\Rightarrow)$ isn't possible, using only (P1), (P2), (P3), (P4)?
-
-REPLY [8 votes]: In fact, it is impossible to prove that result using only the information provided.
-To show that this is impossible, we can build a system that obeys the postulates but does not satisfy the given statement. In particular, we can consider the following system:
-
-The only numbers are $0$ and $1$
-$1+0=0+1=1$
-$0+0=1+1=0$ (so, $a = -a$ for $a = 0,1$)
-
-Now, show that $a = 0$ and $b = 1$ satisfy $(a-b) = (b-a)$ but $a \neq b$.
-
-What you can say (once you allow multiplication by integers) is that
-$$
-a - b = b - a \iff 2(a-b) = (a-b) + (a-b) = 0;
-$$
-in our system, however, multiplying anything by $2$ gives zero.
-
-REPLY [4 votes]: You are right: you can't prove it from those axioms alone, and here is why.
-Consider $A=\mathbb{Z}/2\mathbb{Z}$. Then $1-0=1=0-1$. But $0 \neq 1$.
-However, you can show the statement is true if $2 \neq 0$ and your ring is an integral domain, where $2:=1+1$. In fact,
-$a-b=b-a \implies a=b-a+b \implies 0=b-a+b-a \implies 0=2b-2a \implies 0=2(b-a) \implies b-a=0 \implies b=a.$<|endoftext|>
-TITLE: Example of a continuous function with a discontinuous inverse
-QUESTION [11 upvotes]: What is an example of a function $f: \Bbb R^n \rightarrow \Bbb R^m$ such that $f$ is continuous and injective but $f^{-1}$ is not continuous?
-Our professor teased us with the notion but I haven't been able to think of such a function.
-
-REPLY [12 votes]: Take $f:\mathbb{R} \rightarrow \mathbb{R}^2$ to be a function which traces out an eight-shaped figure in the way described here (as $x \rightarrow -\infty$ it tends to the origin, and likewise as $x \rightarrow \infty$).
-For topological reasons, the inverse cannot be continuous.
-Note that if $n=m$, then the inverse must be continuous; this is a result of the Invariance of Domain Theorem. (If $n=m=1$, a direct proof through methods of real analysis can easily be achieved.)
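-For concreteness, here is one explicit parametrization of such a figure-eight (this particular formula is my own choice; the construction linked above may differ in details). Take
-$$ f(t) = \big(\sin(4\arctan t),\ \sin(2\arctan t)\big), \qquad t \in \mathbb{R}. $$
-Writing $\theta = 2\arctan t \in (-\pi,\pi)$, this traces the Lissajous figure-eight $(\sin 2\theta, \sin\theta)$; one can check that it is continuous and injective, with $f(0)=(0,0)$ and $f(t)\to(0,0)$ as $t\to\pm\infty$. The inverse is then discontinuous at the origin, since points with $|t|$ arbitrarily large land arbitrarily close to $f(0)$.<|endoftext|>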
-TITLE: The equivalence of two definitions of closed subscheme, Vakil's Ex 8.1.K
-QUESTION [7 upvotes]: Generally in the literature, the definition of a closed embedding in the category of schemes is a morphism $\pi:X \rightarrow Y$ between two schemes such that $\pi$ induces a homeomorphism of the underlying topological space of $X$ onto a closed subset of the topological space of $Y$, and the induced map $\pi^\sharp : \mathcal{O}_Y \rightarrow \pi_*\mathcal{O}_X$ of sheaves on $Y$ is surjective.
-However, Vakil uses a different definition: $\pi:X \rightarrow Y$ is a closed embedding if $\pi$ is an affine morphism and for every affine open subset $\text{Spec}~B \subset Y$ with $\pi^{-1}(\text{Spec}~B)=\text{Spec}~A$, the induced ring homomorphism $B \rightarrow A$ is surjective.
-So Ex. 8.1.K is to show the equivalence of the two definitions. It is trivial that Vakil's definition implies the definition in the literature, but how does one show the other direction?
-
-REPLY [7 votes]: It is clear that the standard definition is local on $Y$, so by taking an affine open cover, it suffices to prove the case where $Y$ is affine, say $Y\cong \operatorname{Spec}(A)$.
-We thus have a morphism $\pi: X \rightarrow \operatorname{Spec}(A)$, determined by a ring map $\pi^{\sharp}(A): A \rightarrow \Gamma(X,\mathcal{O}_X)$. Let $I$ be the kernel of this map, so that $\pi$ factors as $\psi \circ \varphi :X\rightarrow\operatorname{Spec}(A/I)\rightarrow\operatorname{Spec}(A)$, where $\psi$ is the standard closed immersion and $\varphi^{\sharp}$ induces an isomorphism of global sections. We will show that $\varphi$ is an isomorphism.
-Since $\pi$ and $\psi$ are both closed immersions in the usual sense, we have e.g. that $\pi_*(\mathcal{O}_X)_{\varphi(p)} \cong \mathcal{O}_{X,p}$ and $\pi_*(\mathcal{O}_X)_q \cong 0$ if $q \not\in \operatorname{Im}(\pi)$. From this observation it follows that $\pi_p:A_{\pi(p)}\rightarrow \mathcal{O}_{X,p}$ is surjective, and so too is $\varphi_{p}:(A/I)_{\varphi(p)}\rightarrow \mathcal{O}_{X,p}$. Provided $\varphi_p$ is also injective, it is an isomorphism, which tells us in particular that $\operatorname{Im}(\varphi) = \operatorname{Spec}(A/I)$; hence $\varphi$ is also a homeomorphism of topological spaces*, and so an isomorphism of schemes.
-To prove injectivity, let $\mathcal{I} = \ker(\psi^{\sharp})$ and $\mathcal{J}=\ker(\pi^{\sharp})$. Clearly $\mathcal{I}\subset \mathcal{J}$, and on the distinguished affine open subset $D(f)$ we have $\mathcal{I}(D(f))= I_f$. $\mathcal{J}(D(f))$ is the kernel of the map $\pi^{\sharp}:A_f\rightarrow \Gamma(\pi^{-1}(D(f)),\mathcal{O}_X)$, and moreover $\pi^{-1}(D(f)) = X_{\pi^{\sharp}(f)}$. The key step now is to notice that $X$ is quasi-compact and quasi-separated (qcqs), since it is homeomorphic to a closed subset of an affine scheme, and affine schemes are qcqs. The qcqs lemma ($7.3.5$) tells us that there is a natural isomorphism $\Gamma(X,\mathcal{O}_X)_{\pi^{\sharp}(f)} \cong \Gamma(X_{\pi^{\sharp}(f)},\mathcal{O}_X)$. The naturality gives that the ring map $A_f\rightarrow \Gamma(X_{\pi^{\sharp}(f)},\mathcal{O}_X)$ is the localisation of the map $\pi^{\sharp}(A)$, so its kernel is $\mathcal{J}(\operatorname{Spec}(A))_f = I_f = \mathcal{I}_f$. Thus the two sheaves agree on a base of open sets, and so are equal; hence $\varphi$ is an isomorphism as discussed earlier.
-
-*We knew already that $\varphi$ was a homeomorphism onto a closed subset and that $\operatorname{Supp}(\varphi_*(\mathcal{O}_X))=\operatorname{Im}(\varphi)$. But if $\varphi_*(\mathcal{O}_X)\cong \mathcal{O}_{\operatorname{Spec}(A/I)}$, then the sheaves have the same support, namely the whole scheme.<|endoftext|>
-TITLE: Lebesgue Integral - graphical concept
-QUESTION [12 upvotes]: I am having problems visualizing the "mechanics" of the Lebesgue integral, but after much editing of the question I think I get it (at least for nice functions where measure theory can be somewhat taken for granted).
-So I decided to post the material I have been working on as a proposed answer.
-Part of the misunderstanding had to do with plots found online showing slabs of horizontal, brick-like constructs, as opposed to simple functions. In addition, the initial definitions in the chapter on Lebesgue integrals in A Garden of Integrals by Frank E. Burk:
-
-If a function $f$ is bounded measurable on the interval $[a,b]$ with
- $\alpha$<|endoftext|>
-TITLE: Motivation/Intuition behind Lorentz spaces
-QUESTION [12 upvotes]: My current understanding is that the Lorentz spaces $L^{p,q}$ arise naturally as interpolation spaces between $L^1$ and $L^\infty$, but then people often describe them heuristically by saying something along the lines of "Lorentz spaces provide a finer control than $L^p$ spaces", and this is where I'm lost: what does that really mean?
-It certainly seems like a reasonable claim, if only because you now have an extra parameter to tweak, and since $L^{p,p}=L^p$, the Lorentz spaces are simply a larger class of spaces amongst which the classical $L^p$ spaces live. So sure, they are "better" because there are more of them and I can give more nuanced descriptions, but I don't really understand where the nuance lies; I don't understand what extra control the Lorentz spaces provide that the usual $L^p$ spaces do not.
-I feel like my question is very vague overall, so feel free to ask for clarifications. An example of the type of answer I have in mind is suggested by the following cryptic (to me, anyway) comment on the Wikipedia page for "Lorentz spaces": "The Lorentz norms provide tighter control over both qualities than the $L^{p}$ norms, by exponentially rescaling the measure in both the range (p) and the domain (q)". I have no idea what that means; if anyone does, please let me know, but it seems like, after clarification, it would provide a nice intuitive explanation for precisely how Lorentz spaces provide finer control than $L^p$ spaces do.
-
-REPLY [2 votes]: Note that if $F$ is the distribution function of $f$, and $f = H \cdot \mathbf{I}_E$, where $E$ is a measurable set with $|E| = W$, then $F = W \cdot \mathbf{I}_{[0,H]}$. Thus, in some sense, the distribution function switches the domain and range of a function, so that the 'range' of $f$ is the 'domain' of $F$, and vice versa. In particular, the $L^p$ norms
-$$ \| f \|_p = \left( \int |f(x)|^p\; dx \right)^{1/p} \sim \left( \int_0^\infty F(t) t^p \frac{dt}{t} \right)^{1/p} $$
-try to understand the distribution of $f$ by scaling its range, or equivalently by scaling the 'domain' of $F$ (changing the power of $t$ in the equation). Conversely, the Lorentz norms
-$$ \| f \|_{p,q} \sim \left( \int_0^\infty (tF(t)^{1/p})^q \frac{dt}{t} \right)^{1/q} $$
-have two separate powers $p$ and $q$. Here $p$ scales the domain of $f$, and $q$ scales the domain and range of $f$ simultaneously.
-We changed $F(t)$ from appearing linearly to appearing with the power $1/p$, but this is only a cosmetic change, because $(t^pF(t))^{q/p} = (tF(t)^{1/p})^{q}$, so that
-$$ \left( \int_0^\infty (t^pF(t))^{q/p} \frac{dt}{t} \right)^{1/q} \sim \| f \|_{p,q}. $$
-The reason that $q$ needs to scale the domain and range simultaneously is so that it acts as a second-order parameter for the family of quasinorms, when compared to the primary exponent, which is $p$.<|endoftext|>
-TITLE: Calculation of $\lim_{n\rightarrow\infty}\frac{3^{3n}\cdot (n!)^3}{(3n+1)!}=$
-QUESTION [10 upvotes]: Calculation of $$\lim_{n\rightarrow\infty}\frac{3^{3n}\cdot (n!)^3}{(3n+1)!}=$$
-
-$\bf{My\; Try::}$ Using the Stirling approximation $\displaystyle \left(n!\approx\left(\frac{n}{e}\right)^n\sqrt{2\pi n}\right)$, we get the limit
-$$l=\lim_{n\rightarrow\infty}\frac{3^{3n}\cdot \left(\frac{n}{e}\right)^{3n}\left(\sqrt{2\pi n}\right)^3}{\left(\frac{3n+1}{e}\right)^{3n+1}\sqrt{2\pi (3n+1)}} = \frac{2\pi}{3\sqrt{3}}$$
-My question is: how can we solve it using a Riemann sum (limit as a sum) or any other method?
-Help me, thanks.
-
-REPLY [3 votes]: First of all, let's see what the result should be for general $k$.
-$\begin{array}\\
-f(k, n)
-&=\dfrac{k^{kn} (n!)^k}{(kn)!}\\
-&\approx \dfrac{k^{kn} \left(n^n\sqrt{2\pi n}\,e^{-n}\right)^k}{(kn)^{kn}\sqrt{2\pi kn}\,e^{-kn}}\\
-&= \dfrac{k^{kn} n^{kn}(2\pi)^{k/2} n^{k/2}e^{-kn}}{k^{kn}n^{kn}\sqrt{2\pi kn}\,e^{-kn}}\\
-&= \dfrac{(2\pi)^{(k-1)/2} n^{(k-1)/2}}{\sqrt{k}}\\
-\end{array}
-$
-As a check, this gives $f(3, n) \approx \dfrac{2\pi n}{\sqrt{3}}$, which agrees with your result once you divide by the extra factor $3n+1 \approx 3n$, since $(k-1)/2 = 1$ for $k=3$.
-Note that the $(3n+1)!$ was sort of a fake, since $3n\,(3n)!$ would have given the same limit.
-If you want to do some sort of Riemann sum, you would have to take logs and use $\ln(n!) =\sum_{i=1}^n \ln(i) \approx \int_{1}^n \ln(x)\,dx =(x \ln(x)-x)\big|_1^n =n\ln(n)-n+1 \approx n\ln(n)-n$.
-This is quite close to Stirling's $\ln(n!) \approx n\ln(n)-n+\frac12(\ln(n)+\ln(2\pi))$.
-The reason is that if $f$ is monotonic then the error in using $\sum_{i=1}^n f(i) \approx \int_{1}^n f(x)\,dx$ is bounded by $f(1)+f(n)$, which is, in this case, $\ln(n)$.
-This can be proved by using $\min(f(n), f(n+1)) \le \int_n^{n+1} f(x)\, dx \le \max(f(n), f(n+1))$.
-What you get for $\ln f(k, n)=g(k, n)$ is
-$\begin{array}\\
-g(k, n)
-&\approx kn\ln(k)+k\ln(n!)-\ln((kn)!)\\
-&=kn\ln(k)+k(n\ln(n)-n)-(kn\ln(kn)-kn)\\
-&=kn\ln(k)+kn\ln(n)-kn-(kn(\ln(k)+\ln(n))-kn)\\
-&=0\\
-\end{array}
-$
-The reason for this is that the error in $\ln(n!) \approx n\ln(n)-n$ is of order $\ln(n)$, so the error in $\ln((n!)^k)$ is of order $k\ln(n) =\ln(n^k)$, which fits the precise result nicely.
-In other words, you would have to use a more precise approximation to $\sum_{i=1}^n \ln(i)$, and the best you could do would be the same as Stirling's approximation with the constant ($\sqrt{2\pi}$) left undetermined.
-More precisely, if you used $\ln(n!) \approx n\ln(n)-n+\frac12 \ln(n)+c$, you would get $\ln(f(k, n)) \approx (k-1)c+\frac{k-1}{2}\ln(n)-\frac12 \ln(k)$.
-And that's about all I can think of.
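-For what it's worth, the constant is easy to sanity-check numerically; a minimal MATLAB sketch (the value $n=10^4$ is arbitrary, and gammaln is used to avoid overflowing the factorials):
-n = 1e4;
-val = exp(3*n*log(3) + 3*gammaln(n+1) - gammaln(3*n+2));  % 3^(3n)*(n!)^3/(3n+1)!
-[val, 2*pi/(3*sqrt(3))]                                   % both ~1.2092
-The agreement improves like $1/n$, as expected from Stirling's expansion.<|endoftext|>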
-TITLE: Game: two pots with coins
-QUESTION [5 upvotes]: Rules of the game with two players:
-The first player puts any number of coins in the first pot. Then the second player, knowing that number, puts any number of coins in the second pot.
-Then, in turns (beginning with the first player), each does one of these things: take any number of coins from one pot, OR take an equal number of coins from both pots. Whoever cannot take a coin loses. Players always know how many coins there are in the pots.
-Who wins, and what is the perfect strategy?
-I managed to solve it, but the solution is very complicated and is based on a fact which I just guessed, but cannot deduce without the full solution. So I am looking for a nicer approach to this problem.
-The easy part is determining that player two wins: if there is a smallest winning initial donation by the first player, then the first move of player one cannot be taking coins just from the second pot, and this fact leads to infinitely many winning positions for player two with a bounded number of coins in the first pot, which is impossible (since there cannot be two winning positions for player two with the same number of coins in the first pot).
-
-REPLY [3 votes]: The game they play after the coin numbers are chosen is known as Wythoff's game. Here's a plot of the losing positions (from the Wikipedia article):
-
-The two lines are complementary Beatty sequences, which implies that for every number chosen by the first player, the second player can choose a number such that the resulting position is a losing position for the first player.<|endoftext|>
-TITLE: How to prove that a complex number is not a root of unity?
-QUESTION [30 upvotes]: $\frac35+i\frac45$ is not a root of unity though its absolute value is $1$.
-
-Suppose I don't have a calculator to compute its argument; how do I prove it?
-Is there any approach from abstract algebra, or can it be done simply using complex numbers?
-Any help will be truly appreciated.
-
-REPLY [4 votes]: Let $a=\frac{3}{5}+\frac{4}{5}i$. If $a$ were a root of unity, there would exist a positive integer $n$ so that
-$$a^n-1=0.$$
-Expressions of the form $x^n-1$ can be factored into products of cyclotomic polynomials:
-$$x^n-1=\prod_{d|n}\Phi_d(x).$$
-There are several properties of cyclotomic polynomials that are relevant here:
-
-$\Phi_d(x)\in\mathbb{Z}[x]$. That is, all its coefficients are integers.
-$\Phi_d(x)$ is monic, meaning that the coefficient of the highest-powered term is $1$.
-$\Phi_d(x)$ is irreducible over $\mathbb{Q}$. That is, it cannot be factored into two non-constant polynomials with rational coefficients.
-
-Because the $\Phi_d$ are irreducible, the minimal polynomial of any root of unity must be one of them. The minimal monic polynomial $p(x)\in\mathbb{Q}[x]$ for $a$ is readily calculated to be
-$$p(x)=x^2-\frac{6}{5}x+1.$$
-In other words, $p(x)$ is the lowest-degree monic polynomial with rational coefficients where $p(a)=0$. Because of the non-integer $-\frac{6}{5}$ coefficient, $p(x)\neq\Phi_d(x)$ for any positive integer $d$, and therefore $a$ is not a root of unity.<|endoftext|>
-TITLE: There are $5$ apples $10$ mangoes and $15$ oranges in a basket.
-QUESTION [7 upvotes]: There are $5$ apples, $10$ mangoes and $15$ oranges in a basket. Find the number of ways of distributing the fruits so that each of $2$ persons receives $15$.
-Can I approach this question as counting the number of ways of giving $15$ of the fruits to one person, since every time the first person gets $15$ fruits, there will be $15$ fruits left in the basket for the second?
-
-REPLY [4 votes]: You can give any combination of $0$-$5$ apples and $0$-$10$ mangoes to person $A$; the balance needed to make $15$ will be oranges, and at most $15$ oranges are ever required.
-Thus there are $6\times11 = 66$ ways.
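-If you want to double-check the count by brute force, a tiny MATLAB sketch:
-cnt = 0;
-for a = 0:5                % apples given to A
-  for m = 0:10             % mangoes given to A
-    o = 15 - a - m;        % oranges make up the balance
-    if o >= 0 && o <= 15   % feasible for both persons
-      cnt = cnt + 1;
-    end
-  end
-end
-cnt                        % returns 66
-Every pair of apple and mango counts turns out to be feasible, which is why the answer is exactly $6\times 11$.<|endoftext|>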
-TITLE: $L^1$ convergence of PDFs vs $L^2$ convergence of CDFs
-QUESTION [6 upvotes]: Let $f_n$ denote a sequence of PDFs, and $F_n$ the corresponding sequence of CDFs. Given $L^1$ convergence of the PDFs to some PDF $f$,
-$$\int_\mathbb{R} |f_n(x) -f(x)|\, dx \rightarrow 0,$$
-does this imply $L^2$ convergence of the corresponding CDFs to the corresponding CDF $F$,
-$$\int_\mathbb{R} \left(F_n(x) - F(x)\right)^2 dx \rightarrow 0\,?$$
-
-REPLY [3 votes]: First note that $0\leqslant F(x),F_n(x)\leqslant 1$ for all $n,x$, so $\sup_{n,x}|F_n(x)-F(x)|\leqslant 2$. It follows that $$(F_n(x)-F(x))^2\leqslant 2|F_n(x)-F(x)|\quad\text{for all }n,x.$$
-From $L^1$ convergence of $f_n$, continuity of $F_n, F$, and nonnegativity of $f_n,f$, we have for each $t\in\mathbb R$
-\begin{align}
-|F_n(t)-F(t)| &= \left| \int_{-\infty}^t f_n(x)\ \mathsf dx -\int_{-\infty}^t f(x)\ \mathsf dx\right|\\
-&\leqslant \int_{-\infty}^t |f_n(x)-f(x)|\ \mathsf dx\\
-&\leqslant \int_{\mathbb R} |f_n(x)-f(x)|\ \mathsf dx\stackrel{n\to\infty}\longrightarrow 0,
-\end{align}
-and hence $F_n$ converges in distribution to $F$. In fact, as $F$ and $F_n$ are bounded, continuous, and monotone, it follows from Pólya's extension of Dini's theorem (cf. Problems and Theorems in Analysis I, p. 270) that $F_n$ converges uniformly to $F$ on any compact subset of $\mathbb R$. Since $F$ is a CDF, we may extend it to a continuous function $\overline F$ on the extended real line $[-\infty,\infty]$ by $\overline F(-\infty)=0$, $\overline F(\infty)=1$ (and similarly for $F_n$), and therefore $F_n$ converges uniformly to $F$ on $[-\infty,\infty]$. In particular, given $\varepsilon>0$ we may choose $N$ so that $n\geqslant N$ implies $$\sup_{x\in\mathbb R}|F_n(x)-F(x)|<\varepsilon. $$ Then $$\int_{\mathbb R} (F_n(x)-F(x))^2\ \mathsf dx\leqslant 2 \int_{\mathbb R}|F_n(x)-F(x)|\ \mathsf dx\stackrel{n\to\infty}\longrightarrow 0,$$
-so that $F_n$ converges to $F$ in $L^2$.<|endoftext|>
-TITLE: Show that a radical ideal has no embedded prime ideals.
-QUESTION [8 upvotes]: Let $A$ be a commutative ring and $I$ a decomposable ideal. Let $I=\bigcap_{k=1}^{n} I_k$ be a minimal primary decomposition. Show that if $I=\sqrt{I}$ then $I$ has no embedded prime ideals.
-
-(I noticed that $I=\bigcap_{k=1}^{n} P_k$ where $P_k=\sqrt{I_k}$, $\forall k\in\lbrace1,...,n\rbrace$. I have to show that $P_i \nsubseteq P_j, \forall i\neq j$.)
-
-REPLY [6 votes]: Since $I=\bigcap_1^nQ_i$ is a minimal primary decomposition, by the uniqueness theorem for primary decompositions the number $n$ is determined by $I$; i.e., $I$ has no primary decomposition with fewer than $n$ primary ideals. Now suppose $I=\sqrt{I}$; then $$I=\sqrt{I}=\sqrt{\bigcap_1^nQ_i}=\bigcap_1^n\sqrt{Q_i}=\bigcap_1^n P_i$$ is also a primary decomposition for $I$, where each $Q_i$ is $P_i$-primary. Now if for some $i\neq j$, say $i=1$ and $j=2$, we had $P_2\subset P_1$, then $I=\bigcap_2^nP_i$, which is a primary decomposition with $n-1$ elements, a contradiction.<|endoftext|>
-TITLE: Is the function $f(x,y,z)=\sqrt{x^2 +y^2} +z^2$ smooth?
-QUESTION [5 upvotes]: Let $f:\mathbb{R}^3 \rightarrow \mathbb{R}$ be defined by
-
-$$f(x,y,z)=\sqrt{x^2 +y^2} +z^2 .$$
-
-Is this function smooth? My head is telling me there should be a problem when $x=y=0$, but I'm not sure. Can anybody help me out?
-
-REPLY [4 votes]: Your intuition is correct: applying the definition shows that, e.g., the partial derivative $\frac{\partial f}{\partial x}$ does not even exist at $(0, 0, 0)$; indeed, along the $x$-axis we have $f(x,0,0)=|x|$, which is not differentiable at $0$. Hence $f$ is not even differentiable once there, let alone smooth.
\ No newline at end of file