diff --git "a/stack-exchange/math_stack_exchange/shard_105.txt" "b/stack-exchange/math_stack_exchange/shard_105.txt" deleted file mode 100644--- "a/stack-exchange/math_stack_exchange/shard_105.txt" +++ /dev/null @@ -1,5841 +0,0 @@ -TITLE: What's wrong with this use of Taylor's expansions? -QUESTION [7 upvotes]: I'm trying to find the value of the following limit: -$$ -\lim_{x \to 0} \frac{x^2\cos x - \sin(x\sin x)}{x^4} -$$ -Which I know equals to $-\dfrac13$. -I tried to do the following: -$$ -\lim_{x \to 0} \frac{x^2(1 - \frac{x^2}{2} + o(x^4)) - \sin(x(x + o(x^3)))}{x^4}\\ -= \lim_{x \to 0} \frac{x^2 - \frac{x^4}{2} + o(x^4) - \sin(x^2 + o(x^4)))}{x^4}\\ -= \lim_{x \to 0} \frac{x^2 - \frac{x^4}{2} + o(x^4) - x^2 + o(x^4)}{x^4} -= -\frac12 -$$ -The result is clearly wrong. I suspect the mistake to be in the expansion of $\sin (x \sin x)$ but I don't get it. -What's wrong? - -REPLY [2 votes]: I suspect the mistake to be in the expansion of $~\sin(x\sin x),$ but I don't get it. What's wrong ? - -Your error consists in using the double-approximation $\sin(x\sin x)\simeq x\sin x\simeq x^2,$ by applying -$\sin t\simeq t$ twice instead of just once, yielding the more accurate $\sin(x\sin x)\simeq x\sin x.$ The latter -leads to $~\lim\limits_{x\to0}~\dfrac{\cos x-\dfrac{\sin x}x}{x^2}=-\dfrac13,~$ which is different from $~\lim\limits_{x\to0}~\dfrac{\cos x-\color{red}1}{x^2}=-\dfrac12.$<|endoftext|> -TITLE: Why is the empty set convex? -QUESTION [5 upvotes]: Why is it the empty set, trivially convex? I see this results stated into a proof as something known, but I do not understand what's the idea idea behind it. How could I reason about convex combinations if the set has no elements? - -REPLY [9 votes]: Here's another approach, without thinking about the formal quantifier logic of it. -The intersection of any two convex sets is convex, yes? Well, what's the intersection of two disjoint circles?<|endoftext|> -TITLE: Length of the main diagonal of an n-dimensional cube -QUESTION [14 upvotes]: Find the length of a main diagonal of an n-dimensional cube, for example the one from $(0,0,...,0)$ to $(R,R,...,R)$ -I tried to use induction to prove that its $\sqrt{n}R$ but I'm stuck on writing the proof that for an n-dimensional cube, the perpendiculars that with that main diagonal compose the right-angled triangle are the main diagonal of the n-1-dimensional cube and another R-length-ed perpendicular -Thanks - -REPLY [5 votes]: I think this is basically what you've been trying to do, but here's a picture of a series of right angled triangles, each built using the hypotenuse of the previous triangle and a side of length $R$ as legs. The red triangle's hypotenuse is the diagonal of a square, the green triangle's hypotenuse is the diagonal of a cube, and the blue triangle's hypotenuse is that diagonal of the 4-cube. - -The only particular thing we must prove about this is that the chosen diagonal is perpendicular to the chosen edge at each step. Essentially, this is because, to extend the cube one dimension higher, we add a new side, perpendicular to all the other sides. A consequence of this is that any line drawn in the space of the original cube is perpendicular to the new edges - for instance, any line drawn on the bottom face of a cube is perpendicular to the edges connecting that face to the top face. -This is most simply a consequence of vectors: The set of vectors perpendicular to a given one is a linear subspace. 
-Since the diagonal of a cube is in the span of the edges of the cube and all of those are perpendicular to the new edge, we find that the diagonal is perpendicular to the new edge. Basically, extending a cube is adding a new vector perpendicular to everything we already had.
-One could state this property (sufficiently well for our purposes), without resorting to vectors, as saying:
-
-If $AB$ and $BC$ are perpendicular to $ED$, then $AC$ is perpendicular to $ED$.
-
-which could be proved using the law of cosines. Then, in our case, we can just apply that $AB$ and $BC$ are perpendicular to $ED$ by definition of a cube, thus so is $AC$. Then, again $CD$ is perpendicular to $ED$ and we just proved $AC$ was, meaning $AD$ is perpendicular to $ED$, which gets us the result we wanted.<|endoftext|>
-TITLE: Existence of solutions to first order ODE
-QUESTION [5 upvotes]: The fundamental theorem of autonomous ODE states that if $V:\Bbb R^n\to\Bbb R^n$ is a smooth map, then the initial value problem
-$$
-\begin{aligned}
-\dot{y}^i(t) &= V^i(y^1(t),\ldots,y^n(t)),&i=1,\ldots,n \\
-y^i(t_0) &= c^i, &i=1,\ldots,n
-\end{aligned}\tag{1}
-$$
-for $t_0\in\Bbb R$ and $c=(c^1,\ldots,c^n)\in\Bbb R^n$ has the following existence property:
-
-Existence: For any $t_0\in\Bbb R$ and $x_0\in\Bbb R^n$, there exist an open interval $J$ containing $t_0$ and an open subset $U$ containing $x_0$ such that for each $c\in U$, there is a smooth map $y:J\to\Bbb R^n$ that solves $(1)$.
-
-Now here is my question:
-
-Question: Suppose we already know that a solution exists with initial value $y(t_0)=x_0$ on an interval $J_0$ containing $t_0$. Can the interval $J$ above be assumed to contain $J_0$?
-
-A priori, there is nothing telling us that in the statement of the theorem. My question can be rephrased as follows.
-
-Reformulation of the Question: Let $y:J\to\Bbb R^n$ be a smooth solution to $(1)$ with initial value $y(t_0)=x_0$. Is there an open set $U$ containing $x_0$ such that for all $c\in U$ there is a smooth solution $z:J\to\Bbb R^n$ to $(1)$ with initial value $z(t_0)=c$?
-
-Edit: And what about the case where $J$ is a compact interval?
-
-REPLY [3 votes]: The answer to the reformulation is negative. Consider the problem
-\begin{equation}
-\begin{cases}
-y'=y^2 \\
-y(0)=c
-\end{cases}
-\end{equation}
-Its solution is $y(t)=\frac{1}{c^{-1}-t}$ and it is defined on $J_c=(-\infty, c^{-1})$. So, for example, the solution with initial datum $c=1$ is defined on $(-\infty, 1)$ while the solution with initial datum $1+\epsilon$ is defined on a strictly smaller interval, no matter how small $\epsilon$ is.<|endoftext|>
-TITLE: How to prove the parametric equation of an ellipse?
-QUESTION [5 upvotes]: The parametric equation of an ellipse is
-$$x=a \cos t\\y=b \sin t$$
-It can be viewed as taking the $x$ coordinate from a circle with radius $a$ and the $y$ coordinate from a circle with radius $b$.
-
-How can one prove that this is an ellipse by the definition of an ellipse (a curve on a plane that surrounds two focal points such that the sum of the distances to the two focal points is constant for every point on the curve) without using trigonometry or the standard equation of an ellipse?
-
-REPLY [8 votes]: Let $OA=a$ and $OB=b$ be the radii of the two circles, and let $C$, $C'$ be the foci of the ellipse, where $OC=OC'=c=\sqrt{a^2-b^2}$.
-If $H$ is the projection of $A$ on the major axis $DE$ and $P$ is the projection of $B$ on $AH$, then you want to show that $PC+PC'=2a$.
-Suppose, without loss of generality, that $C$ is the focus nearest to $P$.
-We have $PC^2=PH^2+CH^2$, but $PH=(b/a)AH$, $HC=|c-OH|$ and $AH^2+OH^2=a^2$, so that:
-$$
-\begin{aligned}
-PC^2&={b^2\over a^2}AH^2+(c-OH)^2={b^2\over a^2}(a^2-OH^2)+c^2+OH^2-2cOH\\
-&=a^2+{c^2\over a^2}OH^2-2cOH=\left(a-{c\over a}OH\right)^2,\\
-\end{aligned}
-$$
-and then $PC=a-(c/a)OH$. An analogous computation yields $PC'=a+(c/a)OH$,
-so that $PC+PC'=2a$, QED.<|endoftext|>
-TITLE: Real-analytic function of two complex variables, holomorphic in first and anti-holo in second, which vanishes on the diagonal is identically zero.
-QUESTION [8 upvotes]: The following theorem is stated as being a well-known result of the theory of several complex variables in a book I am reading (on a more or less unrelated subject):
-
-Let $f:\mathbb C^2\to\mathbb C$ be a real-analytic function such that
 $f$ is holomorphic in the first variable and anti-holomorphic in the
 second variable. If $f(z,z) = 0$ for all $z\in\mathbb C$, then
 $f=0$ identically.
-
-1) Can somebody point me to a reference where this is proved, or provide a proof that does not use extensive machinery of several complex variables?
-2) What is the necessity of specifying real-analyticity? Is it false that holomorphicity and anti-holomorphicity respectively in the two variables imply real-analyticity?
-
-REPLY [2 votes]: Assuming $f$ is real-analytic leads to a very simple proof; I wasn't aware of this result and know more or less nothing about several complex variables, but I found a proof more or less automatically:
-At least in some neighborhood of the origin we have $$f(z,w)=\sum_{n,m\ge0}a_{n,m}z^n\overline{w}^m.$$ So for $r>0$ small enough and $t\in\Bbb R$ we have $$0=\sum_{n,m\ge0}a_{n,m}r^{n+m}e^{i(n-m)t}.$$
-Uniqueness for Fourier series shows that for small enough $r>0$ and every $k\in\Bbb Z$ we have $$\sum_{n-m=k}a_{n,m}r^{n+m}=0.$$
-BUT given $N\ge0$ there exists at most one pair $(n,m)$ with $n-m=k$ and $n+m=N$. Which is to say that the coefficient of $r^{n+m}$ in that last series really is $a_{n,m}$, as it would appear. So one complex variable shows $$a_{n,m}=0.$$<|endoftext|>
-TITLE: Is there a symbol for plus and minus as opposed to plus or minus?
-QUESTION [17 upvotes]: I know that you can use $\pm$ for when the answer could be either positive or negative, e.g., $x^2=16$, $x=\pm 4$.
-But is there a symbol that implies that you use both the positive and the negative values? For example, I want to do something along the lines of:
-$$(2/3a) \left(\sqrt[3]{2b^3 - 9abc + \sqrt{-4(b^2-3a)}} + \sqrt[3]{2b^3 - 9abc - \sqrt{-4(b^2-3a)}}\right)$$
-It would be very useful to not have to write out the cube root twice and instead have a plus and minus sign before the square root.
-
-REPLY [11 votes]: No, and there's a good reason for it: it cannot convey the necessary information.
-How would the reader know that the intention is to add?
-What if you intended the "and" to be for multiplication?
-You need to denote the operator somehow, and that will take care of the "and" part by itself.<|endoftext|>
-TITLE: A doubt in the proof of Kuratowski theorem
-QUESTION [5 upvotes]: In trying to understand the proof of Kuratowski's theorem (namely, a graph is planar if and only if it contains no subdivision of $K_5$ or $K_{3,3}$) from this book (Page 299) I am first trying to understand the proof of the fact that a minimal non-planar graph where each vertex is of degree at least $3$ is $3$-connected.
-The book's proof is along the following lines. We start by noting that $G$ is $2$-connected.
-By way of contradiction, we then assume that $G=G_1\cup G_2$ with $V(G_1)\cap V(G_2)=\{x,y\}$, $|V(G_i)|\ge 3$. Let $P_i$ be an $(x,y)$ path in $G_i$ and $H_i=G_i+P_{3-i}$. Then $H_i$ is planar and we can embed $H_i$ into the plane so that the path $P_{3-i}$ is on the boundary of the unbounded domain (this can be achieved by inverting the plane with respect to an appropriate circle). (And then the proof continues.)
-I do not understand the last statement "we can embed $H_i$ into the plane so that the path $P_{3-i}$ is on the boundary of the unbounded domain". Can someone explain why this is so? I asked a more general question in this regard here but the answer to that does not solve the problem here. What is meant by "inverting the plane with respect to an appropriate circle"?
-
-REPLY [3 votes]: Planar graphs, by definition, are graphs that can be represented (drawn) in the Euclidean plane using distinct points for vertices and mutually disjoint continuous paths between those points for edges joining them. If we drop the "mutually disjoint" condition then any finite graph can be represented in that way.
-The definition would be an equivalent one if we replaced the Euclidean plane with the sphere. Stereographic projection from the North Pole onto the plane tangent to the South Pole establishes the necessary homeomorphism between the sphere (minus one point, the North Pole) and the Euclidean plane. Points close to the North Pole are projected onto points far away from the origin of the Euclidean plane and far away from the ensuing Euclidean representation of the graph.
-But the sphere can be rotated with respect to the graph so that the North Pole actually lies inside one of the inner loops. In fact for any given edge of the graph there exist rotations of the sphere that place the North Pole in a region adjacent to that edge.
-Applying stereographic projection to the new situation ensures that the given edge appears on the outside of the new Euclidean representation of the graph.
-My guess is that the author means by "circle" any bounded connected component of the complement of the graph representation. I have called this an "inner loop" but it is the same thing. When the author says "unbounded domain" they refer to the unique unbounded connected component of the complement of the graph representation.
-Now $P_{3-i}$ in the proof is a path, not an edge. But its topological role in $H_i$ is the same as if it were a single edge, because $P_{3-i}\cap G_i=\{x,y\},$ i.e., none of the intermediate points on the path has any edges (in $H_i$) apart from the two edges that make it part of the path. Therefore any circle of $H_i$ that is adjacent to a single edge of $P_{3-i}$ is adjacent to all edges of $P_{3-i}$ and consequently the above construction that rotates the North Pole next to an edge of $P_{3-i}$ ensures that all edges of the path $P_{3-i}$ are on the outside of the new Euclidean representation.<|endoftext|>
-TITLE: Real numbers $x,y$ satisfy $x^2+y^2=1$. If the minimum and maximum value of the expression $z=\frac{4-y}{7-x}$ are $m$ and $M$
-QUESTION [6 upvotes]: Real numbers $x,y$ satisfy $x^2+y^2=1$. If the minimum and maximum value of the expression $z=\frac{4-y}{7-x}$ are $m$ and $M$ respectively, then find $2M+6m.$
-
-Let $x=\cos\theta$ and $y=\sin\theta$, because $\sin^2\theta+\cos^2\theta=1$.
-Then we need to find the minimum and maximum value of the expression $\frac{4-\sin\theta}{7-\cos\theta}$.
-I differentiated it and equated it to zero to find the critical points or points of extrema.
-They are $\theta_1=\arcsin(\frac{1}{\sqrt{65}})-\arctan(\frac{7}{4})$ and $\theta_2=\arccos(\frac{1}{\sqrt{65}})+\arctan(\frac{4}{7})$
-I found $\frac{4-\sin\theta_1}{7-\cos\theta_1}$ and $\frac{4-\sin\theta_2}{7-\cos\theta_2}$.
-$\frac{4-\sin\theta_1}{7-\cos\theta_1}=\frac{3}{4}$ and $\frac{4-\sin\theta_2}{7-\cos\theta_2}=\frac{5}{12}$
-This method is full of lengthy calculations. I want to know whether an elegant solution is possible for this problem which is short and easy.
-
-REPLY [2 votes]: Let $\displaystyle k=\frac{4-y}{7-x}\Rightarrow 7k-kx=4-y\Rightarrow kx-y = 7k-4$ and given $x^2+y^2=1$
-Now using the Cauchy-Schwarz inequality, we get $$[k^2+(-1)^2](x^2+y^2)\geq (kx-y)^2$$
-So $$k^2+1\geq (7k-4)^2\Rightarrow 49k^2+16-56k\leq k^2+1$$
-So $$48k^2-56k+15\leq 0\Rightarrow (4k-3)(12k-5)\leq 0$$
-So we get $\displaystyle \frac{5}{12}\leq k\leq \frac{3}{4}$<|endoftext|>
-TITLE: A.s. equality between limsup of random variables
-QUESTION [8 upvotes]: "Let $(X_n)_{n\ge 1}$ be a sequence of uniformly bounded random variables defined on a probability space $(\Omega, \mathscr{F}, P)$. Moreover define $\mathscr{F}_0=\{\emptyset,\Omega\}$ and $\mathscr{F}_n=\sigma(X_1,\ldots,X_n)$ for each $n\ge 1$. Then with probability $1$ it holds
-$$
-\limsup_{n\to \infty} \frac{1}{n}\sum_{m=1}^n X_m=\limsup_{n\to \infty} \frac{1}{n}\sum_{m=1}^n \mathbf{E}[X_m|\mathscr{F}_{m-1}]."
-$$
-I couldn't find the proof of this fact, which is in some old article. How can we prove it?
-
-REPLY [3 votes]: This is a form of martingale convergence. Setting $Y_n=X_n-\mathbf{E}[X_n\mid\mathscr{F}_{n-1}]$, we need to show that
-$$
-\frac1n\sum_{m=1}^nY_m=\frac1n\sum_{m=1}^nX_m-\frac1n\sum_{m=1}^n\mathbf{E}[X_m\mid\mathscr{F}_{m-1}]\to0
-$$
-with probability one. The uniformly bounded hypothesis says that there is an $A > 0$ such that $\lvert X_n\rvert\le A$ for all $n$. In particular, $\mathbf{E}[X_n^2]\le A^2$ (which is all we really need), and this implies that $\mathbf{E}[Y_n^2]\le A^2$. Also, $\mathbf{E}[Y_n\mid\mathscr{F}_{n-1}]=0$ so the process $M_n=\sum_{m=1}^nY_m/m$ is a martingale. That is, $\mathbf{E}[M_n\mid\mathscr{F}_{n-1}]=M_{n-1}$. It is also $L^2$-bounded,
-$$
-\mathbf{E}[M_n^2]=\sum_{m=1}^n\mathbf{E}[Y_m^2/m^2]\le\sum_{m=1}^nA^2/m^2\le\frac{A^2\pi^2}6.
-$$
-Now, Doob's martingale convergence theorem says that $\sum_{m=1}^nY_m/m=M_n$ converges to a limit in $\mathbb{R}$ with probability one. Kronecker's lemma then gives
-$$
-\frac1n\sum_{m=1}^nY_m=\frac1n\sum_{m=1}^nm(Y_m/m)\to0
-$$
-with probability one.<|endoftext|>
-TITLE: Integral of Bessel function multiplied with sine $\int_0^\infty J_0(bx) \sin(ax) dx$.
-QUESTION [6 upvotes]: I need advice on how to solve the following integral:
-
-$$\int_0^\infty J_0(bx) \sin(ax) dx$$
-
-I've seen it referenced, e.g. here on MathSE, so I know the solution is $(a^2-b^2)^{-1/2}$ for $a>b$ and $0$ for $b>a$, but I don't know how to get there.
-I have tried to solve it by using the integral representation of the Bessel function and switching the integrals, resulting in
-$$
-\frac{1}{\pi}\int_0^\pi \int_0^\infty \sin(ax)\cos(bx\sin(\theta))dx d\theta.
-$$
-Doing the dx-integration, I get
-$$
-=\frac{1}{\pi}\int_0^\pi \frac{2a}{a^2-b^2\sin^2(\theta)}\left(1-\lim_{x\to\infty}\cos(ax)\cos(bx\sin(\theta))\right)d\theta
-$$
-and have no idea how to proceed from there.
-Is there anything wrong with my calculations? Should I use a totally different approach? Any help appreciated.
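-
-The claimed closed form is easy to sanity-check numerically before deriving it. A minimal Python sketch (assuming SciPy is available; the factor $e^{-\epsilon x}$ is an Abel-type regulator for the conditionally convergent tail, and the cutoff $40/\epsilon$ is an arbitrary truncation point):
-
-    import numpy as np
-    from scipy.integrate import quad
-    from scipy.special import j0
-
-    def bessel_sine_integral(a, b, eps):
-        # int_0^oo J_0(b x) sin(a x) e^{-eps x} dx, truncated where the
-        # regulator has already killed the oscillatory tail
-        f = lambda x: j0(b * x) * np.sin(a * x) * np.exp(-eps * x)
-        val, _ = quad(f, 0.0, 40.0 / eps, limit=2000)
-        return val
-
-    a, b = 3.0, 2.0
-    for eps in (0.2, 0.1, 0.05):
-        print(eps, bessel_sine_integral(a, b, eps))   # -> 1/sqrt(a^2-b^2) ~ 0.4472
-    print(bessel_sine_integral(2.0, 3.0, 0.05))       # a < b: -> 0
-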
-REPLY [8 votes]: We can generalize the integral by manipulating the Laplace transform of $J_{n}(bx)$, namely $$ \int_{0}^{\infty} J_{n}(bx) e^{-sx} \, dx = \frac{(\sqrt{s^{2}+b^{2}}-s)^{n}}{b^{n}\sqrt{s^{2}+b^{2}}}\ , \quad \ (n \in \mathbb{Z}_{\ge 0} \, , \text{Re}(s) >0 , \, b >0 )\tag{1}. $$
-(See this question for a derivation of $(1)$ using contour integration.)
-First let $s=p+ia$, where $p,a >0$.
-A slight modification of the answer here shows that $\int_{0}^{\infty} J_{n}(bx) e^{-(p+ia)x} \, dx $ converges uniformly for all $p \in [0, \infty)$.
-This allows us to conclude that $$\begin{align} \int_{0}^{\infty} J_{n}(bx) e^{-iax} \, dx &= \lim_{p \downarrow 0}\int_{0}^{\infty} J_{n}(bx) e^{-(p+ia)x} \, dx \\ &= \lim_{p \downarrow 0} \frac{\left(\sqrt{(-p+ia)^2+b^{2}}-p-ia\right)^{n}}{b^{n}\sqrt{(p+ia)^2+b^{2}}} \\ &= \frac{\left(\sqrt{b^{2}-a^{2}}-ia\right)^{n}}{b^{n}\sqrt{b^{2}-a^{2}}}. \end{align}$$
-So if $a < b$, $$ \begin{align} \int_{0}^{\infty} J_{n}(bx) e^{-iax} \, dx &= \frac{\left(\sqrt{b^{2}-a^{2}+a^{2}} e^{-i \arcsin \left(\frac{a}{b}\right)}\right)^{n}}{b^{n} \sqrt{b^{2}-a^{2}}} \\ &= \frac{e^{-in \arcsin \left(\frac{a}{b}\right)}}{\sqrt{b^{2}-a^{2}}} .\end{align}$$
-And if $a >b$, $$ \begin{align} \int_{0}^{\infty} J_{n}(bx) e^{-iax} \, dx &= \frac{\left(i\sqrt{a^{2}-b^{2}}-ia \right)^{n}}{b^{n}i \sqrt{a^{2}-b^{2}}} \\ &= \frac{-i e^{i \pi n /2} \left(\sqrt{a^{2}-b^{2}}-a \right)^{n}}{b^{n} \sqrt{a^{2}-b^{2}}}. \end{align}$$
-Therefore,
-$$\int_{0}^{\infty} J_{n}(bx) \sin(ax) \, dx = \begin{cases}
- \frac{\sin \left(n \arcsin \left(\frac{a}{b} \right) \right)}{\sqrt{b^{2}-a^{2}}} \, & \quad 0 < a < b \\
- \frac{\cos \left(\frac{\pi n}{2} \right) \left(\sqrt{a^{2}-b^{2}} -a \right)^{n}}{b^{n} \sqrt{a^{2}-b^{2}}} & \quad a > b >0
- \end{cases} $$<|endoftext|>
-TITLE: Proof for vectors involving cross and dot product
-QUESTION [7 upvotes]: Prove that for any two vectors $\mathbf a$ and $\mathbf b$, $\lvert \mathbf a \times \mathbf b \rvert^2 + (\mathbf a \cdot \mathbf b)^2 = \lvert \mathbf a \rvert^2 \, \lvert \mathbf b \rvert^2$.
-
-Can someone offer me advice on how to prove this in an easier way?
-So far, I'm solving it in a really complicated way, by labelling $\mathbf a$ as $(x,y,z)$ and $\mathbf b$ as $(a,b,c)$, then multiplying them out.
-So, for the $\lvert \mathbf a \times \mathbf b \lvert^2$ term, I found $(yc-bz)^2 + (za-xc)^2 + (xb-ya)^2$, and then $(\mathbf a \cdot \mathbf b)^2=(ax+yb+zc)^2$.
-Is there an easier way?
-
-REPLY [4 votes]: I prefer this purely vector method (using the Einstein summation convention) to the trig-based method in the answer by Eli Rose.
-$$
-\begin{align*}
-&\left| a\times b\right|^2+\left(a\cdot b\right)^2\\
-=\;&\varepsilon_{ijk}a_jb_k\varepsilon_{ilm}a_lb_m+a_jb_ja_kb_k\\
-=\;&a_jb_ka_lb_m(\delta_{jl}\delta_{km}-\delta_{jm}\delta_{kl})+a_jb_ja_kb_k\\
-=\;&a_ja_jb_kb_k-a_jb_ka_kb_j+a_jb_ja_kb_k\\
-=\;&a_ja_jb_kb_k=\left|a\right|^2\left|b\right|^2
-\end{align*}
-$$
-(Where $\varepsilon$ is the Levi-Civita symbol, $\delta$ is the Kronecker delta, and I've used the relation $\varepsilon_{ijk}\varepsilon_{ilm}=\delta_{jl}\delta_{km}-\delta_{jm}\delta_{kl}$)<|endoftext|>
-TITLE: How does contour integral work?
-QUESTION [10 upvotes]: It might be a vague question, but I can't help asking what is so powerful about contour integration that makes it possible to compute certain improper real integrals which are seemingly very difficult to compute by real-variable calculus methods.
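-
-A concrete instance of what the question describes: $\int_0^\infty \frac{\cos x}{1+x^2}\,dx$ has no elementary antiderivative, yet closing a semicircular contour in the upper half-plane and taking the residue of $e^{iz}/(1+z^2)$ at $z=i$ gives $\pi/(2e)$. A minimal numerical cross-check in Python (a sketch assuming SciPy; weight='cos' selects a Fourier-type quadrature rule):
-
-    import numpy as np
-    from scipy.integrate import quad
-
-    # Residue computation: 2*pi*i * Res_{z=i} e^{iz}/(1+z^2) = pi/e over the
-    # whole real line, so the half-line integral is pi/(2e).
-    val, err = quad(lambda x: 1.0 / (1.0 + x * x), 0.0, np.inf, weight='cos', wvar=1.0)
-    print(val, np.pi / (2 * np.e))   # both ~ 0.5778
-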
-REPLY [3 votes]: As mentioned in comments, the reason we can change/deform contours is due to a generalized version of Stokes' Theorem, which has an extremely intuitive explanation here on Wikipedia (see underlying principles); here is another one. The hand-wavy reason it works is because the curl of a complex function is $0$.
-Circles are really easy to parametrize. That's why we (generally) integrate about circles rather than, say, triangles. This is due to Euler's Identity.
-In addition, we're used to writing down power series for variables. However, in some cases, we sometimes need, and as you'll see, oftentimes desire negative powers. This motivates the discussion of Laurent Series.
-So when we integrate about a curve, $C(\theta)$, we multiply by a differential $dC(\theta)$ such that
-$$(1) \quad dC(\theta)=C(\theta+d\theta)-C(\theta)$$
-Multiplying and dividing the right hand side of $(1)$ by $d\theta$ yields,
-$$(2) \quad dC(\theta)= \cfrac{dC}{d\theta} \cdot d\theta$$
-So,
-$$(3) \quad \int_C f(C(\theta)) \ dC(\theta)=\int f(C(\theta)) \cdot \cfrac{dC}{d\theta} \ d\theta$$
-The meaning of $(3)$ is as follows. The Line Integral of a function $f(x)$ about a curve $C(\theta)$ gives the average value of $f \cdot \cfrac{dC}{|dC|}$ on that curve multiplied by the length of the curve.
-If you accept this claim, we can simply substitute the functions in question. However, that isn't really the problem, is it?
-First, let's address a major issue; why does $(3)$ average over $f \cdot \cfrac{dC}{|dC|}$?
-It'll take some imagination but a reasonable explanation can be given. Imagine $f(t)$ gives the speed of a walking person at a particular time. If it's positive, (s)he's walking to the right. If it's negative, (s)he's walking to the left. Now imagine that we were viewing this on a TV rather than in real life. If we sped up time (fast-forwarded), the person would appear to be walking faster. If we rewound, the person would switch directions. However, if we have a real time $p$ then we can know the time on the TV by making $t$ a function of the real time $p$. Now, if $dt$ is negative, we know that the time on the TV is going backward. However, if the person's speed is still positive, we know that the true speed, relative to the observer at time $p$, is actually negative. So if we want the average speed of the person, it's not enough to watch the TV, you also need to know how time is progressing. That's why $(3)$ can be negative. However, if time progresses normally relative to the observer, the ratio becomes unity.
-So, in short $(3)$ averages, but does so with respect to direction. Sadly, explaining imaginary time would take a lot of real time, so we'll have to settle for the above sentence rather than the time analogy.
-However, recall that multiplication by a complex number $v$ rotates the multiplicand $w$ by $arg(v)$ degrees/radians and scales by $|v|$ in the complex plane. For a more in-depth explanation and review see here.
-So, $\cfrac{dC(\theta)}{|dC(\theta)|}$ gives the normed differential, the direction of the differential, at an angle $\theta$.
-Imagine lines running from the origin to a point on $C$. Then imagine another line going from that point, tangent counter-clockwise along the curve. Move this new line and place its tail at the origin. So, you should see a line representing $C(\theta)$, then you should also see another line representing the direction of $dC$. Notice that for the complex circle, $90^\circ$ separates these lines.
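-This $90^\circ$ relation can be checked directly before it is used below; a tiny NumPy sketch (the finite-difference step $h$ is an arbitrary choice):
-
-    import numpy as np
-
-    # C(theta) = e^{i theta}; compare the finite difference of C with i*C(theta)
-    theta = np.linspace(0.0, 2.0 * np.pi, 7)[:-1]
-    h = 1e-6
-    C = np.exp(1j * theta)
-    dC = (np.exp(1j * (theta + h)) - C) / h
-    print(np.max(np.abs(dC - 1j * C)))   # ~1e-6, i.e. dC/dtheta = iC
-    print(np.angle(dC / C))              # ~pi/2 everywhere: dC points 90 degrees ahead of C
-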
-Since multiplication can rotate, the direction of $dC$ is $C$ rotated by $90^\circ$. Better yet,
-$(4) \quad dC=i \cdot C$
-$\Rightarrow arg(dC)=arg(C)+\pi/2$
-where we've switched to radians. It's important to realize that it's the difference between the directions, or angles, that's key. In symbols,
-$(5) \quad \Delta \theta=arg(dC)-arg(C)=\pi/2$
-which is constant. We know that,
-$(6) \quad arg(v \cdot w)=arg(w)+arg(v)$
-By extension,
-$(7) \quad arg \left(\cfrac{w}{v} \right)=arg(w)-arg(v)$
-Using the principle of correspondence yields,
-$(8) \quad arg \left(\cfrac{dC}{C} \right)=arg(dC)-arg(C)$
-Thus, if we want $(3)$ to be constant, in other words if we want the angle difference to be constant, we should have $f=\cfrac{1}{C(\theta)}$.
-What about $f=\cfrac{1}{C(\theta)^2}$? It rotates faster than $dC$, so the difference between angles won't be constant.
-What about $f=\cfrac{1}{C(\theta)^0}$? This time it rotates too slowly for the difference to be constant.
-Finally, putting everything together,
-$$(9) \quad \cfrac{1}{2 \cdot \pi} \cdot \int_{C_r} \cfrac{dC}{C-z_0}=i$$
-This says that the average angle between $dC$ and $\cfrac{1}{C-z_0}$ in the complex plane is $\pi/2$ radians or $90$ degrees, which in this case is best represented as $i$.<|endoftext|>
-TITLE: Volume of tetrahedron using cross and dot product
-QUESTION [14 upvotes]: Consider the tetrahedron in the image: Prove that the volume of the tetrahedron is given by $\frac16 |a \times b \cdot c|$.
-
-I know the volume of the tetrahedron is equal to one third of the base area times the height, and here, the height is $h$, and I'm considering the base area to be the area of the triangle $BCD$.
-So, what I have is:
-$$\begin{align}
-\text{base area} &= \frac12 \lvert a \times b \rvert \\
-\text{height $h$} &= \lvert c\rvert \cos \theta
-\end{align}$$
-So volume is $$V=\frac13\cdot\frac12 \lvert a \times b\rvert \cdot \lvert c\rvert \cos \theta = \frac16 \lvert a \times b\rvert \, \lvert c\rvert \cos \theta $$
-But I don't know how to arrive from this at $\frac16 |a \times b \cdot c|$.
-Please advise.
-
-REPLY [3 votes]: Hint: $\mathbf{a}\times\mathbf{b}$ "points straight up". It is therefore parallel to the line $h$ that you show. Therefore, it has the same angle with $c$ as $h$ does.<|endoftext|>
-TITLE: derivative of expected value with respect to parameter in both pdf and expectation
-QUESTION [5 upvotes]: Say $X \sim N(\mu, \sigma^2)$ with pdf $f(x, \mu)$. We are interested in expectation of $g(x)$. Then
-$$E[g(x, \mu)] = \int_{-\infty}^{\infty} g(x, \mu) f(x, \mu) dx$$
-Now I want the partial derivative of this. Why does the following hold?
-$$\frac{d}{d\mu}E[g(x, \mu)]= \int_{-\infty}^{\infty} \frac{dg(x, \mu)}{d\mu} f(x, \mu) dx = \int_{-\infty}^{\infty} g(x, \mu) \frac{df(x)}{d\mu} dx$$
-I mean, why is the chain rule not applied inside the integral, with only one function being partially differentiated?
-It is easy to understand the rationale of it, but I am struggling with the proof...
-
-REPLY [9 votes]: This question has such messy notation and mixes up so many different things that it is difficult to
-figure out what exactly is being asked.
-Let $g(x,\mu)$ denote an ordinary two-variable real-valued function of
-two real variables. We will assume that the function is integrable with
-respect to both variables. Let $X$ denote a normal random variable
-with mean $\mu$ and variance $\sigma^2$.
-Thus, $X$ has a probability
-density function given by
-$$f_X(t) = f(t,\mu) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{1}{2}
-\left(\frac{t-\mu}{\sigma}\right)^2\right),
-~~-\infty < t < \infty.\tag{1}$$
-Now, there is nothing random about the function $g(x,\mu)$ and so its
-expected value is just $g(x,\mu)$ itself. Indeed the same notion,
-
-the expected value of a constant is the constant itself
-
-is a fundamental notion of probability theory (as well as of
-real life).
-More formally,
-$$E[g(x,\mu)] = \int_{-\infty}^\infty g(x,\mu)f_X(t)\,\mathrm dt
-= g(x,\mu)\int_{-\infty}^\infty f(t,\mu)\,\mathrm dt
-= g(x,\mu). \tag{2}$$
-and it is worth noting in passing that we would have obtained the
-same answer even if we had used a different density function for
-$X$, or indeed a discrete mass function, etc. From $(2)$, we have
-that
-$$\frac{\partial}{\partial\mu}E[g(x,\mu)]
-= \frac{\partial}{\partial\mu}g(x,\mu)\tag{3}$$
-and that's all there is to it. You could, if you like,
-write $(3)$ as
-$$\frac{\partial}{\partial\mu}E[g(x,\mu)]
-= \int_{-\infty}^{\infty}\frac{\partial}{\partial\mu}g(x,\mu)
-f(t,\mu)\,\mathrm dt\tag{4}$$
-which sort of looks like what you want to prove, but actually
-isn't at all the same.
-
-"But, but, but,..." you splutter indignantly, "All this is pure hooey.
-Everybody, with the possible exception of Old Harry And His Old Aunt, knows that $(1)$ is incorrect: the density of $X$ is $f_X(x)$ and
-not $f_X(t)$ as you have written in $(1)$. Thus $(2)$ is not at all what
-I know $E[g(x,\mu)]$ to be. It's got to be
-$$E[g(x,\mu)] = \int_{-\infty}^\infty g(x,\mu)f(x,\mu)\,\mathrm dx
-\tag{5}$$
-the way I wrote it in my question."
-All right, have it your way. If $Y = g(X,\mu)$ is a random
-variable that is
-a function of the random variable $X$, then the
-law of the unconscious statistician gives us that
-$$E[g(X,\mu)] = \int_{-\infty}^\infty g(x,\mu)f(x,\mu)\,\mathrm dx.
-\tag{6}$$
-If you cannot see any difference between $(5)$ and $(6)$,
-compare the 5th character in each equation; that is a fundamental
-difference.
-The equation $(6)$ can be differentiated on both sides
-with respect to $\mu$ giving
-\begin{align}
-\frac{\partial}{\partial\mu}E[g(X,\mu)] &= \frac{\partial}{\partial\mu}\int_{-\infty}^\infty g(x,\mu)f(x,\mu)\,\mathrm dx\\
-&=\int_{-\infty}^\infty \left(f(x,\mu)\frac{\partial}{\partial\mu}g(x,\mu)+g(x,\mu)\frac{\partial}{\partial\mu}f(x,\mu)\right)\,\mathrm dx\\
-&= \int_{-\infty}^\infty f(x,\mu)\frac{\partial}{\partial\mu}g(x,\mu)\,\mathrm dx
-+ \int_{-\infty}^\infty g(x,\mu)\frac{\partial}{\partial\mu}f(x,\mu)\,\mathrm dx\tag{7}
-\end{align}
-which more nearly resembles what you want to prove if you replace
-that second $=$ sign in your question with a $+$ sign.
-But note that
-$$\frac{\partial}{\partial\mu}f(x,\mu)
-= \frac{x-\mu}{\sigma^2}f(x,\mu)$$
-and so if $g(x,\mu)$ is an even function of $(x-\mu)$, then
-the integrand in the second integral in $(7)$ is an odd function
-of $(x-\mu)$, the value of that integral is $0$ and so
-$$\frac{\partial}{\partial\mu}E[g(X,\mu)]
-= \int_{-\infty}^\infty f(x,\mu)\frac{\partial}{\partial\mu}g(x,\mu)\,\mathrm dx$$
-just the way you want it.<|endoftext|>
-TITLE: Vector Field on the $n$-dimensional torus
-QUESTION [5 upvotes]: Give examples of vector fields on the $n$-dimensional torus.
-What I have done:
-on $S^1$ it's easy to give one example with perpendicular vectors of length $1$ rotating in one direction, and another example in the other direction. How many different vector fields are there on $S^1$?
-I know they are $X = f \cdot \frac{d}{dx}$, thus they should form an infinite-dimensional vector space over the real numbers.
-on $T^2$ I think we can draw the square and draw arrows in just 1 direction to get a vector field, for example all the vertical arrows of length $1$. Again, how many different vector fields are there on $T^2$?
-on $T^n$ I think we can use that $T^n = S^1 \times \cdots \times S^1$ for $n$ times, maybe something like $X = f_1 \cdot \frac{d}{d\theta_1} + \cdots + f_n \cdot \frac{d}{d\theta_n}$, but I don't know!
-
-REPLY [4 votes]: It depends on what you mean by "how many". If you are really counting each one as different, then there is always an infinite number of vector fields.
-However, each manifold you mention is parallelizable. It follows that they must have $n$ vector fields which are linearly independent at every point of the manifold. Note however that your question can't be properly addressed in general. For example, I don't know how one could make sense (in a non-trivial way) of the question "How many vector fields are there on $S^2$?".
-$S^1$ is clearly parallelizable (you can just stack the tangent spaces vertically on the circle), and the product of parallelizable manifolds is parallelizable. Hence, $T^n$ is parallelizable.
-With respect to your specific questions: Yes, that gives a vector field in $S^1$. Note that in the "parallelization" this is just the "constant" unit vector upwards, or downwards, depending on orientation.
-The vector field which you propose in $T^2$ is another example of a vector field. Note that in the standard embedding of $T^2 \hookrightarrow \mathbb{R}^3$, this vector field is a "flow of water on the surface of the pipe" given by the torus.<|endoftext|>
-TITLE: Can a Prime Ideal be equal to the parent Ring?
-QUESTION [5 upvotes]: If $I$ is a prime ideal of $R$, can $I=R$?
-
-REPLY [6 votes]: No. By definition, a prime ideal is a proper ideal, meaning that it is not the entire ring. This is mostly because there are theorems about prime ideals that don't work for the entire ring, and so it's easier to assert that prime ideals are not the whole ring than to put a "Let $I$ be a prime ideal of $R$ which is not equal to the whole ring" in many theorems and proofs.<|endoftext|>
-TITLE: Do two permutations in $S_n$ generate a transitive subgroup of $S_n$?
-QUESTION [5 upvotes]: On page 139 of Flajolet and Sedgewick's Analytic Combinatorics we read:
-"To two permutations $\sigma,\tau$ of the same size, associate a graph $G_{\sigma,\tau}$ whose set of vertices is $V=[1\ldots n],$ if $n = |\sigma| = |\tau|,$ and set of edges is formed of all the pairs $(x,\sigma(x)), (x,\tau(x)),$ for $x\in V.$"
-The claim is then made that the probability that such a random graph is connected is
-$$\frac1{n!}[x^n]\log\left(\sum_{n\geq0} n!x^n\right).$$
-This cannot be correct. (I think the factor of $1/n!$ should be $1/n!^2$?)
-I understand that the number of such graphs that are connected is the number of ordered pairs in $S_n$ that would generate a transitive group.
-In Sloane's OEIS A122949 we see a count of the number of ordered pairs of $n$-permutations that generate a transitive subgroup. The exponential generating function (egf) is $\log(\sum_{n\geq0} n!x^n).$
-I want to derive (via the symbolic method) an egf for the number of size $2$ (and then generally size $k$) subsets of $S_n$ that generate a transitive group. Cf. A266910.
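-As a quick stand-in for the Mathematica counts mentioned just below, a brute-force Python sketch (assuming SymPy's combinatorics module) reproduces the ordered-pair counts of A122949 and compares them with $n!\,[x^n]\log\sum_{n\ge0}n!x^n$ for small $n$:
-
-    from math import factorial
-    from itertools import permutations, product
-    import sympy as sp
-    from sympy.combinatorics import Permutation, PermutationGroup
-
-    def ordered_pairs(n):
-        # count ordered pairs (sigma, tau) in S_n x S_n generating a transitive subgroup
-        perms = [Permutation(list(p)) for p in permutations(range(n))]
-        return sum(1 for s, t in product(perms, repeat=2)
-                   if PermutationGroup([s, t]).is_transitive())
-
-    x = sp.symbols('x')
-    series = sp.log(sum(factorial(k) * x**k for k in range(7))).series(x, 0, 7).removeO()
-    for n in (2, 3, 4):
-        print(n, ordered_pairs(n), factorial(n) * series.coeff(x, n))  # 3, 26, 426
-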
-By brute force I managed to get Mathematica to count the number of such subsets of size $3$ in $S_n$ for $n = 3,4,5.$ They are $20,$ $1932,$ and $269040$ respectively.
-My specific questions are: Do you agree that the statement made in the book is an error?
-Can I utilize the egf for the connected graph objects (ordered pairs in $S_n$ that generate a transitive group) to derive an egf for size $k$ subsets of $S_n$ that generate a transitive group?
-Can GAP verify the three terms that I have computed above with Mathematica?
-
-REPLY [5 votes]: The formula given in the book is actually correct. It is easy to calculate "by hand" that the coefficient of $x^2$ in the formal power series of $\log \sum_{n=0}^\infty n! x^n$ is $\frac32$. The probability that two uniformly random elements of $S_2$ generate a transitive subgroup is $\frac34 = \frac{1}{2!} \frac{3}{2}$, whereas your modification would give $\frac{1}{2!^2}\frac{3}{2} = \frac{3}{8}$.
-The original reference for the result itself is Dixon, John D., The probability of generating the symmetric group, Math. Z. 110, 1969, 199–205. The proof (of Theorem 2 in the paper) is relatively short, uses standard mathematical notation, and arrives at the formal identity
-$$
-\sum_{n=0}^\infty n! X^n = \exp \sum_{i=1}^\infty i! t_i X^i
-$$
-where $t_i$ is the probability that two uniformly random permutations from $S_i$ generate a transitive subgroup. This is equivalent to the formula given in the book.<|endoftext|>
-TITLE: Online visualization tool for planes (spans in linear algebra)
-QUESTION [11 upvotes]: I would like to visualize planes in 3D as I start learning linear algebra, to build a solid foundation. Surprisingly, I have been unable to find an online tool (website/web app) to visualize planes in 3 dimensions. For example, I'd like to be able to enter 3 points and see the plane.
-Does something like this exist?
-
-REPLY [2 votes]: You should check out CPM_3D_Plotter.
-It runs in the browser, therefore you don't have to download or install any programs.
-Because it is browser-based, it is also platform independent.
-The user-interface is very clean and simple to use.<|endoftext|>
-TITLE: Placing the integers $\{1,2,\ldots,n\}$ on a circle (for $n>1$) in some special order
-QUESTION [33 upvotes]: For which integer $n>1$ can we place the integers $\{1,2,\ldots,n\}$ on a circle (say the boundary of $S^1$) in some order such that for each $s \in \{1,2,\ldots,\dfrac {n(n+1)}{2}\}$, there exists a connected subset of the circle on which the sum of the integers placed is exactly $s$?
-
-REPLY [7 votes]: I think I have got an answer: the arrangement is possible for any $n\ge 1$.
-Proof:
-$\underline {case 1}$, $n$ is even:
-$n$ is even, so pair the numbers $1,2,...,n$ into $n/2$ pairs with the two numbers in each pair adding to $n+1$, i.e. the pairing is like $\{1,n\} ; \{2,n-1\} ; ...;\{\dfrac n2 , \dfrac n2+1\}$. We claim that any circular arrangement of the numbers $1,2,...,n$ that keeps the numbers in each pair adjacent to one another gives all possible sum values from $1$ to $\dfrac {n(n+1)}2$. To see this, consider any $s$ with $1\le s \le \dfrac {n(n+1)}2$; by the division algorithm, there are integers $q,r$ such that $s=q(n+1)+r$, where $0\le r\le n$ and $0 \le q \le n/2$.
-If $r=0$ then we choose any $q$ consecutive pairs; as the numbers in each pair are adjacent, this gives a connected subset with sum (since each pair has sum $n+1$) $q(n+1)=q(n+1)+r(=0)=s$. If $r>0$ (and so $q < n/2$), we choose the connected subset to begin with the number $r$ and then take $q$ consecutive pairs clockwise or anticlockwise as appropriate, obtaining a sum equal to $r+q(n+1)=s$.
-[Note: In $s=(n+1)q+r$, the inequality $0\le r \le n$ comes from the division algorithm but the bounds $0\le q \le n/2$ do not come directly from the division algorithm; they follow as below: $q(n+1)=s-r\le s \le \dfrac {n(n+1)}2$ so $q \le n/2$, and $q(n+1)=s-r\ge1-r\ge1-n=-(n-1)>-(n+1)$, so that $q>-1$; then since $q$ is an integer, we get $q \ge 0$.]
-$\underline {case 2}$, $n$ is odd:
-$n$ is odd, then we form $\dfrac {n+1}2$ pairs each with sum $n$, thinking of the singleton $n$ as a degenerate pair; i.e. $\{1,n-1\};\{2,n-2\};...;\{\dfrac {n-1}2 , \dfrac {n+1}2\};\{n\}$ are the required pairs. Here also, any arrangement that keeps the numbers in each pair adjacent to one another gives all possible sums, the justification being the same as in the even case.<|endoftext|>
-TITLE: Sufficient condition to show $f$ is monotonically increasing in some neighborhood
-QUESTION [5 upvotes]: I am curious if the following statement holds.
-Let $f:[a,b] \rightarrow \mathbb{R}$ be a continuous function differentiable on the open interval $(a,b)$. Then if $f'(c)>0$ for some $c \in (a,b)$, there exists a neighbourhood of $c$ in which $f$ is monotonically increasing.
-An ideal answer to this question would include either a proof or a counterexample.
-
-REPLY [6 votes]: As noted in other answers the result holds if $f\in C^1$. Consider the following counterexample:
-$$f(x)=\begin{cases}
-x+2x^2 \sin(\frac{1}{x})\quad \text{ if } x\neq 0 \\
-0\qquad \quad \qquad \quad \quad \text{ if } x=0
-\end{cases}$$
-with derivative
-$$f^\prime(x)=\begin{cases}
-1+4x \sin(\frac{1}{x})-2\cos(\frac{1}{x}) \quad \text{ if } x\neq 0 \\
-1\quad \qquad\qquad\qquad\quad \quad\qquad \text{ if } x=0
-\end{cases}$$
-Note that $f^\prime(0)>0$ but $f^\prime$ takes negative values (and positive values) in every neighbourhood of $0$.<|endoftext|>
-TITLE: The Archimedes Cattle Problem and how to find $x^2-dp^2y^2=1$?
-QUESTION [5 upvotes]: This was inspired by the Archimedes Cattle Problem. A crucial step is to solve the Pell equation,
-$$u^2-(609)(7766)v^2=1\tag1$$
-whose fundamental solution is,
-$$\big(300426607914281713365\sqrt{609} + 84129507677858393258\sqrt{7766}\big)^2=u+\sqrt{(609)(7766)}\,v$$
-Of course, there are an infinite number of $u,v$ that solve $(1)$. However, the complete solution to the problem requires that $v=9314y$, or the Pell equation,
-$$x^2-(609)(7766)(9314^2)y^2=1\tag2$$
-which has fundamental solution,
-$$\big(u+\sqrt{(609)(7766)}\,v\big)^{\color{brown}{2329}} = x+\sqrt{(609)(7766)}\,\color{blue}{9314}\,y$$
-Note that $4\times\color{brown}{2329} - 2 = \color{blue}{9314}$.
-
-Questions:
-
-In general, given the fundamental solution to,
-$$u^2-dv^2=1$$
-how do we find an integer $n$ such that,
-$$(u+\sqrt{d}\,v)^n = x+\sqrt{d}\,y$$
-and $y$ is integrally divisible by some desired integer $p$? In other words, can we express $n$ as a function in terms of $p$? (If the general case is too complicated, then assume $d,p$ to be primes.)
-From a limited computer search, I observed that the minimum $n\leq p$. Is this indeed true?
-
-Example:
-
-Given the expansion of,
-$$\big(649 + 180\sqrt{13}\big)^n = x+y\sqrt{13}$$
-then the smallest $n$ such that $\displaystyle\frac{y}{p}$ is an integer is given in the following table,
-$$\begin{array}{|c|c|c|c|c|c|}
-\hline
-p&n& & &p&n\\
-\hline
-\color{green}2&1& & &\color{blue}{19}&10\\
-\color{green}3&1& & &\color{red}{23}&11\\
-\color{green}5&1& & &\color{red}{29}&7\\
-\color{blue}7&4& & &\color{blue}{31}&16\\
-\color{blue}{11}&2& & &\color{blue}{37}&19\\
-\color{green}{13}&13& & &\color{blue}{41}&7\\
-\color{red}{17}&4& & &\color{red}{43}&7\\
-\hline
-\end{array}$$
-For example, let $p=\color{blue}7$, so $\big(649 + 180\sqrt{13}\big)^4 = 1419278889601 + \color{blue}7\times56233877040\sqrt{13}$.
-Edit: Per Batominovski's answer, primes in either blue, green, red have $p+1,\;p,\;p-1$ divisible by $n$, respectively.
-
-REPLY [2 votes]: Define $u_r+\sqrt{d}v_r:=\left(u+\sqrt{d}v\right)^r$, where $u_r,v_r\in\mathbb{Z}$ and $r\in\mathbb{N}_0$. Then, you can see that $v_{r+2}+av_{r+1}+bv_r=0$ for some $a,b\in\mathbb{Z}$ and for all $r\in\mathbb{N}_0$ (with $v_0=0$ and $v_1=v$). Indeed, $a=-2u$ and $b=1$. If $p$ is a prime natural number, then either $t^2+at+b\in\mathbb{F}_p[t]$ is reducible or $t^2+at+b$ factors into linear terms over $\mathbb{F}_{p^2}$.
-
-If $t^2+at+b$ with $b\neq 0$ in $\mathbb{F}_p$ is reducible over $\mathbb{F}_p$ with simple roots, then it follows easily that $v_{p-1}=v_0=0$ in $\mathbb{F}_p$, so we can take $n=p-1$. The minimum of such $n$'s is a divisor of $p-1$.
-If $t^2+at+b$ with $b\neq 0$ in $\mathbb{F}_p$ is a perfect square in $\mathbb{F}_p[t]$, then we have $v_p=v_0=0$, so we can take $n=p$. The minimum of such $n$'s is either $1$ or $p$.
-Now, suppose that $t^2+at+b$ is irreducible over $\mathbb{F}_p$. Then, the roots of $t^2+at+b$ in $\mathbb{F}_{p^2}$ are $\alpha,\beta\in\mathbb{F}_{p^2}$ satisfying $\alpha^p=\beta$ and $\beta^p=\alpha$. Since $v_r=\kappa\left(\alpha^r-\beta^r\right)$ for some $\kappa\in\mathbb{F}_{p^2}$ and for all $r\in\mathbb{N}_0$, it follows that $v_{p+1}=v_0=0$, so we may take $n=p+1$. The minimum of such $n$'s must divide $p+1$.
-
-Suppose that $p$ is an odd prime. Of course, $n$ can be taken to be $\frac{p-1}{2}$ in Case 1 and $\frac{p+1}{2}$ in Case 3 (or the smallest of such $n$'s must divide $\frac{p-1}{2}$ in Case 1 or $\frac{p+1}{2}$ in Case 3). If $\alpha$ and $\beta$ are the roots of $t^2+at+b$ in $\mathbb{F}_p$ or $\mathbb{F}_{p^2}$, then $\beta=\frac{1}{\alpha}$ (this argument works only when $b=1$ in $\mathbb{F}_p$, which holds in your problem). Note that either $\alpha^{p-1}=1$ (if $\alpha\in\mathbb{F}_p$) or $\alpha^{p+1}=1$ (if $\alpha\in\mathbb{F}_{p^2}\setminus\mathbb{F}_p$, where we have $\alpha^p=\beta=\frac{1}{\alpha}$). Since $v_r=\kappa\left(\alpha^r-\beta^r\right)=\frac{\kappa}{\alpha^r}\left(\alpha^{2r}-1\right)$ for some $\kappa$ in $\mathbb{F}_p$ or $\mathbb{F}_{p^2}$ and for all $r\in\mathbb{N}_0$, we conclude that $v_{\frac{p-1}{2}}=v_0=0$ in Case 1 and $v_{\frac{p+1}{2}}=v_0=0$ in Case 3.
-In this particular problem, $a=0$ and $b=1$ in $\mathbb{F}_2$, so $p=2$ always falls into Case 2. More generally, a prime $p$ falls into Case 2 if and only if $p$ divides $2dv$. If $p$ divides $v$, then $n$ can be taken to be $1$. If $p$ divides $2d$ but not $v$, then $n=p$ is the smallest of such $n$'s.
-P.S.: While the case $b=0$ in $\mathbb{F}_p$ doesn't happen in your particular problem, it is worth noting that there may not exist $n\in\mathbb{N}$ such that $v_n=v_0=0$ in $\mathbb{F}_p$ if $p$ divides $b$.
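-The table above can be reproduced mechanically; a short Python sketch in exact integer arithmetic (the search cap max_n=200 is an arbitrary choice):
-
-    def smallest_n(p, max_n=200):
-        # iterate x + y*sqrt(13) -> (x + y*sqrt(13)) * (649 + 180*sqrt(13))
-        x, y = 649, 180
-        for n in range(1, max_n + 1):
-            if y % p == 0:
-                return n
-            x, y = 649 * x + 13 * 180 * y, 180 * x + 649 * y
-        return None
-
-    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43):
-        print(p, smallest_n(p))   # matches the table: 1, 1, 1, 4, 2, 13, 4, 10, 11, 7, 16, 19, 7, 7
-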
-It is also not difficult to show that, if $p=p_1^{k_1}p_2^{k_2}\cdots p_l^{k_l}$, where $p_1,p_2,\ldots,p_l$ are pairwise distinct prime natural numbers and $k_1,k_2,\ldots,k_l$ are nonnegative integers, then the smallest $n$ satisfies $n\leq p$. In fact, we have a stronger bound for this $n$: $$n\leq \prod_{p_i \mid 2dv}\,p_i^{k_i} \,\prod_{p_i \nmid 2dv}\,\left(\frac{p_i+1}{2}\right)p_i^{k_i-1}\,.$$
-We have an even better bound: the smallest $n$ must divide
-$$\text{lcm}\left(L_1,L_2,L_3\right) \mid \prod_{p_i \mid 2dv}\,p_i^{k_i} \,\prod_{p_i \text{ in Case 1}}\,\left(\frac{p_i-1}{2}\right)p_i^{k_i-1}\,\,\prod_{p_i \text{ in Case 3}}\,\left(\frac{p_i+1}{2}\right)p_i^{k_i-1}\,,$$
-where $L_1$ is the least common multiple of $\left(\frac{p_i-1}{2}\right)p_i^{k_i-1}$ for $p_i$ in Case 1, $L_2:=\prod_{p_i \mid 2dv}\,p_i^{k_i}$, and $L_3$ is the least common multiple of $\left(\frac{p_i+1}{2}\right)p_i^{k_i-1}$ for $p_i$ in Case 3. Yet still a better bound exists: if $n_i$ is the smallest positive integer such that $p_i$ divides $v_{n_i}$, then the minimum value of $n$ is a factor of $$\text{lcm}\left(n_1p_1^{k_1-1},n_2p_2^{k_2-1},\ldots,n_lp_l^{k_l-1}\right)\mid\prod_{i=1}^l\,n_ip_i^{k_i-1}\,.$$
-For example, with $d:=13$ and $p:=7^2\cdot 29$, we can take $n$ to be $\text{lcm}(4\cdot7,7)=28$ (which is also the smallest value of all such $n$'s).<|endoftext|>
-TITLE: Measurable projection theorem proof reference
-QUESTION [6 upvotes]: I'm beginning to study stochastic processes, and currently focusing on stopping times and hitting times. The textbook I'm using is "Stochastic Integration Theory" by Medvegyev (and Karatzas & Shreve as a second reference), and in some of the theorems the following measurable projection theorem is used.
-
-If the space $(\Omega,\mathcal{A},\mathbb{P})$ is complete and $$U \in \mathcal{B}(\mathbb{R}^n) \otimes \mathcal{A},$$ then $$\text{proj}_\Omega (U) := \{x: \exists t\text{ such that }(t,x) \in U\} \in \mathcal{A}.$$
-
-On the author's homepage there is a note containing a proof as well as many definitions such as Suslin (also called analytic) sets and auxiliary lemmas, however I find the material to be lacking in rigor and it is missing some assumptions. Therefore I am looking for a textbook in which the measurable projection is covered in detail. I've looked at the textbooks by Kechris and Srivastava without finding what I was looking for.
-
-REPLY [4 votes]: Theorem 13 in Chapter III of the first volume of Dellacherie & Meyer (cited by @zhoraster; see the foot of page 43 in the English translation) tells you that the projection onto $\Omega$ of $U$ is $\mathcal A$-analytic. As such, this projection is $\mathcal A$-measurable, because $(\Omega,\mathcal A,\Bbb P)$ is complete; see no. III-33 at the top of p. 58 of D. & M.<|endoftext|>
-TITLE: Is it possible that the product of two non-affine schemes becomes affine?
-QUESTION [13 upvotes]: Question: Is there an example of some $X$ and $Y$ non-affine schemes, with $X \times_{\operatorname{Spec} \mathbb{Z}} Y$ affine?
-Updated question (after Eric Wofsey's example): Is there an example of some $X$ and $Y$ non-affine $k$-schemes, with $X \times_{\operatorname{Spec} k} Y$ affine?
-My long and rambling thoughts about this question:
-I can think of examples where we allow more general fiber products: we can intersect non-affine subschemes of projective space to get an affine scheme (for example, take an affine plane minus the origin embedded in $\mathbb{P}^2$ and intersect it with a projective line that doesn't meet that origin). So I mean in particular the product over $\operatorname{Spec} \mathbb{Z}$ (or $\operatorname{Spec} k$ for varieties).
-I also know of a near example where the product is allowed to be "twisted" as in a fibration / locally a product, though in this case the fibers are affine, namely: $GL_2(\mathbb{C}) \to \mathbb{CP}^1$. (Here the map is the one induced by the natural action. The fibers are the invertible upper triangular matrices, etc.) (This example really can't be refined to an answer to my original question, for the reasons of the next paragraph.)
-Another vague conclusion: Suppose that there is some map $X \to X \times_{\mathbb{Z}} Y$ which is induced by the identity on $X$ and some map $X \to Y$ (let's say we are working with $k$-schemes and $Y$ has a $k$ point... or some other condition to guarantee that this map actually exists) which is a closed embedding - for example when $Y$ is separated (I think $Y$ has to be separated for this to be a closed embedding, but maybe I'm overthinking it: if $Y$ is separated, then $X \times_{\mathbb{Z}} Y \to X$ is separated, and $X \to X \times Y \to X$ is just the identity, so it is a closed embedding and so the Cancellation Theorem - 10.1.19 in Ravi - applies).
-Then if the product is affine it must necessarily be the case that $X$ (and $Y$) have "a lot" of functions on them (relative to "the size" of $X$ and $Y$ - I'm drawing on the intuition that the only global sections on projective varieties are $k$). So they can't be, for example, projective varieties.
-From this I want to say that an example (if it exists) is likely to involve silly things like $\mathbb{A}^2$ (or I think more generally some $\dim \geq 2$ Noetherian normal affine scheme minus some nonempty codimension $\geq 2$ set?). Or maybe part of the issue here is that my repertoire of non-affine schemes is limited to basically just quasi-projective varieties (and amusing pathologies built out of $\operatorname{Spec} k[x_0, x_1, \ldots]$)
-Those are the thoughts I've had. I don't know a good way to measure non-affineness in general... Except that the higher cohomology of quasi-coherent sheaves should vanish... but this is not something I really "know" yet, so I feel uncomfortable invoking it for now. I found via Google that this is an iff criterion for affineness under nice conditions: https://mathoverflow.net/questions/153523/does-vanishing-of-cohomology-of-locally-free-sheaves-imply-affiness-of-scheme
-So maybe some combination of Künneth-formula-like computations would suffice to prove that this cannot happen (I am secretly thinking of the insight of this math overflow post: https://mathoverflow.net/questions/60375/is-mathbb-r3-the-square-of-some-topological-space ).
-Thank you for your patience!!!
-
-REPLY [7 votes]: A kind of trivial example which is similar to your intersection example for general fiber products: if $K$ and $L$ are fields of different characteristic and $X$ is a non-affine scheme over $K$ and $Y$ is a non-affine scheme over $L$, then $X\times_\mathbb{Z} Y=\emptyset$ is affine.
-Over a field $k$, however, this cannot happen.
-More generally, if $X$ and $Y$ are schemes over $k$ such that $Y$ is nonempty and $X\times Y$ is affine, then $X$ is affine. To prove this, choose a point $y\in Y$ and consider $X\times \operatorname{Spec} k(y)$. Since the inclusion $\operatorname{Spec} k(y)\to Y$ is an affine morphism, so is $X\times \operatorname{Spec} k(y)\to X\times Y$, and so $X\times \operatorname{Spec} k(y)$ is an affine scheme. Thus the projection $X\times \operatorname{Spec} k(y)\to\operatorname{Spec} k(y)$ is an affine morphism. Since affineness of morphisms is local on the base in the fpqc topology (see here, for instance), it follows that $X\to\operatorname{Spec} k$ is affine, and hence $X$ is an affine scheme. (The choice of a point $y$ here is just because in general, I don't think you can conclude that the projection $X\times Y\to Y$ is an affine morphism from the fact that $X\times Y$ is affine without some hypothesis on $Y$.)<|endoftext|>
-TITLE: Empty interior, equivalent definitions from Munkres.
-QUESTION [5 upvotes]: The Munkres book states the following definition:
-
-Recall that if $A$ is a subset of a space $X$ the interior of $A$ is
 defined as the union of all open sets of $X$ that are contained in
 $A$. To say that $A$ has empty interior is to say then that $A$
 contains no open set of $X$ other than the empty set. Equivalently,
 $A$ has empty interior if every point of $A$ is a limit point of the
 complement of $A$, that is, if the complement of $A$ is dense in $X$.
-
-In this definition I don't understand the double implication
-$A$ has empty interior $\Leftrightarrow$ every point of $A$ is a limit point of the complement of $A$ $\Leftrightarrow$ the complement of $A$ is dense in $X$.
-I tried to formally prove the equivalence using the definition of dense set, interior, etc., but I just got confused.
-Could you help me to understand the equivalence reported in the definition?
-
-REPLY [3 votes]: The closure of $X-A$ is equal to $\cap F$, where $F$ is the family of all closed sets that have $X-A$ as a subset. The set of complements of members of $F$ is the family $G$ of all open subsets of $A.$ Hence $$Int (A)=\cup G=X-\cap F=X- \overline {X-A}.$$ $$\text {Therefore }\quad Int (A)=\phi \iff \overline {X-A}=X.$$<|endoftext|>
-TITLE: How to calculate the expected value of the Powerball Lottery?
-QUESTION [5 upvotes]: The current Powerball jackpot is at roughly 675 million USD and the chance of winning with one random ticket is 1 in 292.2 million. Each ticket costs 2 USD.
-From a general perspective, it appears that the Powerball lottery tickets have a positive expected value but we haven't included all the other factors yet.
-If we decide to take the entire winnings at once instead of getting paid over 30 years, we should expect to receive 428 million before taxes. Then we have to include both state and (25%) federal tax which shaves off at least another 100 million for the United States.
-Then we would also have to include the possibility of the Powerball having multiple winners. We could estimate how many people will purchase tickets for the next drawing based on how many purchased for the last one.
-Lastly, we would also have to factor in the cost of maintenance for this large amount of money.
-How can we go about calculating our expected value when purchasing a ticket in hopes of hitting the jackpot? Do these tickets truly give us a positive return at their current price?
-Source for Powerball statistics
-
-REPLY [5 votes]: This is a tricky problem.
-The lottery people would love for you to think of the problem simplistically so you arrive at the wrong answer. However, a careful analysis shows why the lottery people know they will make tons of money even from a huge payout.
-Let $p$ be the probability of winning, $C$ be the cost of a ticket, and $V$ be the value of the winnings. Then the expected value $E$ of a ticket, assuming one winner, would be approximately
-$$E = p V - (1-p) C$$
-However, this is really not correct. In reality, there will be more than one winner. Or none. Who knows? But when there is more than one winner, the value of the winnings to each person is reduced, as the winnings are split evenly among the winners.
-It may be assumed that the number of winners follows a binomial distribution. Assume a population of $N$ possible tickets. The probability of $k$ tickets, including yours, being winners is
-$$\binom{N-1}{k-1} p^k (1-p)^{N-k} $$
-where $k=1$ corresponds to the ideal case in which you alone are the winner, and $k$ may vary between $1$ and $N$, so that the actual expected value of your winning ticket is equal to
-$$E = \sum_{k=1}^{N} \binom{N-1}{k-1} p^{k} (1-p)^{N-k} \frac{V}{k} - (1-p) C $$
-which may be simplified to
-$$E = \frac{1-(1-p)^N}{N} V - (1-p) C $$
-Note this takes into account the number of tickets purchased and will reduce the expected value of the ticket from the simpler assumption.
-Given the numbers: $p$ being $1$ in $292.2$ million, $V=\$700$ million, and $C=\$2$, this distinction is crucial. The expected value for the simple case $N=1$ is positive (about $ \$0.396 $); people who understand the expected value at this level may be induced to buy a ticket, thinking that each ticket has positive value. However, one may show that, when there are more than about $108.8$ million tickets sold, the expected value goes negative. My guess is that the number of tickets sold will certainly exceed this number and that the lottery people will make a profit.<|endoftext|>
-TITLE: When is the image of a proper map closed?
-QUESTION [20 upvotes]: A map is called proper if the pre-image of a compact set is again compact.
-In Differential Forms in Algebraic Topology by Bott and Tu, they remark that the image of a proper map $f: \mathbb{R}^n \to \mathbb R^m$ is closed, adding the comment "(why?)".
-I can think of a simple proof in this case for continuous $f$:
-
-If the image is not closed, there is a point $p$ that does not belong to it and a sequence $p_n \in f(\mathbb R^n)$ with $p_n \to p$. Since $f$ is proper, $f^{-1}(\overline {B_\delta(p)})$ is compact for any $\delta$. Let $x_n$ be any point in $f^{-1}(p_n)$ and wlog $x_n \in f^{-1}(\overline{B_\delta(p)})$. Since in $\mathbb{R}^n$ compact and sequentially compact are equivalent, there exists a convergent subsequence $x_{n_k}$ of $x_n$. From continuity of $f$: $f(x_{n_k}) \to f(x)$ for some $x$. But $f(x_{n_k})=p_{n_k} \to p$, which is not supposed to be in the image, and this gives a contradiction.
-
-My problem is that this proof is too specific to $\mathbb{R}^n$ and uses arguments from basic analysis rather than general topology.
-So the question is: for what spaces does it hold that the image of a proper map is closed, how does the proof work, and is it necessary to presuppose continuity?
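-
-Before the general topological answer below, a tiny numerical illustration of how properness can fail (a sketch; $\arctan$ is the standard non-proper map, with non-closed image $(-\pi/2,\pi/2)$):
-
-    import numpy as np
-
-    # arctan is continuous but not proper: the preimage of the compact set
-    # [1, pi/2] is [tan(1), infinity), which is unbounded, hence not compact.
-    print(np.tan(1.0))                 # left endpoint of the unbounded preimage, ~1.557
-    print(np.arctan([1e2, 1e4, 1e6]))  # creeps up to pi/2 ~ 1.5708 but never attains it,
-                                       # so the image (-pi/2, pi/2) is not closed
-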
-
REPLY [15 votes]: One may generalize the result in R_D's answer even further:

-A proper map $f:X\to Y$ to a compactly generated Hausdorff space is a closed map (A space $Y$ is called compactly generated if any subset $A$ of $Y$ is closed when $A\cap K$ is closed in $K$ for each compact $K\subseteq Y$).
-Proof: Let $C\subseteq X$ be closed, and let $K$ be a compact subspace of $Y$. Then $f^{-1}(K)$ is compact, and so is $f^{-1}(K)\cap C =: B$. Then $f(B)=K\cap f(C)$ is compact, and as $Y$ is Hausdorff, $f(B)$ is closed. Since $Y$ is compactly generated, $f(C)$ is closed in $Y$.

-A locally compact space $Y$ is compactly generated: If $A\subset Y$ intersects each compact set in a closed set, and if $y\notin A$, then $A$ intersects the compact neighborhood $K$ of $y$ in a closed set $C$. Now $K\setminus C$ is a neighborhood of $y$ disjoint from $A$, hence $A$ is closed.<|endoftext|>
-TITLE: Action of a matrix on the exterior algebra
-QUESTION [8 upvotes]: I read in a paper that if $M$ is a real square matrix of size $n$, then we can consider the action of $M$ in the third exterior algebra $\Lambda^3 \mathbb{R}^n$, and the entries of the matrix of this action are the $3\times 3$ minors of $M$. Here I am not clear what the action mentioned above is, and thus I do not understand the latter statement. Can someone help me? Thanks a lot!

-REPLY [29 votes]: Let me expand my comment into a complete answer.
-Let $V$ be a finite-dimensional vector space, with dimension $n$ and basis $\left\{b_1,\ldots,b_n\right\}$. The $k^{\text{th}}$ exterior power $\Lambda^k(V)$ has dimension $n\choose k$, and the basis for $V$ induces a standard basis on $\Lambda^k(V)$ given by the collection of all (wedge) products of the form $b_{i_1}\wedge\cdots\wedge b_{i_k}$, where the $i_j$ are increasing, i.e. $1\leq i_1<\cdots<i_k\leq n$. A linear map $M:V\to V$ induces a linear map $\Lambda^k(M)$ on $\Lambda^k(V)$, determined by $\Lambda^k(M)(v_1\wedge\cdots\wedge v_k)=Mv_1\wedge\cdots\wedge Mv_k$, and with respect to this standard basis the entries of its matrix are exactly the $k\times k$ minors of $M$; for $k=3$ this is precisely the statement in the question.<|endoftext|>
-TITLE: Find type of a differential form on an almost complex manifold
-QUESTION [5 upvotes]: If $M$ is a nearly Kähler manifold (that is, an almost Hermitian manifold on which $\nabla_X(J)X=0$) we have the three-forms
-$$ A(X,Y,Z)=\langle\nabla_X(J)Y,Z\rangle \quad\text{and}\quad B(X,Y,Z)=\langle\nabla_X(J)Y,JZ\rangle. $$
-How can I prove that these forms are of type $(0,3)+(3,0)$?

-Edit: This claim can be found on pp. 3-4 of the paper "Nearly Kähler geometry and Riemannian foliations" by P. A. Nagy.

-REPLY [4 votes]: First extend $J$ complex linearly so that $A$ and $B$ are defined on $TM\otimes_{\mathbb{R}}\mathbb{C}$. Note that $TM\otimes_{\mathbb{R}}\mathbb{C}$ decomposes as the direct sum of the two eigenspaces of $J$, namely $T^{1, 0}M$ and $T^{0,1}M$, with eigenvalues $i$ and $-i$ respectively.
-Also extending the metric and $\nabla$ complex linearly, $A$ becomes a real three-form on $TM\otimes_{\mathbb{R}}\mathbb{C}$. As such, it can be uniquely written as
-$$A = A^{3,0} + A^{2, 1} + A^{1, 2} + A^{0, 3}$$
-where $A^{p, q}$ is a $(p, q)$-form. In fact, as $A$ is a real form we have $A^{2,1} = \overline{A^{1,2}}$ and $A^{3,0} = \overline{A^{0,3}}$.
-Note that $A(X, Y, JZ) = B(X, Y, Z) = -B(X, Z, Y) = -A(X, Z, JY) = A(X, JY, Z)$. As $J$ preserves its eigenspaces, we see that the identity is true at the $(p, q)$ level.
-Now suppose $X, Y \in \Gamma(M, T^{0,1}M)$ and $Z \in \Gamma(M, T^{1,0}M)$. Then
-\begin{align*}
-A^{1,2}(X, Y, JZ) &= A^{1,2}(X, JY, Z)\\
-iA^{1,2}(X, Y, Z) &= -iA^{1,2}(X, Y, Z)\\
-A^{1,2}(X, Y, Z) &= 0
-\end{align*}
-and hence $A^{1,2} = 0$ by skew-symmetry. As $A^{2,1} = \overline{A^{1,2}} = 0$, $A$ is of type $(3, 0) + (0, 3)$.
-As $B(X, Y, JZ) = B(X, JY, Z)$, the same calculation can be used to show that $B$ is also of type $(3, 0) + (0, 3)$.
-
-The form $A$ satisfies the identity $A(X, Y, JZ) = A(X, JY, Z)$. In fact, it follows from skew-symmetry that
-$$A(JX, Y, Z) = A(X, JY, Z) = A(X, Y, JZ),$$
-and likewise for $B$. More generally, if $C$ is a $p$-form such that
-$$C(JX_1, X_2, \dots, X_p) = \dots = C(X_1, \dots, JX_i, \dots, X_p) = \dots = C(X_1, X_2, \dots, JX_p),$$
-then $C$ is of type $(p, 0) + (0, p)$.
-
-Thanks to the OP for helping me simplify the above arguments and for pointing out the final general fact.<|endoftext|>
-TITLE: Prove every element of $G$ has finite order.
-QUESTION [5 upvotes]: Let $G$ be a group such that the intersection of all its subgroups different from $\{e\}$ is a subgroup different from $\{e\}$. Prove that every element of $G$ has finite order.
-Assume that $a$ has infinite order in $G$. Then let $y\neq e$ belong to the intersection of all subgroups of $G$ different from $\{e\}$. Then $y=a^n$, and $y$ will also belong to the subgroup generated by $a^2$, hence $y=a^{2m}$ for some integer $m$. This contradicts that $a$ has infinite order.
-Is this correct?

-REPLY [2 votes]: To express your idea differently:
-If $G$ has an element $a$ of infinite order, then the intersection of all nontrivial subgroups is trivial.
-Indeed, this is true for $\mathbb Z$, which is isomorphic to $\langle a \rangle$.
-Therefore, if the intersection of all nontrivial subgroups is nontrivial, then $G$ cannot have an element of infinite order and so all elements must have finite order.<|endoftext|>
-TITLE: Show that $\beta $ is algebraic over $F(\alpha)$.
-QUESTION [6 upvotes]: I have started reading field theory.
-Let $E$ be an extension field of $F$ and let $\alpha,\beta\in E$. Suppose that $\alpha $ is transcendental over $F$ but algebraic over $F(\beta)$.
-Show that $\beta$ is algebraic over $F(\alpha)$.
-Since $\alpha$ is algebraic over $F(\beta)$, there exists $p(x)\neq 0$ such that $p(\alpha)=0$. So $p(x)$ must be a polynomial over $F(\beta)$ and not over $F$.
-But these facts are taking me nowhere near the solution. Any help will be appreciated.

-REPLY [8 votes]: Consider the polynomial $p$ and write it as
-$$
-p(x) = a_0 + a_1x + \ldots + a_nx^n
-$$
-where each $a_i \in F(\beta)$ is of the form
-$$
-a_i = \frac{q_i(\beta)}{r_i(\beta)}
-$$
-where $q_i,r_i \in F[x]$ are polynomials. Hence if $r := \prod r_i$, then
-$$
-r(\beta)[\tilde{q_0}(\beta) + \tilde{q_1}(\beta)\alpha + \ldots + \tilde{q_n}(\beta)\alpha^n] = 0
-$$
-for some polynomials $\tilde{q_0}, \tilde{q_1}, \ldots, \tilde{q_n} \in F[x]$. Collecting like terms, one can write this in the form
-$$
-b_0 + b_1\beta + \cdots + b_m\beta^m = 0
-$$
-where each $b_i$ is a polynomial expression in $\alpha$. This proves that $\beta$ is algebraic over $F(\alpha)$.<|endoftext|>
-TITLE: What are some math concepts which were originally inspired by physics?
-QUESTION [35 upvotes]: There are a number of concepts which were first introduced in the physics literature (usually in an ad-hoc manner) to solve or simplify a particular problem, but later proven rigorously and adopted as general mathematical tools.
-One example is the Dirac delta "function" which was used to simplify integrals, but at the time was perhaps not very well-defined to any mathematical standard. However, it now fits well within the theory of distributions. Perhaps another example is Newton's calculus, inspired by fundamental questions in physics.
-Are there any other examples of mathematical concepts being inspired by work in physics?

-REPLY [4 votes]: One that came to mind is the concept of the soliton, which is a self-reinforcing solitary wave, whose discovery eventually led to the Korteweg–de Vries equation and other applications in differential systems, field theory, etc.<|endoftext|>
-TITLE: Prove: $\forall$ $n\in \mathbb N, (2^n)!$ is divisible by $2^{(2^n)-1}$ and is not divisible by $2^{2^n}$
-QUESTION [6 upvotes]: I assume induction must be used, but I'm having trouble thinking about how to use it when dealing with divisibility, since there's no clear, useful way of factorizing the numbers.

-REPLY [2 votes]: A Combinatorial Approach:
-For each nonnegative integer $n$, show that $a_n:=\frac{\left(2^n\right)!}{2^{2^n-1}}$ is the number of ways to label the leaves of a complete binary tree $T_n$ with $2^n$ leaves by $1$, $2$, $\ldots$, $2^n$. (Two labelings are considered to be the same if there is a graph automorphism $f$ on $T_n$ such that, for $i=1,2,\ldots,2^n$, $f$ sends the leaf with label $i$ in one labeling to the leaf with the same label in the other labeling.) Prove that $a_n$ is odd for all $n\in\mathbb{N}_0$ by showing that $\frac{a_n}{a_{n-1}}$ is an odd integer for $n=1,2,3,\ldots$, whereas $a_0=1$. (This second part can be proven combinatorially as well, because $\frac{a_n}{a_{n-1}}$ for $n\in\mathbb{N}$ is the number of ways to partition $\left\{1,2,\ldots,2^n\right\}$ into $2^{n-1}$ subsets each of which has $2$ elements. It follows immediately that $\frac{a_n}{a_{n-1}}=\left(2^n-1\right)!!=\prod_{i=1}^{2^{n-1}}\,(2i-1)$.)<|endoftext|>
-TITLE: Interval for area bounded by $r = 1 + 3 \sin \theta$
-QUESTION [6 upvotes]: I'm trying to calculate the area of the region bounded by one loop of the graph for the equation
-$$
-r = 1 + 3 \sin \theta
-$$
-I first plot the graph as a limaçon with the tip of the outer loop at $(4, \frac{\pi}{2})$ and the tip of the inner loop at $(-2, \frac{3 \pi}{2})$. I then note the graph is symmetric with respect to the $\frac{\pi}{2}$ axis and the zero for the right half is at $\theta = \arcsin(-\frac{1}{3})$.
-So, I chose the interval $[\arcsin(-\frac{1}{3}),\frac{\pi}{2}]$ to calculate the area, which can then be multiplied by $2$ for the other half. The problem is that the answer in the book seems to use $\arcsin(\frac{1}{3})$ instead; note the change of sign.
-Just to make sure I'm not misunderstanding where I went wrong, I get the answer
-$$
-\frac{11 \pi}{4} - \frac{11}{2} \arcsin(-\frac{1}{3}) + 3 \sqrt 2
-$$
-Whereas the book gets
-$$
-\frac{11 \pi}{4} - \frac{11}{2} \arcsin(\frac{1}{3}) - 3 \sqrt 2
-$$
-It's a subtle change of sign but I'd really like to understand where I went wrong.

-REPLY [3 votes]: Notice how $\arcsin(-\frac{1}{3}) = - \arcsin(\frac{1}{3})$, so your answer now looks like
-$$
-\frac{11 \pi}{4} + \frac{11}{2} \arcsin(\frac{1}{3}) + 3 \sqrt 2 \\
-$$
-That means your area is greater than the answer in your book by:
-$$
-2 \left(\frac{11}{2} \arcsin(\frac{1}{3}) + 3 \sqrt 2\right)
-$$
-This might indicate you are calculating the area of the outer loop whereas your book is calculating the inner loop.
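-A quick numerical cross-check of the two candidate areas (a sketch; it just applies the polar area formula $\frac12\int r^2\,\mathrm{d}\theta$ with a midpoint rule):
-
-    from math import asin, pi, sin, sqrt
-
-    def polar_area(t0, t1, steps=200000):
-        # (1/2) * integral of (1 + 3 sin t)^2 over [t0, t1]
-        h = (t1 - t0) / steps
-        return 0.5 * h * sum((1 + 3*sin(t0 + (i + 0.5)*h))**2 for i in range(steps))
-
-    outer = 2 * polar_area(asin(-1/3), pi/2)          # the interval used in the question
-    inner = 2 * polar_area(3*pi/2, 2*pi - asin(1/3))  # the interval used below
-    print(outer)   # ~14.7511 -- matches the question's answer
-    print(inner)   # ~ 2.5276 -- matches the book's answer
-    print(outer - inner, 2*(11/2*asin(1/3) + 3*sqrt(2)))  # both ~12.2235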
If you choose the interval $[\frac{3 \pi}{2}, 2 \pi - \arcsin(\frac{1}{3})]$ to calculate the half as you did before, you get:
-$$
-\begin{eqnarray}
-A &=& 2 \times \frac{1}{2} \int_{\frac{3 \pi}{2}}^{2 \pi - \arcsin \frac{1}{3}} (1 + 3 \sin \theta)^2 \, \textrm{d}\theta \\
-&=& \left[\frac{11 \theta}{2} - 6 \cos \theta - \frac{9 \sin(2 \theta)}{4} \right]_{\frac{3 \pi}{2}}^{2 \pi - \arcsin \frac{1}{3}} \\
-&=& \frac{11 \pi}{4} - \frac{11}{2} \arcsin(\frac{1}{3}) - 3 \sqrt 2 \\
-\end{eqnarray}
-$$
-This seems to agree with the answer in your book.<|endoftext|>
-TITLE: Some equations from Russian maths book.
-QUESTION [6 upvotes]: Could you please help me with solving these equations? I would like to solve them in the most sneaky way. All of the exercises in this book can be solved in some clever way which I often can't find.
-$$
-\frac{(x-1)(x-2)(x-3)(x-4)}{(x+1)(x+2)(x+3)(x+4)} = 1
-$$
-$$
-\frac{6}{(x+1)(x+2)} + \frac{8}{(x-1)(x+4)} = 1
-$$
-$$
-\sqrt[7]{ (ax-b)^{3}} - \sqrt[7]{ (b-ax)^{3} } = \frac{65}{8}; a \neq 0
-$$

-REPLY [6 votes]: The first equation implies that the product of the distances from $x$ to $-1, -2, -3, -4$ is the same as the product of the distances from $x$ to $1, 2, 3, 4$. (This condition is equivalent to the quotient appearing in the equation being $\pm 1$.) If $x > 0$, the latter distances are smaller than the corresponding former ones, so the latter product is smaller. If $x < 0$, then the former is. So the only possibility is $x = 0$.
-In the second equation, make the substitution $u = x^2 + 3x$.
-For the third one, you were almost there with what you wrote in the comments. But remember that $\sqrt[7]{-A} = - \sqrt[7]{A}$. You get $(ax - b)^{3/7} = 65/16$. Raise both sides of the equation to the power of $7/3$.
-Edit: The third equation has a typo in it. It was supposed to read
-$$\sqrt[7]{ (ax-b)^{3}} - \sqrt[7]{ (b-ax)^{-3} } = \frac{65}{8}.$$
-In this case, write $u = (ax-b)^{3/7}$. Then the equation becomes $u + 1/u = 65/8$. Since after clearing denominators this becomes a quadratic equation, there are at most two possibilities for $u$. Since $u = 8$ and $u = 1/8$ work, these must be the ones. We get $ax - b \in \{128, 1/128\}$, after which it's easy to solve for $x$.<|endoftext|>
-TITLE: Extend isometry on some cube vertices to the entire cube
-QUESTION [5 upvotes]: Let $K\subset V=\{-1,1\}^n$ be a set of vertices of the $n$-dimensional hypercube $D=[-1,+1]^n$ and let $f:K\to V$ be an isometry with respect to the Euclidean metric inherited from $\mathbb R^n.$ We do not know a priori that $f$ corresponds to a linear endomorphism of $\mathbb R^n.$
-Can $f$ necessarily be expressed as a composition of reflections by coordinate hyperplanes and permutations of coordinates, i.e.,
-does there exist a global symmetry of $D$ that restricts to $f$ on $K$?
-This question came up here when analysing the distribution of some random vectors.

-REPLY [2 votes]: Here is a counterexample. Let $n=4$ and let $K$ consist of the rows of the following array:
-$$\begin{matrix}
--1 & -1 & -1 & -1 \\
-1 & 1 & -1 & -1 \\
-1 & -1 & 1 & -1 \\
-1 & -1 & -1 & 1 \\
-\end{matrix}$$
-Note that any isometry $D\to D$ which fixes the first $3$ points must be the identity (fixing the first point says you don't reverse the sign of any coordinate, and each coordinate is uniquely determined by the subset of the $3$ points that take the value $1$ on that coordinate, so the permutation of the coordinates must be trivial).
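-The parenthetical claim is easy to confirm by brute force over all $2^4\cdot 4!=384$ symmetries of $D$ (signed permutations of the coordinates); a quick sketch:
-
-    from itertools import permutations, product
-
-    pts = [(-1,-1,-1,-1), (1,1,-1,-1), (1,-1,1,-1)]  # the first three points
-
-    def apply(perm, signs, v):
-        # a symmetry of the cube: permute coordinates, then flip signs
-        return tuple(signs[i] * v[perm[i]] for i in range(4))
-
-    fixing = [(p, s) for p in permutations(range(4))
-                     for s in product((1, -1), repeat=4)
-                     if all(apply(p, s, x) == x for x in pts)]
-    print(fixing)  # [((0, 1, 2, 3), (1, 1, 1, 1))] -- only the identity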
But there is an isometry $K\to V$ which fixes the first $3$ points and sends the last point to -$$\begin{matrix} --1 & 1 & 1 & -1 -\end{matrix}$$<|endoftext|> -TITLE: This theorem about matrices of linear maps doesn't look correct. -QUESTION [9 upvotes]: Consider the following theorem: -Theorem. Let $f\colon L\to M$ be a linear mapping of finite-dimensional vector spaces. Then there exist bases in $L$ and $M$ and a natural number $r$ such that the matrix of $f$ in these bases has the form $(a_{ij})$, where $a_{ii}=1$ for $1\leq i\leq r$ and $a_{ij}=0$ for the other values of $i,j$. Furthermore, $r$ is the rank of $f$. -This theorem doesn't make much sense to me. Doesn't it imply that, for example, if $L$ and $M$ have the same dimension, every injective linear map can be represented by the identity matrix in some basis? This looks weird. -Can you comment on this? It is a theorem in Section 8, Chapter 1 of Kostrikin and Manin's book "Linear Algebra and Geometry". -Actually, it is not copied word-by-word, but I think that I wrote exactly what they meant. - -REPLY [15 votes]: Perhaps you're confused because you know that if you were to have an injective linear map $g:L\rightarrow L$ then it wouldn't necessarily be true that there was a basis of $L$ so that the matrix of $g$ in this basis was the identity. -But the stated theorem talks about an injective linear map $f:L\rightarrow M$, between two different vector spaces. So we are considering picking a basis of $L$ and (completely separately) picking a basis of $M$. This gives us much more freedom, indeed enough to make the theorem true.<|endoftext|> -TITLE: $5$ questions on the definition of the Gelfand triple -QUESTION [13 upvotes]: Let $(H,\langle\;\cdot\;,\;\cdot\;\rangle)$ be a Hilbert space over $\mathbb F\in\left\{\mathbb R,\mathbb C\right\}$, $\left\|\;\cdot\;\right\|$ be the norm induced by $\langle\;\cdot\;,\;\cdot\;\rangle$ and $\Phi$ be a subspace of $H$. - -Question 1: Why can we find a finer topology $\tau$ on $\Phi$ such that $$\iota:(\Phi,\tau)\to(H,\left\|\;\cdot\;\right\|)\;,\;\;\;x\mapsto x\tag 1$$ is continuous? -Question 2: Why is it no loss to assume that $\Phi$ is dense in $(H,\left\|\;\cdot\;\right\|)$? - -Now, let $$\Phi^\ast\stackrel{\text{def}}=\left\{f:\Phi\to\mathbb F\mid f\text{ is continuous and linear}\right\}\tag 2$$ denote the dual space of $\Phi$. Then, for all $f\in\Phi^*$ there is exactly one $\phi\in\Phi$ such that $$f\equiv\langle\;\cdot\;,\phi\rangle\tag 3$$ by the Fréchet-Riesz representation theorem. -Let me quote from the Wikipedia article about the Gelfand triple: - -We consider the inclusion of dual spaces $H^\ast$ in $\Phi^\ast$. The latter, dual to $\Phi$ in its 'test function' topology, is realised as a space of distributions or generalised functions of some sort, and the linear functionals on the subspace $\Phi$ of type $$\phi\mapsto\langle v,\phi\rangle$$ for $v$ in $H$ are faithfully represented as distributions (because we assume $\Phi$ dense). - -I can't make much sense of this paragraph. - -Question 3: In $(1)$ we had considered the inclusion of $\Phi$ in $H$. Why do we now consider the inclusion of $H^\ast$ in $\Phi^\ast$? Moreover, given the definition of the dual space in $(2)$, we won't have $H^\ast\subseteq\Phi^\ast$ unless $\Phi=H$. So, what is meant by inclusion here? -Question 4: What do they mean by 'test function' topology? Is that just a fancy name for $\tau$? -Question 5: I have no idea what they mean in the last sentence. I'm not familiar with distributions. 
Is this somehow related to $(3)$? And why do we need the density of $\Phi$?

-REPLY [5 votes]: Answer to question 1: Let $\left|\;\cdot\;\right|$ be a norm on $\Phi$. Since $\iota$ is linear, it is continuous if and only if $$\left\|\phi\right\|\le c|\phi|\;\;\;\text{for all }\phi\in\Phi\tag 1$$ for some $c>0$. Since $\Phi$ is a subset of $H$ we can choose $\left|\;\cdot\;\right|$ to be the restriction of $\left\|\;\cdot\;\right\|$ to $\Phi$ and $c=1$. Now we can choose $\tau$ to be the topology induced by $\left|\;\cdot\;\right|$. It's clear that the topology of open sets in $(\Phi,\left\|\;\cdot\;\right\|)$ is contained in $\tau$, i.e. $\tau$ is finer, and hence $\iota$ is a continuous embedding.

-**Answer to question 2**: I'm unsure why they state that it's no loss to assume that $\Phi$ is dense in $(H,\left\|\;\cdot\;\right\|)$. However, they might mean the following:

-Let $(\;\cdot\;,\cdot\;)$ be the inner product on $\Phi$ derived from $(H,\langle\;\cdot\;,\;\cdot\;\rangle)$. Then, there is a Hilbert space $\tilde H$ containing a dense subspace $\tilde\Phi$ such that $\tilde H$ is unique up to isometric isomorphism and $\tilde\Phi$ is isometrically isomorphic to $\Phi$. $$\tilde H=:\overline\Phi^{(\;\cdot\;,\cdot\;)}$$ is called the completion of $\left(\Phi,(\;\cdot\;,\cdot\;)\right)$.

-Now, we might be willing to replace $(H,\Phi)$ by $(\tilde H,\tilde\Phi)$. In the mentioned sense, it's "no loss" to assume the density. At least if the main object of interest is $\Phi$ (and not $H$).

-Answer to question 3: Clearly, $$\left.f\right|_{\Phi}\in\Phi^\ast\;\;\;\text{for all }f\in H^\ast\;.$$

-Let $(X,\left\|\;\cdot\;\right\|_X)$ be a normed space $\Rightarrow$ $$\left\|f\right\|_{X^\ast}:=\sup_{\left\|x\right\|_X=1}|f(x)|$$ is a norm on $X^\ast$.

-I will assume that $\tau$ is generated by a norm on $\Phi$. According to the Wikipedia article, we should be able to prove the following result without this assumption. Maybe someone else is able to provide an answer targeting this issue.
-Since $$\iota^\ast:H^\ast\to\Phi^\ast\;,\;\;\;f\mapsto\left.f\right|_{\Phi}\tag 2$$ is linear and $$\left\|\iota^\ast(f)\right\|_{\Phi^\ast}\le\left\|f\right\|_{H^\ast}$$ by definition of the supremum, $\iota^\ast$ is continuous. Now, we need to prove that $\iota^\ast$ is injective:

-Let $f\in H^\ast$ and $g:=\left.f\right|_{\Phi}$.
-Since $\Phi$ is dense in $(H,\left\|\;\cdot\;\right\|)$, for all $x\in H$, there is a sequence $(\phi_n)_{n\in\mathbb N}$ such that $$\left\|\phi_n-x\right\|\stackrel{n\to\infty}\to 0$$ and hence (since $f$ is continuous) $$|g(\phi_n)-f(x)|\stackrel{n\to\infty}\to 0.$$
-Thus, $f$ is uniquely determined by its values on $\Phi$, i.e. $\iota^\ast$ is injective.

-So, we can conclude that $H^\ast$ is continuously embedded into $\Phi^\ast$, $$H^\ast\hookrightarrow\Phi^\ast\;,\tag 3$$ which is most probably what they mean by "$H^\ast\subseteq\Phi^\ast$".

-Question 4 and Question 5 are still open and might be answered by someone else. However, let me repeat the fact that for each $f\in H^\ast$ there is exactly one $x=x(f)\in H$ such that $$f\equiv\langle\;\cdot\;,x\rangle$$ and hence $$H^\ast\to H\;,\;\;\;f\mapsto x(f)\tag 4$$ is injective. Since $\langle\;\cdot\;,x\rangle\in H^\ast$ for all $x\in H$, $(4)$ is even bijective. Thus, we can identify $H^\ast$ and $H$ and summarize $$\Phi\hookrightarrow H\cong H^\ast\hookrightarrow\Phi^\ast\;.$$<|endoftext|>
-TITLE: A $\log \Gamma $ identity: Where does it come from?
-QUESTION [8 upvotes]: $$\log \Gamma (n)=n\log n -n +\frac{1}{2} \log \frac{2\pi}{n}+\int_0^\infty \frac{2\arctan (\frac{x}{n})}{e^{2\pi x}-1} \,\mathrm{d}x$$
-This is an identity derived using Stirling's approximation. I can't quite figure out how it was used, and was wondering about a proof.

-REPLY [3 votes]: Proposition: $$\int_{0}^{\infty} \dfrac{\log(1-e^{-2a\pi x})}{1+x^2} \mathrm{d}x = \pi \left[\dfrac{1}{2} \log (2a\pi ) + a(\log a - 1) - \log(\Gamma(a+1)) \right]$$

-Proof: Let $ \displaystyle \text{I} (a) = \int_{0}^{\infty} \dfrac{\log(1-e^{-2a\pi x})}{1+x^2} \mathrm{d}x$
-$\displaystyle = -\sum_{r=1}^{\infty} \dfrac{1}{r} \int_{0}^{\infty} \dfrac{e^{-2ar \pi x}}{1+x^2} \mathrm{d}x $
-$\displaystyle = -\sum_{r=1}^{\infty} \dfrac{1}{r} \int_{0}^{\infty} \int_{0}^{\infty} e^{-x(2ar \pi + y)} \sin y \ \mathrm{d}y \ \mathrm{d}x $
-$\displaystyle = -\sum_{r=1}^{\infty} \dfrac{1}{r} \int_{0}^{\infty} \dfrac{\sin y}{2ar\pi + y} \mathrm{d}y $
-$\displaystyle = - \sum_{r=1}^{\infty} \dfrac{1}{r} \int_{0}^{\infty} \dfrac{\sin x}{2ar\pi + x} \mathrm{d}x$
-Substitute $\displaystyle x \mapsto 2 r \pi x$
-$\displaystyle \implies \text{I}(a) = - \sum_{r=1}^{\infty} \dfrac{1}{r} \int_{0}^{\infty} \dfrac{\sin 2 r \pi x}{x + a} \mathrm{d}x $
-$\displaystyle = -\int_{0}^{\infty} \dfrac{1}{x+a} \sum_{r=1}^{\infty} \dfrac{\sin(2 r \pi x)}{r} \mathrm{d}x $
-$\displaystyle = -\pi \int_{0}^{\infty} \dfrac{1}{x+a} \left(\dfrac{1}{2} - \{x\} \right) \mathrm{d}x \quad \left( \because \sum_{r=1}^{\infty} \dfrac{\sin (2 r \pi x)}{r} = \dfrac{\pi}{2} - \pi \{x\} \right) $
-$\displaystyle = -\pi \left[ \int_{0}^{\infty} \dfrac{\mathrm{d}x}{2(x+a)} - \int_{0}^{\infty} \dfrac{\{x\}}{x+a} \mathrm{d}x \right] $
-$\displaystyle = -\pi \lim_{n \to \infty} \left[ \dfrac{1}{2} \log \left(\dfrac{a+n}{a}\right) - \text{A}(n) \right]$
-where $\displaystyle \text{A}(n) = \int_{0}^{n} \dfrac{\{x\} }{x+a} \mathrm{d}x$
-Now,
-$\displaystyle \text{A}(n) = \int_{0}^{n} \dfrac{\{x\} }{x+a} \mathrm{d}x $
-$\displaystyle = \sum_{k=0}^{n-1} \int_{k}^{k+1} \dfrac{x-k}{x+a} \mathrm{d}x $
-$\displaystyle = \sum_{k=0}^{n-1} \left[1 - (k+a)\log \left(\dfrac{k+a+1}{k+a}\right) \right] $
-$\displaystyle = n - \sum_{k=0}^{n-1} \left[ (k+a+1) \log (k+a+1) - (k+a) \log (k+a) - \log (k+a+1) \right] $
-$\displaystyle = n + a\log a - (a+n) \log (a+n) +\log(a \cdot (a+1) \cdot \ldots \cdot (a+n)) - \log a $
-$\displaystyle \implies \text{I} (a) = -\pi \lim_{n \to \infty} \left[ \dfrac{1}{2} \log \left(\dfrac{a+n}{a}\right) - n - a\log a + (a+n) \log (a+n) - \log(a \cdot (a+1) \cdot \ldots \cdot (a+n)) + \log a \right] $
-Note that $\displaystyle \lim_{n \to \infty} \dfrac{ n! n^t}{t \cdot (t+1) \cdot \ldots \cdot (t+n)} = \Gamma(t) $
-$\displaystyle \implies \text{I} (a) = -\pi \lim_{n \to \infty} \left[ \dfrac{1}{2} \log \left(\dfrac{a+n}{a}\right) - n - a\log a + (a+n) \log (a+n) - a\log(n) - \log (n!)
+ \log(\Gamma(a)) + \log a \right] $
-Simplifying using Stirling's approximation and $\displaystyle \lim_{n \to \infty } n \log \left(1 + \dfrac{a}{n} \right) = a $, we have
-$\displaystyle \text{I} (a) = -\pi \left[\log(\Gamma(a+1)) - \dfrac{1}{2} \log(2a \pi) - a (\log(a) -1) \right] $
-$\displaystyle = \pi \left[ \dfrac{1}{2} \log(2a \pi) + a (\log(a) -1) - \log(\Gamma(a+1)) \right] \quad \square $
-Now, applying integration by parts to the proposition and simplifying, we get
-$ \displaystyle \int_{0}^{\infty} \tan^{-1} \left(\dfrac{t}{a}\right)\dfrac{\mathrm{d}t}{e^{2\pi t} - 1} = \dfrac{1}{2} \left[ \log(\Gamma(a)) - \dfrac{\log(2a)}{2} - \left( a - \dfrac{1}{2} \right)\log (a) +a\right] $
-$$\therefore \log(\Gamma(a)) = 2\int_{0}^{\infty} \tan^{-1} \left(\dfrac{t}{a}\right)\dfrac{\mathrm{d}t}{e^{2\pi t} - 1} + \dfrac{\log(2a)}{2} + \left( a - \dfrac{1}{2} \right)\log (a) -a \quad \square $$<|endoftext|>
-TITLE: Inverse Laplace transform of $1/\sqrt{s^2-a^2}$ using complex integration
-QUESTION [9 upvotes]: I want to find the inverse Laplace transform of
-$$F(s) = \frac{1}{\sqrt{s^2-a^2}}$$
-preferably using the Bromwich integral:
-$$f(t) = \frac{1}{2\pi i}\int_{\beta -i \infty}^{\beta +i \infty}e^{st}F(s) ds $$
-The problem is that the integrand has two branch points at $s=+a$ and $s=-a$. (I've seen examples which have a branch point at the origin; those can be handled easily by excluding that point with an infinitesimal circle.)
-In order to be able to apply the Cauchy theorem to find the Bromwich integral on the original contour, the new contour should be like this, but I don't know how to perform the integration:

-REPLY [5 votes]: Consider the contour integral
-$$\oint_C dz \frac{e^{t z}}{\sqrt{z^2-a^2}} $$
-where $C$ is the contour drawn above, and $t \gt 0$. By Cauchy's theorem, this integral is zero. However, to evaluate the ILT, we need to evaluate all of the pieces of the contour integral. Thankfully, the OP has provided a diagram with such nice labels. Thus,
-$$\int_{AB} dz \frac{e^{t z}}{\sqrt{z^2-a^2}} = \int_{\beta-i R}^{\beta+i R} ds \frac{e^{t s}}{\sqrt{s^2-a^2}}$$
-$$\int_{BC} dz \frac{e^{t z}}{\sqrt{z^2-a^2}} = i R \int_{\pi/2}^{\pi} d\theta \, e^{i \theta} \frac{e^{t R e^{i \theta}}}{\sqrt{R^2 e^{i 2 \theta}-a^2}} $$
-$$\int_{CD} dz \frac{e^{t z}}{\sqrt{z^2-a^2}} = \int_{-R}^{-a-i \epsilon} dx \frac{e^{t x}}{\sqrt{x^2-a^2}} $$
-$$\int_{DE} dz \frac{e^{t z}}{\sqrt{z^2-a^2}} = i \epsilon \int_{\pi}^0 d\phi \, e^{i \phi} \frac{e^{t(-a+\epsilon e^{i \phi})}}{\sqrt{(-a+\epsilon e^{i \phi})^2-a^2}}$$
-$$\int_{EF} dz \frac{e^{t z}}{\sqrt{z^2-a^2}} = \int_{-a+\epsilon}^{a-\epsilon} dx \frac{e^{t x}}{e^{i \pi/2} \sqrt{a^2-x^2}} $$
-$$\int_{FG} dz \frac{e^{t z}}{\sqrt{z^2-a^2}} = i \epsilon \int_{\pi}^{-\pi} d\phi \, e^{i \phi} \frac{e^{t(a+\epsilon e^{i \phi})}}{\sqrt{(a+\epsilon e^{i \phi})^2-a^2}}$$
-$$\int_{GH} dz \frac{e^{t z}}{\sqrt{z^2-a^2}} = \int_{a+\epsilon}^{-a-\epsilon} dx \frac{e^{t x}}{e^{-i \pi/2} \sqrt{a^2-x^2}} $$
-$$\int_{HI} dz \frac{e^{t z}}{\sqrt{z^2-a^2}} = i \epsilon \int_{2 \pi}^{\pi} d\phi \, e^{i \phi} \frac{e^{t(-a+\epsilon e^{i \phi})}}{\sqrt{(-a+\epsilon e^{i \phi})^2-a^2}}$$
-$$\int_{IJ} dz \frac{e^{t z}}{\sqrt{z^2-a^2}} = \int_{-a-i \epsilon}^{-R} dx \frac{e^{t x}}{\sqrt{x^2-a^2}} $$
-$$\int_{JA} dz \frac{e^{t z}}{\sqrt{z^2-a^2}} = i R \int_{\pi}^{3 \pi/2} d\theta \, e^{i \theta} \frac{e^{t R e^{i \theta}}}{\sqrt{R^2 e^{i 2 \theta}-a^2}} $$
-OK, there's a lot there, but it's not nearly as bad as it looks.
The integral over $AB$ will be $i 2 \pi$ times the ILT as $R \to \infty$. The integral over $BC$ vanishes in this limit because its magnitude is bounded by
-$$\frac{R}{\sqrt{R^2-a^2}} \int_0^{\pi/2} d\theta \, e^{-t R \sin{\theta}} \le \frac{R}{\sqrt{R^2-a^2}} \int_0^{\pi/2} d\theta \, e^{-2 t R \theta/\pi} \le \frac{\pi}{2 t \sqrt{R^2-a^2}}$$
-The integral over $JA$ vanishes for similar reasons. The integrals over $CD$ and $IJ$ cancel each other out. The integrals over $DE$, $HI$, and $FG$ vanish as $\epsilon \to 0$. Thus, in these limits, we may write the ILT as follows:
-$$\int_{\beta-i \infty}^{\beta+i \infty} ds \frac{e^{t s}}{\sqrt{s^2-a^2}} - i 2 \int_{-a}^a dx \frac{e^{t x}}{\sqrt{a^2-x^2}} = 0$$
-or
-$$\frac1{i 2 \pi} \int_{\beta-i \infty}^{\beta+i \infty} ds \frac{e^{t s}}{\sqrt{s^2-a^2}} = \frac1{\pi} \int_{-a}^a dx \frac{e^{t x}}{\sqrt{a^2-x^2}} $$
-We may evaluate the integral on the RHS as follows. Sub $x=a \cos{u}$; then the integral is equal to
-$$\frac1{\pi} \int_0^{\pi} du \, e^{a t \cos{u}} = I_0(a t)$$
-where $I_0$ is the modified Bessel function of the first kind of zeroth order. Thus,

-$$\frac1{i 2 \pi} \int_{\beta-i \infty}^{\beta+i \infty} ds \frac{e^{t s}}{\sqrt{s^2-a^2}} = I_0(a t)$$<|endoftext|>
-TITLE: Jeep problem variant: cross the desert with as much fuel as possible
-QUESTION [5 upvotes]: I'm dealing with the following variant of the well-known Jeep problem:

-A 1000 mile wide desert needs to be crossed in a Jeep. The mileage is one mile per gallon and the Jeep can transport up to 1000 gallons of gas at any time. Fuel may be dropped off at any location in the desert and picked up later. There are 3000 gallons of gas in the base camp. How much fuel can the Jeep transport to the camp on the other side of the desert?

-So instead of "exploring" the desert or trying to drive as far as possible, the problem here is to transport as much fuel as possible over a given distance.
-I've thought about reducing this problem to the well-studied ones, but I can't come up with anything that makes sense. I don't even know how to approach this.
-Any pointers?

-REPLY [4 votes]: Let's represent your starting location as $0$ and the destination as $1000$.
-Let $f(x)$ be the greatest amount of fuel that can possibly be transported to or past $x$ miles from the starting point.
-For example, if you pick up $1000$ gallons, drive to $1$ (one mile), drop off $998$ gallons, drive back, repeat the trip to $1$ and back, and on the third trip out you drive to $100$ where you drop $801$ gallons of fuel, then you will have transported $2995$ gallons to point $1$: the $1996$ gallons you cached there and the $999$ gallons that were in the jeep when you passed $1$ on the third trip from $0$.
-You should be able to show that for $0 \leq x \leq 200$,
-$f(x) = 3000 - 5x$.
-The intuitive reason is that you will either have to pass every point between $0$ and $200$ five times (three times outbound and twice in the return direction) or have to abandon some fuel without using it; and the latter strategy will deliver less fuel to points beyond where you abandoned the fuel.
-The previous example that transported $2995$ gallons to or past point $1$ was therefore optimal, or at least was optimal up to $1$.
-It follows that only $2000$ gallons can reach $200$ no matter where you leave your caches along the way.
-You should then be able to show that for
-$0 \leq y \leq \frac{1000}{3}$,
-$f(200 + y) = 2000 - 3y$.
-Moreover, you achieve this by delivering exactly $2000$ gallons of fuel
-to $200$, including the fuel in the jeep the last time you arrive
-at $200$ in the forward direction,
-then making sure you have $1000$ gallons in the jeep each time you
-drive forward from $200$.
-Finally, for $0 \leq z \leq 1000$,
-$f\left(200 + \frac{1000}{3} + z\right) = 1000 - z$.
-You achieve this by delivering exactly $1000$ gallons of fuel
-to $200 + \frac{1000}{3}$ and then fully loading the jeep with any
-fuel you have cached at that point and
-making just one trip forward.
-The answer is $f(1000) = 1000 - \left(800 - \frac{1000}{3}\right) = \frac{1600}{3} \approx 533.3$ gallons.<|endoftext|>
-TITLE: Christoffel symbols vanishing in normal coordinates
-QUESTION [7 upvotes]: Let $(M,g)$ be a Riemannian manifold, and let $(\varphi,U)$ be normal coordinates in $p\in M$. For every $v\in T_p M$, denote $\gamma_v :I_v \to M$ the maximal geodesic with initial point $p$ and initial velocity $v$. Since $U$ is a normal neighborhood of $p$, we have that $\gamma_v ^{-1} (U):=J_v $ is an open interval containing $0$. Now, in normal coordinates, for every $t \in J_v $ we have $\gamma_v (t) \equiv t(v^1 ,...,v^n )$, where $v^i$ are the components of $v$ with respect to the orthonormal basis of $T_p M$ which we used (together with the exp map) to define $\varphi$. So $\gamma _v $ must satisfy the geodesic equation $\ddot \gamma^k _v (t) + \dot \gamma^i _v (t) \dot \gamma^j _v (t) \Gamma^k _{ij} (\gamma_v (t))=0$ for every $t \in J_v$, and using the local expression of $\gamma _v $ and the symmetry of the Levi-Civita connection, we obtain $\Gamma _{ij} ^k (\gamma_v (t))=0$ for every $t\in J_v$. Since for every $q\in U$ there exists a $v\in T_p M$ and a $t\in J_v $ such that $\gamma_v (t)=q$, we have that $\Gamma ^k _{ij} \equiv 0$ in $U$.
-The previous reasoning must go wrong somewhere, because I know that not every Riemannian manifold is locally flat, but I can't find the mistake. Can you help me?

-REPLY [6 votes]: Let $q\neq p$ be such that $\gamma_v(t)=q$ for some $t$. In normal coordinates (using $\gamma_v (t) = t(v^1 ,...,v^n )$): $$\Gamma^i_{jk}v^jv^k=0$$ But, here, $v$ is fixed, so it can't be concluded that $\Gamma^i_{jk}=0$. But for $p$, it can be applied to every $\gamma_v(t)$ (every geodesic is such that $\gamma_{v'}(0)=p$), i.e., $\forall v$. Then take $v_i=\delta_{ij}$; using the equation above, $$\Gamma^i_{jj}|_p=0 $$ Now set $v_i=(\delta_{ij}+\delta_{ik})$; it follows (using $\Gamma^i_{jj}|_p=0 $) $$\Gamma^i_{(jk)}|_p=0 $$ The symmetric part (assuming $\nabla$ is a general affine connection). So we conclude the symmetric part at $p$ is zero, only at $p$.<|endoftext|>
-TITLE: Square and cubic roots in $\mathbb Q(\sqrt n)$
-QUESTION [7 upvotes]: Here is my question:

-Let $n$ be a squarefree positive integer, $m \ge 2$ an integer and $a+b \sqrt n \in\mathbb Q (\sqrt n).$ What (sufficient or necessary) conditions should $a$ and $b$ satisfy so that $a+b \sqrt n$ has an $m$-th root in $\mathbb Q (\sqrt n)$?

-Here is my attempt:
-I tried the case $m=2$. If $\sqrt{a+b \sqrt n} = c+d\sqrt n$ with $c,d \in \mathbb Q$ then
-$$ a=c^2+d^2n, \qquad b=2cd. $$ Assuming $b \neq 0$, I get $c^2 + n\left(\frac{b}{2c}\right)^2 = a$, and for instance $c = \pm \sqrt{\frac{a+\sqrt{a^2-nb^2}}{2}}$, so it is necessary that $\frac{a+\sqrt{a^2-nb^2}}{2}$ be a square in $\mathbb Q$ (and then $d$ is also rational).
-We may find better conditions than this one. But I don't know how to manage the cases $m \ge 3$, because the calculations become difficult. Is there some theoretical approach (e.g.
Galois theory) to treat this problem?
-Thank you!

-REPLY [2 votes]: Let me come back to your question for more practical purposes. In my former theoretical approach (I keep all my previous notations), I gave a necessary and sufficient condition for an element $\alpha\in K^*$ to be a global $m$-th power, but in practice this criterion works well only to give a negative answer, i.e. to show that $\alpha$ is not an $m$-th power, because in that case one needs only a finite number of trials and errors to find a prime $\mathcal L_v$ outside $S$ such that $\alpha$ is not a local $m$-th power in $K_v^{*}$. But a positive answer would require an infinity of checks, which is not very satisfying in practice.
-A « finite » criterion for a « positive » answer when $m$ = an odd prime $p$ (because we want to avoid some « silly special cases », @franz lemmermeyer dixit) can be derived from an « interesting » (Tate's own words) local-global principle in chapter 7 of Cassels-Fröhlich's book (p. 184, remark 9.3). A particular case is the following: let $E$ be a number field containing the group $\mu_p$ of $p$-th roots of unity; pick an $\alpha \in E$ and let $S$ be a finite set of primes of $E$ containing (i) all archimedean primes, (ii) all primes dividing $p$ and $\alpha$, (iii) all representatives of a system of generators for the ideal class group of $E$. Then any $S$-unit of $E$ which is a local $p$-th power at all primes inside $S$ is a global $p$-th power.
-In our initial problem, $\alpha$ is in $K$, which does not necessarily contain $\mu_p$. Put $E = K(\mu_p)$, $G = Gal(E/K)$, and try to relate $(K^*)^p$ and $(E^*)^p$. Taking $G$-cohomology of the exact sequence 1 --> $\mu_p$ --> $E^*$ --> $(E^*)^p$ --> 1, we get 1 --> $\mu_p^{G}$ --> $K^*$ --> $K^*\cap (E^*)^p$ --> $H^1(G, \mu_p)$ ... If $K$ does not contain $\mu_p$, $G$ has order prime to $p$ and $H^1(G, \mu_p)$ is trivial. In any case we get $(K^*)^p \cong K^*\cap (E^*)^p$. Summarizing, Tate's remark gives us a finite criterion for a positive answer (in the above sense) when $m = p$.
-We may suspect that with the general local-global principle, we are going too far beyond the simple case of a quadratic field, for which the solution should be much less elaborate. In the case of $\mathbf Q$, the solution is immediate because of the factoriality of $\mathbf Z$, so the natural idea is to replace factoriality by the uniqueness of decomposition into prime ideals in a Dedekind domain. A necessary condition, as suggested by @Qiaochu Yuan, is obtained by taking norms down to $\mathbf Q$. But for a sufficient condition, it seems that we are definitely blocked by the units of norm 1 in the totally real case. This is rather irritating.<|endoftext|>
-TITLE: What does $\textstyle y \in \Re^{100}$ mean?
-QUESTION [8 upvotes]: Just reading online and came across this:
-$\textstyle y \in \Re^{100}$
-I am guessing it's something like "y is an element of [something about real numbers]". Can anyone help me out?

-REPLY [2 votes]: It is just the letter $R$ set in "Fraktur", a German typeface, giving $\mathfrak{R}$ instead of the usual "blackboard bold" $\mathbb{R}$ or plain bold $\mathbf{R}$.
-A frequent use of $R$ for a set is the set of real numbers.
-And the given expression is then likely the $100$-fold Cartesian product of the set $R$, i.e. the set of $100$-tuples with components from $R$.<|endoftext|>
-TITLE: Is $\sum_{n \ge 1}{\frac{p_n}{n!}}$ irrational?
-QUESTION [11 upvotes]: Is $\sum_{n \ge 1}{\frac{p_n}{n!}}$ irrational, where $p_n$ is the $n^{\text{th}}$ prime number?
-This question is spurred by the comment thread on this question where I presented a rough idea of a proof similar to the well-known proof that $e$ is irrational. I will try to post my idea as a self-answer; other attempts are welcome too, of course.
-EDIT: Is there a way to prove this using the prime number theorem only?

-REPLY [13 votes]: We know from Ingham's 1937 result that $p_{n+1}-p_n = O(p_n^{0.7})$, so for sufficiently large $n$:
-$p_{n+1} - p_n \lt p_n^{0.8} \lt \frac{n}{3} \lt \frac{n}{3} + \frac{p_{n+1}}{n+1}$
-where the middle inequality is a consequence of the prime number theorem. We can rewrite this as:
-$\frac{p_{n+1}}{n+1} - \frac{p_n}{n} \lt \frac{1}{3}$
-From this we can conclude that for infinitely many $n$, the fractional part of $\frac{p_n}{n}$ is less than $\frac{1}{2}$, since it is unbounded (again by the prime number theorem) and it can't jump by more than $\frac{1}{3}$ in one step.
-Next, suppose $\sum_{i \ge 1}{\frac{p_i}{i!}} = \frac{a}{b}$. Clearly, $(n-1)! \cdot \sum_{1 \le i \lt n}{\frac{p_i}{i!}}$ is an integer, and if we require $n \gt b$, then $(n-1)! \cdot \sum_{n \le i}{\frac{p_i}{i!}}$ must also be an integer. But as shown above, $n$ can be chosen so that the first term of the latter sum, $\frac{p_n}{n}$, has a fractional part less than $\frac{1}{2}$; and the sum of the following terms $\frac{p_{n+1}}{n \cdot (n+1)} + \frac{p_{n+2}}{n \cdot (n+1) \cdot (n+2)} + \dots$ will also be less than $\frac{1}{2}$ provided $n$ is large enough (again using PNT). This contradicts our assumption that the number is rational.<|endoftext|>
-TITLE: Closed form of $\sum\frac{1}{k}$ where $k$ has only factors of $2,3$
-QUESTION [5 upvotes]: Consider the set $A$ containing all positive integers with no prime
 factor larger than $3$, and define $B$ as
-$$
-B= \sum_{k\in A} \frac{1}{k}
-$$
-Thus, the first few terms of the sum are:
-$$
-\frac{1}{1} +\frac{1}{2} +\frac{1}{3} +\frac{1}{4} +\frac{1}{6} +\frac{1}{8} +\frac{1}{9} +\frac{1}{12} +\frac{1}{16} +\frac{1}{18} + \cdots
-$$
-a) Write a closed-form expression for $X$ that makes the equation below true. In other words, what expression should $X$ be so that the following equation is true, i.e. writing $B$ in terms of $X$:
-$$
-B = \sum_{i=0}^\infty \sum_{j=0}^\infty X
-$$
-b) Write a closed-form expression for $B$.

-REPLY [2 votes]: In general, you use the same logic used for the Euler product. Notice that your sum becomes $$(1+\sum\limits_{k=1}^{\infty}\frac{1}{2^k})(1+\sum\limits_{k=1}^{\infty}\frac{1}{3^k})$$ (so in part a) one may take $X=\frac{1}{2^i 3^j}$), since you have all powers of $2$ and all powers of $3$, and when you multiply them out you get each admissible denominator exactly once: prime factorization is unique, so this is the same series as required. But this is
-$$(1+\sum\limits_{k=1}^{\infty}\frac{1}{2^k})(1+\sum\limits_{k=1}^{\infty}\frac{1}{3^k})=\frac{1}{1-\frac{1}{2}}\frac{1}{1-\frac{1}{3}}=3$$
-If you ask the same question in other cases, where the primes involved are $p_{1},p_{2},\ldots,p_{m}$, the answer is
-$$\prod_{k=1}^{m}\frac{p_{k}}{p_{k}-1}$$<|endoftext|>
-TITLE: If squaring a number means multiplying that number with itself then shouldn't taking square root of a number mean to divide a number by itself?
-QUESTION [119 upvotes]: If squaring a number means multiplying that number with itself then shouldn't taking the square root of a number mean dividing the number by itself?
-For example the square of $2$ is $2^2=2 \cdot 2=4$.
-But the square root of $2$ is not $\frac{2}{2}=1$.

-REPLY [2 votes]: Assuming $x > 0$:
-Algebraically:
-$$
-\begin{matrix}
-x \cdot x = x^2 & \rightarrow & \sqrt{x \cdot x} = \sqrt{x^2} & \rightarrow & \sqrt{x} \cdot \sqrt{x} = x \\
-\downarrow & & & & \downarrow \\
-x^2 \div x = x & \rightarrow & \sqrt{x^2 \div x} = \sqrt{x} & \rightarrow & x \div \sqrt{x} = \sqrt{x}
-\end{matrix}
-$$
-Visually (expanding on dkeck's answer):

-Graphically:
-Another way of wording the question is:
-"If the curve for $y=x^2$ intersects with the line for $y = x \cdot z$ at $x=z$, then shouldn't the curve for $y=\sqrt{x}$ intersect with the line for $y=x \div z$ at $x=z$?"
-See for yourself, where $z=2$:

-$y = \sqrt{x}$ intersects with $y = x \div z$ at $x = z^2$, not at $x = z$. $y = \sqrt{x}$ intersects with $y = x \div \sqrt{z}$ at $x = z$.
-The question comes from the very subtle logical error of confusing the function $\lambda x .\; x \cdot x$ with the function $\lambda a .\; a \cdot x$. The above graph calls attention to the difference between these functions.
-If squaring a number meant multiplying that number by $z$, and square root were defined as the inverse of square, then yes, taking the square root of a number would mean dividing it by $z$.<|endoftext|>
-TITLE: Can every set be expressed as the union of a chain of sets of lesser cardinality?
-QUESTION [5 upvotes]: If a set $S$ has countably many elements $\{x_n\}$, it can be expressed as a union of a chain of finite sets
-$$ \{x_1\} \subset \{x_1,x_2\} \subset \cdots $$
-But what about a set of arbitrary cardinality $S$? Can we express it as a union:
-$$S=\bigcup_{i\in I}S_i$$
-where $I$ is a totally ordered set of some cardinality and each $S_i$ has cardinality less than $|S|$?

-REPLY [6 votes]: Yes, assuming the axiom of choice. Suppose that $S$ is infinite. Then $|S|=\kappa$ for some infinite cardinal $\kappa$ so there is a bijection $f:\kappa\to S$. For each $\alpha<\kappa$ let $S_\alpha=f[\alpha]=\{f(\xi):\xi<\alpha\}$; since $\kappa=\bigcup_{\alpha<\kappa}\alpha$, clearly $S=\bigcup_{\alpha<\kappa}S_\alpha$, and $S_\alpha\subseteq S_\beta$ whenever $\alpha\le\beta<\kappa$.
-Without the axiom of choice it need not be possible. In particular, it cannot be done with a sequence of proper subsets if $S$ is an amorphous set. (As Asaf points out in the comments, it's possible with a two-element sequence if one is $S\setminus\{p\}$ for some $p\in S$, and the other is $S$ itself.)<|endoftext|>
-TITLE: A circle has the same center as an ellipse and passes through the foci $F_1$ and $F_2$ of the ellipse, two curves intersect in $4$ points.
-QUESTION [5 upvotes]: A circle has the same center as an ellipse and passes through the foci $F_1$ and $F_2$ of the ellipse, such that the two curves intersect in $4$ points. Let $P$ be any one of their points of intersection. If the major axis of the ellipse is $17$ and the area of the triangle $PF_1F_2$ is $30$, then find the distance between the foci.

-Let the center of the ellipse and the circle be $(0,0)$.
-We are given $2a=$ length of major axis $=17$.
-Let the coordinates of the foci be $F_1(c,0)$ and $F_2(-c,0)$.
-We need to find $2c$.
-Area of $PF_1F_2=\frac{1}{2}\times 2c\times{}$ the perpendicular distance from $P$ to the major axis of the ellipse.
-I do not know how to solve it further.

-REPLY [8 votes]: We may suppose that
-$$\text{the ellipse$\ :\ \frac{x^2}{a^2}+\frac{y^2}{b^2}=1,\quad a\gt b\gt 0$}$$
-$$\text{the circle$\ :\ x^2+y^2=a^2-b^2$}$$
-As you wrote, we have
-$$2a=17\quad\Rightarrow \quad a=\frac{17}{2}$$
-Since
-$$\frac{a^2-b^2-y^2}{a^2}+\frac{y^2}{b^2}=1\quad\Rightarrow\quad |y|=\frac{b^2}{\sqrt{a^2-b^2}}$$
-we have
-$$30=\frac 12\times 2\sqrt{a^2-b^2}\times \frac{b^2}{\sqrt{a^2-b^2}}\quad\Rightarrow\quad b=\sqrt{30}.$$
-Thus, the answer is
-$$2\sqrt{a^2-b^2}=2\sqrt{\left(\frac{17}{2}\right)^2-30}=\color{red}{13}.$$<|endoftext|>
-TITLE: How many sequences of rational numbers converging to 1 are there?
-QUESTION [19 upvotes]: I have a problem with this exercise:

-How many sequences of rational numbers converging to 1 are there?

-I know that the number of all sequences of rational numbers is $\mathfrak{c}$. But here we count only sequences converging to 1, so the total number is at most that. But is it still going to be $\mathfrak{c}$, or maybe $\aleph _0$?

-REPLY [4 votes]: We have that non-repeating (injective) sequences of elements in $\{\,1+1/n:n\in\mathbb{N}\,\}$ form a continuum, and all of them have limit $1$, so our set is at least a continuum. Since also all rational sequences form a continuum, our set is also at most a continuum.<|endoftext|>
-TITLE: Conditions implying uniform integrability
-QUESTION [6 upvotes]: We say that a family of random variables $X_n, n \geq 1$ is uniformly integrable if
-$$\lim_{M \rightarrow \infty} \sup_{n} E[|X_n| 1_{|X_n|>M}]=0.$$
-I am struggling with some proofs and could use some help. Are my ideas correct? How does one conclude correctly? Is there an easier proof?

-We want to show that $\sup_n ||X_n||_p < \infty$ for some $p> 1$ implies uniform integrability.

-$$\sup_{n} E[|X_n| 1_{|X_n|>M} ] \leq \sup_{n} E[|X_n|] \leq \sup_{n} E[|X_n|^p],$$
-using Jensen.
-Since $\sup_n ||X_n||_p < \infty$, we also have $\sup_n E[|X_n|^p] < \infty$, and the claim follows by letting $M \rightarrow \infty$.

-Now we want to show that a finite family of random variables in $L^1$ is always uniformly integrable.

-Let $n \in N$ for some finite set $N$. Define $$M_0:=\max_{n \in N} |X_n|.$$ Then we have $$E[|X_n| 1_{|X_n|>M_0}]= E[|X_n|\cdot 0 ] = 0,$$
-and hence we can take the $\sup$ to get for all $M \geq M_0$ that
-$$\sup_n E[|X_n| 1_{|X_n|> M}] = 0.$$
-The result follows by taking the limit $M \rightarrow \infty$.
-Do we need something like monotone or dominated convergence here? Is this proof valid? If not, how would one prove it? Is there a more elegant way of proving it?

-When $E[\sup_n |X_n|] < \infty$, then the sequence is uniformly integrable.

-How can one interchange the order of $\sup$ and expectation? I have no idea!

-REPLY [7 votes]: $$\sup_{n} E[|X_n| 1_{|X_n|>M} ] \leq \sup_{n} E[|X_n|] \leq \sup_{n} E[|X_n|^p],$$
- using Jensen.

-You didn't apply Jensen's inequality correctly; it should read
-$$\sup_{n} E[|X_n| 1_{|X_n|>M} ] \leq \sup_{n} E[|X_n|] \leq \sup_{n} \left( E[|X_n|^p] \right)^{\color{red}{\frac{1}{p}}}.$$

-[...] and the claim follows by letting $M \rightarrow \infty$.

-No, it's not that simple. Letting $M \to \infty$ you get
-$$\lim_{M \to \infty} \sup_n \mathbb{E}(|X_n| 1_{|X_n|>M}) \leq \sup_{n \in \mathbb{N}} \|X_n\|_p,$$
-but that's not good enough; you have to show that the limit equals $0$. Hint for this problem: Use Markov's inequality, i.e.
-$$\mathbb{E}(|X_n| 1_{\{|X_n|>M\}}) \leq \frac{1}{M^{p-1}} \mathbb{E}(|X_n|^p 1_{|X_n|>M}) \leq \frac{1}{M^{p-1}} \mathbb{E}(|X_n|^p).$$

-Define $$M_0:=\max_{n \in N} |X_n|.$$ Then we have $$E[|X_n| 1_{|X_n|>M_0}]= E[|X_n|\cdot 0 ] = 0,$$

-No, this doesn't work, because $M_0$ depends on $\omega$. Unfortunately, this means that your approach fails. Hint for this one: Using e.g. the dominated convergence theorem, check first that the set $\{f\}$ consisting of a single integrable random variable is uniformly integrable. Then extend the approach to finitely many integrable random variables.

-When $E[\sup_n |X_n|] < \infty$, then the sequence is uniformly integrable.

-Hint: By assumption, $Y := \sup_n |X_n|$ is integrable and $|X_n| \leq Y$ for all $n \in \mathbb{N}$. Consequently,
-$$\mathbb{E}(|X_n| 1_{|X_n|>M}) \leq \mathbb{E}(|Y| 1_{|Y|>M}) \qquad \text{for all $M>0$ and $n \in \mathbb{N}$.}$$
-Now use the fact that $\{Y\}$ is uniformly integrable (see question nr. 2).<|endoftext|>
-TITLE: On the order of natural functions {f:N→N}
-QUESTION [5 upvotes]: Define a partial order on natural-valued functions (or sequences, depends on how you see it): $fx\rightarrow f(n)2^{\aleph_0}$ (so the number does change, quite a lot). To see why, let $\{f_i: i< 2^{\aleph_0}\}$ be any family of continuum-many maps from $\mathbb{R}$ to $\mathbb{R}$. Now build $g$ such that $g(x)>f_i(x)$ for infinitely many $i$, as follows: write $\mathbb{R}$ as a partition of continuum-many infinite sets $A_i$, and let $g(x)=f_i(x)+1$ for each $x\in A_i$. Certainly $g$ doesn't dominate $f_i$ - this is a difference from the argument that the original dominating number is uncountable! - but also clearly $f_i$ doesn't dominate $g$.
-We can use similar arguments on every other notion of domination I can think of. However, the relationships between the exact values of the generalized bounding numbers are not clear to me. For instance:

-Let $\le_0$ and $\le_1$ be the orderings on $\mathbb{R}^\mathbb{R}$ defined as follows: $f\le_0g$ if $f(x)\le g(x)$ for all but finitely many $x$, and $f\le_1g$ if $f(x)\le g(x)$ for all but bounded-from-above-many $x$. Is it consistent with ZFC that the corresponding dominating numbers are different?

-Note the switch from "$<$" to "$\le$", which I think is more natural for a variety of reasons.
-Such "higher" cardinal characteristics have been studied a little, although (as far as I can tell) not extensively; see e.g. http://www.sciencedirect.com/science/article/pii/016800729500003Y.<|endoftext|>
-TITLE: Why is $e$ close to $H_8$, closer to $H_8\left(1+\frac{1}{80^2}\right)$ and even closer to $\gamma+\log\left(\frac{17}{2}\right) +\frac{1}{10^3}$?
-QUESTION [7 upvotes]: The eighth harmonic number happens to be close to $e$.
-$$e\approx2.71(8)$$
-$$H_8=\sum_{k=1}^8 \frac{1}{k}=\frac{761}{280}\approx2.71(7)$$
-This leads to the almost-integer
-$$\frac{e}{H_8}\approx1.0001562$$
-Some improvement may be obtained as follows.
-$$e=H_8\left(1+\frac{1}{a}\right)$$
-$$a\approx6399.69\approx80^2$$
-Therefore
-$$e\approx H_8\left(1+\frac{1}{80^2}\right)\approx 2.7182818(0)$$
-http://mathworld.wolfram.com/eApproximations.html
-Equivalently
-$$ \frac{e}{H_8\left(1+\frac{1}{80^2}\right)} \approx 1.00000000751$$
-Q: How can this approximation be obtained from a series?
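-For reference, a quick numerical check of the two near-identities above (a sketch using the mpmath library; any arbitrary-precision tool works):
-
-    from mpmath import mp, e, mpf
-    mp.dps = 20
-    H8 = sum(mpf(1)/k for k in range(1, 9))   # 761/280
-    print(e / H8)                             # 1.0001562...
-    print(e / (H8 * (1 + mpf(1)/80**2)))      # 1.0000000075...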
-EDIT: After applying the approximation $$H_n\approx \log(2n+1)$$ (https://math.stackexchange.com/a/1602945/134791)
-to $$e \approx H_8$$
-the following is obtained:
-$$ e - \gamma-\log\left(\frac{17}{2}\right) \approx 0.0010000000612416$$
-$$ e \approx \gamma+\log\left(\frac{17}{2}\right) +\frac{1}{10^3} +6.12416·10^{-11}$$

-REPLY [2 votes]: Quesly Daniel obtains
-$$e\approx \frac{19}{7}$$
-from
-$$\int_0^1 x^2(1-x)^2e^{-x}dx = 14-38e^{-1} \approx 0$$
-(see https://www.researchgate.net/publication/269707353_Pancake_Functions)
-Similarly,
-$$\int_0^1 x^2(1-x)^2e^{x}dx = 14e-38 \approx 0$$
-The approximation may be refined using the expansion
-$$e^x=\sum_{k=0}^\infty \frac{x^k}{k!} = 1+x+\frac{x^2}{2}+\frac{x^3}{6}+...$$
-so
-$$\frac{1}{14} \int_0^1 x^2(1-x)^2(e^x-1)dx =e-\frac{163}{60}\approx 0$$
-gives the truncation of the series to six terms
-$$e\approx\frac{163}{60}=\sum_{k=0}^{5}\frac{1}{k!}$$
-using the largest Heegner number $163$, and

-$$\frac{1}{14} \int_0^1 x^2(1-x)^2(e^x-1-x)dx = e-\frac{761}{280}=e-H_8\approx 0$$

-gives
-$$e\approx H_8$$
-Similar integrals relate $e$ to its first four convergents $2$, $3$, $\frac{8}{3}$ and $\frac{11}{4}$:
-$$\int_0^1 (1-x)e^x dx = e-2$$
-$$\int_0^1 x(1-x)e^x dx = 3-e$$
-$$\frac{1}{3}\int_0^1 x^2(1-x)e^x dx=e-\frac{8}{3}$$
-$$\frac{1}{4}\int_0^1 x(1-x)^2e^x dx=\frac{11}{4}-e$$
-These four formulas are particular cases of Lemma 1 by Henry Cohn in A Short Proof of the Simple Continued Fraction Expansion of e.<|endoftext|>
-TITLE: In how many ways $A$ speaks before $B$ and $B$ speaks before $C$
-QUESTION [5 upvotes]: $10$ persons have to give a speech, among which three are $A$, $B$ and $C$. In how many ways can they give their speeches so that $A$ speaks before $B$ and $B$ speaks before $C$?
-I have taken the fixed speech order of $A$, $B$, $C$ as $$*A*B*C*$$
-where the stars represent where the remaining $7$ persons can be accommodated.
-That means I have to find the number of non-negative integral solutions of $$x_1+x_2+x_3+x_4=7.$$
-But I have no idea how the arrangements can be done within a particular star.

-REPLY [9 votes]: Your idea will work. Using Stars and Bars we find that there are $\binom{10}{7}$ ways to decide the positions to be occupied by "others." Once these positions are decided, they can be filled in $7!$ ways. We get a total of $\binom{10}{7}7!$, which can be simplified greatly.
-There are many other approaches. Maybe the shortest is to note that the $10$ people can be permuted in $10!$ ways. By symmetry, the fraction of these in which A, B, C are in the right order is $\frac{1}{3!}$, for a total of $\dfrac{10!}{3!}$.

-REPLY [5 votes]: The simplest way is to think of all the ways of ordering the speeches. By symmetry, $\frac 1{3!}$ of these will have A before B before C because you can group them in batches that only reorder A, B, C. I am assuming the other speakers are distinguishable.

-REPLY [2 votes]: I think you're basically done. If we have 10 spaces for each of 10 speakers, and we place the other 7 speakers first, then we will be left with 3 spaces no matter how we arrange the other 7. And in these 3 spaces it will be no issue to simply place them in the order A, B, C. Therefore the real issue of the problem becomes arranging the other 7 speakers. They can be arranged in $10P7$ ways, and so after this the other 3 speakers can be arranged in the order you wish.
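-A brute-force check of this count (a quick sketch; it iterates over all $10!$ orderings, which takes a few seconds):
-
-    from itertools import permutations
-    from math import factorial
-
-    speakers = range(10)  # speakers 0, 1, 2 play the roles of A, B, C
-    count = sum(1 for order in permutations(speakers)
-                if order.index(0) < order.index(1) < order.index(2))
-    print(count, factorial(10) // factorial(3))  # both print 604800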
And so $10P7$ is the answer.<|endoftext|>
-TITLE: Example of a ring such that $R^2\simeq R^3$, but $R\not\simeq R^2$ (as $R$-modules)
-QUESTION [13 upvotes]: The usual example of a unitary ring without the IBN property is the ring of column finite matrices, and in this case we have $R\simeq R^2$ as (left) $R$-modules. (See also here.) In particular, we have $R^2\simeq R^3$.

-I wonder if there is an example of a unitary ring (without the IBN property) such that $R^2\simeq R^3$, but $R\not\simeq R^2$ (as $R$-modules).

-REPLY [15 votes]: The only other examples of non IBN rings that I am aware of are built using Leavitt path algebras, and they do exactly what you want. It is possible to specify positive integers $n<m$ and construct a Leavitt algebra of module type $(n,m)$, meaning that $R^n\simeq R^m$ and that this is the first isomorphism occurring among the free modules; taking $(n,m)=(2,3)$ gives a ring with $R^2\simeq R^3$ but $R\not\simeq R^2$.<|endoftext|>
-TITLE: Trace operator and $W^{1,p}_0$
-QUESTION [5 upvotes]: Let $W^{1,p}$ be the Sobolev space of $L^p$ functions with $L^p$ first derivatives. Let $W^{1,p}_0$ be the closure of the test functions in $W^{1,p}$. I am not explicitly writing the domain of the functions because I expect it won't matter, but call it $\Omega$ if you wish. What I will need is $\Gamma$ to be the boundary. I know we can define an operator $\gamma_0:W^{1,p}\to L^p(\Gamma)$ such that it is a bounded linear operator and its restriction to smooth functions is the restriction operator $u\mapsto u|_\Gamma$. I have been told the following holds:
-$$W^{1,p}_0=\ker\gamma_0.$$
-I can easily see $\subseteq$: if $u\in W^{1,p}_0$, then by definition we have test functions $u_n$ converging to $u$ in $W^{1,p}$, but test functions belong to the kernel and $\gamma_0$ is continuous, hence:
-$$\gamma_0u=\lim_{n\to\infty}\gamma_0u_n=0.$$
-But what about the converse? Assume $\gamma_0u=0$. How do I prove this implies $u$ is the limit of test functions? I wasn't able to find that on the internet, and the teacher decided to omit the proof, so here I am asking for a proof. How do I proceed? I know that I can find smooth functions $u_n\to u$ in $W^{1,p}$ (e.g. convolutions with mollifiers, which are not necessarily test functions; I mean the convolutions, the mollifiers are) since smooth functions on $\overline\Omega$ are dense in $W^{1,p}$, but how do I show they are (at least eventually) with zero trace? I can only see that their traces converge to $0$ in $L^p(\Gamma)$ by continuity of $\gamma_0$…

-REPLY [2 votes]: In Evans' book (Chapter 5) we can find a proof for bounded $\Omega$ with $C^1$ boundary. (The author says to omit it on a first reading.) Here it is in full:

-THEOREM 2 (Trace-zero functions in $W^{1,p}$). Assume $U$ is bounded and $\partial U$ is $C^1$. Suppose furthermore that $u\in W^{1,p}(U)$. Then $$u\in W^{1,p}_0(U)\ \ \ \textit{if and only if}\ \ \ Tu=0\ \, \textit{on}\,\ \partial U.\tag4$$
-Proof$^*$.

-Suppose first $u\in W_0^{1,p}(U).$ Then by definition there exist functions $u_m\in C_c^\infty(U)$ such that $$u_m\to u\quad{\rm in}\ W^{1,p}(U).$$ As $Tu_m=0$ on $\partial U\ (m=1,...)$ and $T:W^{1,p}(U)\to L^p (\partial U)$ is a bounded linear operator, we deduce $Tu=0$ on $\partial U$.

-The converse statement is more difficult. Let us assume that $$\tag{5} Tu=0\quad{\rm on}\ \partial U.$$ Using partitions of unity and flattening out $\partial U$ as usual, we may as well assume $$\begin{cases}u\in W^{1,p}(\Bbb R^n _+),\ \ u \ {\rm has\, compact\, support\, in\ }\overline{{ \Bbb R}}^n_+, \\ \qquad\ Tu=0\ {\rm on }\ \partial \Bbb R^n_+=\Bbb R^{n-1}.
\end{cases}\tag6$$ Then since $Tu=0$ on $\Bbb R^{n-1}$, there exist functions $u_m\in C^1(\overline{ \Bbb R}^n_+)$ such that $$u_m\to u\ \ \ \ {\rm in} \ \, W^{1,p}(\Bbb R^n_+)\tag7$$ and $$Tu_m=u_m|_{\Bbb R^{n-1}}\to0\ \ \ \ {\rm in}\ L^p(\Bbb R^{n-1}).\tag8$$ Now if $x'\in\Bbb R^{n-1}$, $x_n\geq0$, we have $$|u_m(x',x_n)|\leq|u_m(x',0)|+\int_0^{x_n}|u_{m,x_n}(x',t)|\, dt.$$ Thus $$\begin{aligned}&\int_{\Bbb R^{n-1}}|u_m(x',x_n)|^p \, dx' \\ \leq &\; C\left(\int_{\Bbb R^{n -1}}|u_m(x',0)|^p\, dx' + x_n^{p-1}\int_0^{x_n}\int_{\Bbb R^{n-1}}|Du_m(x',t)|^p dx'\, dt\right)\end{aligned}.$$ Letting $m\to\infty$ and recalling (7), (8), we deduce: $$\int_{\Bbb R^{n-1}}|u(x',x_n)|^p dx'\leq Cx^{p-1}_n \int_0^{x_n}\int_{\Bbb R^{n-1}}|Du|^p dx'dt\tag9$$ for a.e. $x_n>0$. - -Next let $\zeta\in C^\infty(\Bbb R)$ satisfy $$\zeta\equiv1\ {\rm on}\ [0,1],\ \zeta\equiv0\ {\rm on}\ {\Bbb R}-[0,2],\ \ \ 0\leq \zeta\leq1,$$ and write $$\begin{cases}\zeta_m(x):=\zeta(mx_n)\ \ \ \ (x\in\Bbb R^n_+) \\ w_m:=u(x)(1-\zeta_m).\end{cases}$$ Then $$\begin{cases}w_{m,x_n}=u_{x_n}(1-\zeta_m)-mu\zeta'\\ D_{x'}w_m=D_{x'}u(1-\zeta_m).\end{cases}$$ Consequently $$\tag{10}\begin{align}\int_{\Bbb R^n_+}|Dw_m-Du|^p \, dx&\leq C\int_{\Bbb R^n_+}|\zeta_m|^p|Du|^p\, dx \\ & \qquad\ +Cm^p\int_0^{2/m}\int_{\Bbb R^{n-1}}|u|^p\, dx'dt\\ &=:A+B. \end{align}$$ Now $$\tag{11}A\to0\quad{\rm as}\ m\to\infty,$$ since $\zeta_m\neq0$ only if $0\leq x_n\leq 2/m$. To estimate the term $B$, we utilize inequality (9): $$\tag{12}\begin{align}B&\leq Cm^p\left(\int_0^{2/m}t^{p-1}dt\right)\left(\int_0^{2/m}\int_{\Bbb R^{n-1}}|Du|^p dx'dx_n\right)\\ &\leq C\int_0^{2/m}\int_{\Bbb R^{n-1}}|Du|^p dx'dx_n\to 0\quad{\rm as}\ m\to \infty.\end{align}$$ Employing (10)-(12), we deduce $Dw_m\to Du$ in $L^p(\Bbb R^n_+)$. Since clearly $w_m\to u$ in $L^p (\Bbb R^n_+)$, we conclude $$w_m\to u\quad{\rm in}\ W^{1,p}(\Bbb R^n _+).$$ But $w_m=0$ if $0 -TITLE: What is the line bundle on $\mathbb{P}^1_\mathbb{C}$ whose transition function is $e^z$ -QUESTION [5 upvotes]: Let $U_0,U_\infty$ be the two affine patches of $\mathbb{P}^1_\mathbb{C}$, neighborhoods of "0" and "$\infty$" respectively. Let $L$ be the line bundle on $\mathbb{P}^1_\mathbb{C}$ constructed by gluing the trivial bundles over $U_0$ and $U_\infty$ via the function $e^z$, which is a nowhere vanishing holomorphic function on $U_0\cap U_\infty$. -I've never taken complex geometry (my background is in algebraic geometry, and $e^z$ isn't an algebraic function), so my question is - does $L$ exist in the algebraic world? Which one is it? (For which $n\in\mathbb{Z}$ is $L\cong\mathcal{O}(n)$?) If it isn't any of them, what is the "right statement" to say that it doesn't exist? For example, is $e^z$ somehow not in $\mathcal{O}_{\mathbb{P}^1}(U_0\cap U_\infty)$? - -REPLY [2 votes]: Since $e^z$ extends as a non-zero function over $U_0$, you can change basis on $U_0$ so that your gluing function is now just $1$. This makes it clear that you have the trivial bundle. -(Incidentally, by GAGA, any holomorphic line bundle on $\mathbb P^1$ has to be algebraic, so what you wrote down had to be $\mathscr O(n)$ for some $n$.)<|endoftext|> -TITLE: How do you find the pair of witnesses in Big-O notation? -QUESTION [5 upvotes]: In this example my textbook provides: -$4n^2+21n+100$ is $O(n^2)$. -What I do not understand is that the book says the witnesses to this relationship are C = 8, K = 9. How did they come up with those numbers?
-The book kind of gives an answer to how they came up with C = 8: -Suppose $n \ge 9$. -$4n^2+21n+100 \le 4n^2+24n+100$ (Yes, the left is $21n$ and the right is $24n$) -$= 4(n^2 + 6n + 25)$ -$\le 8n^2$ -I have no idea how they got these numbers; could someone explain this to me? -I also understand that if there is one pair of witnesses to a relationship, there exists an infinite number of witnesses; therefore, what is another pair that would be valid for this relationship? Thank you in advance! - -REPLY [2 votes]: The idea here is that they try to get some inequality of the form: -$$4n^2+21n+100 \leq Cn^2$$ to prove that this function is $O(n^2)$. -The inequality they use: -$$4(n^2+6n+25) \leq 8n^2$$ is equivalent to: $$6n+25 \leq n^2$$ which is true for every $n \geq 9$. -It doesn't matter which $C$ you choose as long as the inequality is true from some point on, for $n \geq n_0$. -You could choose $C=1000$ to make things even simpler, as obviously: -$$4(n^2+6n+25) \leq 4(n^2+6n^2+25n^2)=128n^2 < 1000n^2$$ for every $n \geq 1$. -You could also choose $C=4+ \epsilon$ for some very small $\epsilon$, and the inequality would be: -$$24n+100 \leq \epsilon n^2$$ which is true from some point on but fails for small numbers. -They have thus chosen a bigger constant to make the inequality simpler to verify.<|endoftext|> -TITLE: Find the minimum roots of $f'(x)\cdot f'''(x)+(f''(x))^2 =0$ given certain conditions on $f(x)$. -QUESTION [5 upvotes]: Problem: -Let $f(x)$ be a thrice differentiable function satisfying: -$$|f(x) - f(4-x)| + |f(4-x)-f(4+x)| = 0, \quad \forall x \in \mathbb{R}$$ -If $f'(1)=0$, then find the minimum number of roots of $f'(x)\cdot f'''(x)+(f''(x))^2 =0$ for $x \in [0,6]$. -My attempt: -We know: $f(x)=f(4-x)$ and $f(4-x)=f(4+x)$. -So, $f(x)=f(x+4)$. That is, the period of the given function is $4$. -It can also be noted that the function is symmetric about $2$ and $4$. -I also know that the second equation is nothing but $\frac{d}{dx}(f'(x) \cdot f''(x))$. -I don't know how to proceed from here. - -REPLY [3 votes]: Since $f(x) = f(4-x) = f(4+x)$ we have $f'(x) = -f'(4-x) = f'(4+x)$ for all $x \in \mathbb{R}$. -Since $f'(1) = 0$, using the above identity for $x = 1$ gives us $f'(1) = f'(3) = f'(5) = 0$. -Also, using the above identity for $x = 2$ yields $f'(2) = -f'(2) = f'(6)$, so $f'(2) = f'(6) = 0$. -Similarly, for $x = 0$, we get $f'(0) = -f'(4) = f'(4)$, so $f'(0) = f'(4) = 0$. -By Rolle's theorem, for $n = 1,2,3,4,5,6$, there exists an $x_n \in (n-1,n)$ such that $f''(x_n) = 0$. -Thus, the function $g(x) = f'(x)f''(x)$ has zeros at $x = 0,x_1,1,x_2,2,x_3,3,x_4,4,x_5,5,x_6,6$. -By using Rolle's theorem again, $g'(x) = f'(x)f'''(x)+f''(x)^2$ has at least $12$ zeros in $[0,6]$. - -The function $f(x) = \cos\pi x$ satisfies $f(x) = f(4-x) = f(4+x)$ for all $x \in \mathbb{R}$ and $f'(1) = 0$. -For this function, $f'(x)f'''(x)+f''(x)^2 = (-\pi \sin \pi x)(\pi^3\sin\pi x)+(-\pi^2\cos\pi x)^2$ $= \pi^4(\cos^2\pi x - \sin^2\pi x) = \pi^4\cos 2\pi x$, which has $12$ zeros on $[0,6]$. - -Therefore, the minimum number of zeros of $f'(x)f'''(x)+f''(x)^2$ on $[0,6]$ is $12$.<|endoftext|> -TITLE: What is an example of a SVM kernel, where one implicitly uses an infinite-dimensional space? -QUESTION [6 upvotes]: Reading the Wikipedia article about SVMs, I noticed - -More formally, a support vector machine constructs a hyperplane or set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks.
- -I continued with "A Tutorial on Support Vector Machines for Pattern -Recognition" by Christopher JC Burges and stumbled over the following (please note that $x \cdot y$ is the dot product): - -Now suppose we first mapped the data to - some other (possibly infinite dimensional) Euclidean space $\mathcal{H}$, using a mapping which we - will call $\Phi$: - $$\Phi : \mathbb{R}^d \rightarrow \mathcal{H}$$ - Then of course the training algorithm would only depend on the data through dot products - in $\mathcal{H}$, i.e. on functions of the form $\Phi(\mathbf{x}_i)\cdot \Phi(\mathbf{x}_j)$. Now if there were a "kernel function" $K$ - such that $K(\mathbf{x}_i, \mathbf{x}_j) = \Phi(\mathbf{x}_i)\cdot\Phi(\mathbf{x}_j)$, we would only need to use $K$ in the training algorithm, - and would never need to explicitly even know what $\Phi$ is. One example is - $$K(\mathbf{x}_i, \mathbf{x}_j ) = e^{- \| \mathbf{x}_i - \mathbf{x}_j\|^2 / 2 \sigma^2}$$ - In this particular example, $\mathcal{H}$ is infinite dimensional, so it would not be very easy to work - with $\Phi$ explicitly. - -I have three questions which are closely related to this. I am happy with any answer which answers any of my questions: - -Why would $\mathcal{H}$ be infinite-dimensional in this case? -What is $\Phi$ in this case? -In other sources I read that the kernel function has to be positive definite. Why? - -REPLY [7 votes]: To understand the first two questions, let's consider $x, y \in \mathbb{R}^2, x=(x_1,x_2), y=(y_1, y_2)$ and examine the polynomial kernel of degree 2: -$$K(x,y)=(x^Ty)^2$$ -which can be rewritten as: -$$K(x,y) = (x_1y_1 + x_2y_2)^2 = x_1^2y_1^2 + 2x_1y_1x_2y_2 + x_2^2y_2^2$$ -We know that the kernel function is $K(x,y)=\Phi(x)^T\Phi(y)$, therefore we try to find a feature map $\Phi$ that will be equivalent to the above. Let -$$\Phi(x)=(x_1^2, \sqrt{2}x_1x_2, x_2^2)$$ -From this, we can see that $\Phi(x)^T\Phi(y) = x_1^2y_1^2 + 2x_1y_1x_2y_2 + x_2^2y_2^2$, which is the kernel function! -Notice that by using $\Phi$ we mapped the input vectors from $\mathbb{R}^2$ to $\mathbb{R}^3$, therefore when we compute $K(x,y)$, this mapping will be implicitly performed. -Now, going back to your example (the RBF kernel). Let $\gamma = \frac{1}{2\sigma^2}$ and let's assume $x \in \mathbb{R}^1$: -$$K(x_i, x_j) = e^{-\gamma||x_i - x_j||^2} = e^{-\gamma(x_i - x_j)^2} = e^{-\gamma x_i^2 + 2\gamma x_i x_j - \gamma x_j^2}$$ -Using the Taylor expansion of the exponential function for $e^{2\gamma x_i x_j}$ we can rewrite the above as: -$$ K(x_i, x_j) = e^{-\gamma x_i^2-\gamma x_j^2} \left(1 + \frac{2\gamma x_i x_j}{1!} + \frac{(2\gamma x_i x_j)^2}{2!} + \frac{(2\gamma x_i x_j)^3}{3!} + \ldots \right)$$ -$$ = e^{-\gamma x_i^2-\gamma x_j^2} \left(1 \cdot 1 + \sqrt{\frac{2\gamma}{1!}}x_i \cdot \sqrt{\frac{2\gamma}{1!}}x_j + \sqrt{\frac{(2\gamma)^2}{2!}}x_i^2 \cdot \sqrt{\frac{(2\gamma)^2}{2!}}x_j^2 + \sqrt{\frac{(2\gamma)^3}{3!}}x_i^3 \cdot \sqrt{\frac{(2\gamma)^3}{3!}}x_j^3 + \ldots \right) = \Phi(x_i)^T \Phi(x_j)$$ -And, explicitly, the feature map will be: -$$\Phi(x) = e^{-\gamma x^2} \left[1, \sqrt{\frac{2\gamma}{1!}}x,\sqrt{\frac{(2\gamma)^2}{2!}}x^2, \sqrt{\frac{(2\gamma)^3}{3!}}x^3, \ldots\right]$$ -This is an infinite-dimensional vector. As you said, in kernel methods we will compute inner products in feature space without explicitly having to define the mapping $\Phi$.
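-As a quick numerical sanity check (my addition, not part of the original answer; numpy assumed), one can compare the kernel value with the dot product of the truncated explicit feature map in one dimension:
-
-    import numpy as np
-    from math import factorial
-
-    GAMMA = 0.8  # gamma = 1 / (2 sigma^2), an arbitrary illustrative value
-
-    def rbf(x, y):
-        return np.exp(-GAMMA * (x - y) ** 2)
-
-    def phi(x, terms=30):
-        # truncated feature map: e^(-gamma x^2) * sqrt((2 gamma)^k / k!) * x^k
-        return np.array([np.exp(-GAMMA * x ** 2)
-                         * np.sqrt((2 * GAMMA) ** k / factorial(k)) * x ** k
-                         for k in range(terms)])
-
-    x, y = 0.7, -1.2
-    print(rbf(x, y))        # exact kernel value
-    print(phi(x) @ phi(y))  # truncated feature-space dot product, nearly equal
-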
Avoiding this explicit mapping is the famous "kernel trick".<|endoftext|> -TITLE: Durrett Example 1.9 - Pairwise independence does not imply mutual independence? -QUESTION [5 upvotes]: The example in question is from Rick Durrett's "Elementary Probability for Applications", and the setup is something like this: - -Let $A$ be the event "Alice and Betty have the same birthday", $B$ be the event "Betty and Carol have the same birthday", and $C$ be the event "Carol and Alice have the same birthday". - -Durrett goes on to demonstrate that each pair is independent, since for example, -$P(A \cap B) = P(A)P(B)$. -However, he concludes that $A, B$, and $C$ are not independent, since -$P(A \cap B \cap C) = \frac{1}{365^2} \neq \frac{1}{365^3} = P(A)P(B)P(C)$. -I understand the reasoning here, and that one can generally show that arbitrary events $X$ and $Y$ are not independent by showing that $P(X\cap Y) \neq P(X)P(Y)$. -I am a little new to probability, though, and don't understand why exactly $P(A \cap B \cap C) = \frac{1}{365^2}$. -My progress so far: -I do see why $P(A) = P(B) =P(C) = \frac{1}{365}$, and thus why $P(A)P(B)P(C) = \frac{1}{365^3}$. -It seems like the sample space $\Omega = \{ (a, b, c) \mid a,b,c \in [365] \}$ -- i.e., all of the possible triples of numbers from 1 to 365, where 1 denotes January 1st, 2 denotes January 2nd, etc. From that, I can conclude $|\Omega| = 365^3$, but I'm not sure where to go from here. -It seems like once a single birthday is chosen, the rest are completely determined if they're all equal to each other - is this a good direction to go in? - -REPLY [3 votes]: The event $A\cap B\cap C$ happens if and only if the event $A\cap B$ happens. And you know how to find $\Pr(A\cap B)$.<|endoftext|> -TITLE: Error on a complex analysis qualifying exam? -QUESTION [5 upvotes]: Let $f(z) = \sum_{n = 0}^\infty c_n z^n$ for $\left| z \right| < R$. -The problem as stated. - -For all $r < R$, $$\int_{\left\{ |z| = r \right\}} \left| f(z) \right|^2 \, dz = 2\pi \sum_{n=0}^\infty \left| c_n \right|^2 r^{2n}. $$ - -What I think the statement should be. - -For all $r < R$, $$\int_{\left\{ |z| = r \right\}} \frac{\left| f(z) \right|^2}{iz} \, dz = 2\pi \sum_{n=0}^\infty \left| c_n \right|^2 r^{2n}. $$ - -Basically, I think the professor forgot to account for the derivative $\gamma^\prime(t) = ire^{it}$ that enters into the integrand when we calculate the integral over $[0, 2\pi]$. -Question. Am I correct? -Thanks for your help. - -REPLY [3 votes]: As pointed out in the comments, the first formula is wrong since when $f(z)=1$ the integral is zero and not $2\pi$ as the formula suggests. -In addition, here is a proof that the second formula I suggested gives the correct result. -Let $\gamma(t) = re^{it}$. Then -\begin{align} \int_{\left\{ \left| z \right| = r \right\}} \frac{\left| f(z) \right|^2}{iz} dz &= \int_0^{2\pi} \left( \sum_{n=0}^\infty c_n r^n e^{i n t} \right) \left( \sum_{m = 0}^\infty \overline{c_m} r^m e^{-i m t} \right) dt \\ &= \sum_{k = 0}^\infty \sum_{n + m = k} c_n \overline{c_m} r^k \int_0^{2\pi}e^{i(n-m)t} dt \\ &= 2\pi \sum_{n = 0}^\infty \left| c_n \right|^2 r^{2n} \end{align} as claimed. The equalities are justified by the absolute (hence uniform on the circle) convergence of the power series within its radius of convergence, which allows interchanging summation and integration, together with the elementary fact that $\int_0^{2\pi}e^{i(n-m)t}\,dt$ vanishes unless $n=m$, when it equals $2\pi$.<|endoftext|> -TITLE: Are interior points ever limit points as well?
-QUESTION [10 upvotes]: From my understanding of limit points and interior points there is somewhat of an overlap, and a lot of the time interior points are also limit points. -For the reals, a neighborhood around a point must contain a point of the set other than the point itself for it to be a limit point. However, an interior point has a neighborhood completely contained in the set, so every neighborhood of it contains points of the set besides the point itself, which also makes it a limit point. -For example: $[0,1)$ in the reals. -From what I understand, the set $(0,1)$ is the set of interior points, - the set $[0,1]$ is the set of all limit points, and the set $(0,1)$ is the set of all points which are both interior and limit points. -Is this correct, or are interior points always not limit points for some reason? - -REPLY [5 votes]: A point $x$ of $A$ can be of one of two mutually exclusive types: a limit point of $A$ or an isolated point of $A$. -If the latter, it means that there exists some open $O$ in $X$ such that $\{x\} = O \cap A$. The negation of this is exactly that every open set $O$ that contains $x$ always intersects points of $A$ unequal to $x$ as well, and this means exactly that $x$ is a limit point of $A$. -E.g. $A = (0,1) \cup \{2,3\}$ (usual topology of the reals) has two isolated points $2$ and $3$ (which are not interior points of $A$), and the rest are limit points of $A$ as well as interior points. There are also limit points $0,1$ that are not in $A$ (showing $A$ is not closed). -So if $A$ has no isolated point, all of the points of $A$ (in particular all its interior points) are limit points of $A$. So often there will be quite an overlap between interior and limit points.<|endoftext|> -TITLE: Recreational conjecture on factoring groups -QUESTION [5 upvotes]: Consider the following: For a group $G$ with identity $e$, define $s: G \to \mathbb{N} \cup \{ \infty \}$ by $s(g) = \min \{ k \in \mathbb{N} : g^{k} = e \}$, where $ \min \emptyset = \infty$. Moreover, let $\Theta(G) = \sup \{ s(g) : g \in G \}$. Refer to $G$ as bounded if there exists $N \in \mathbb{N}$ such that $g^{N} = e$ for all $g \in G$. -We see that $G$ is bounded iff $\Theta(G) < \infty$. If $\Theta(G) $ is finite, then $g^{\Theta(G) !} = e$; if $\Theta(G) = \infty$, then for every $N$ there exists $g \in G$ for which $s(g) > N$, so $g^{N} \neq e$. -Moreover, any group of order $n$ is bounded. To see this, the claim is made that $s(g) \leq n$ for all $g \in G$. Indeed, consider the set $ \{ g, g^{2}, \ldots, g^{n + 1} \}$. By pigeonhole, there exist $1 \leq i < j \leq n + 1$ such that $g^{i} = g^{j}$, so $g^{j - i} = e$; then $s(g) \leq j - i \leq n$. Thus $\Theta(G) \leq n$. -Initially, I conjectured that a group was bounded iff it was finite, considering the multiplicative group $\{ e^{2\pi iq} : q \in \mathbb{Q} \}$, where $s(g)$ is always finite, but unbounded. Then I dismissed this considering $G = \mathbb{Z}_{2}^{I}$, where $I$ is an infinite indexing set and $\Theta(G) = 2$. I revised this to the claim that a group $G$ is bounded if and only if it can be written as a product $G = \prod_{i \in I} H_{i}$ of finite groups for some index set $I$, where $\sup \{ \# H_{i} : i \in I \} < \infty$. I believe I know how to show any factoring of $G$ into finite groups would satisfy this, but I can't show that every bounded group is factorable into finite groups, i.e. can't show there doesn't exist a bounded group which cannot be expressed as a product of finite groups.
If so, how might I show it? If not, is there a counterexample? Thanks. -REPLY [4 votes]: The Tarski monster group has all of its non-identity elements of order $p$ for some large prime $p$, but since it is simple it cannot be written as a non-trivial direct product of groups.<|endoftext|> -TITLE: How to derive the posterior predictive distribution? -QUESTION [7 upvotes]: I have often seen the posterior predictive distribution mentioned in the context of machine learning and Bayesian inference. The definition is as follows: -$ p(D'|D) = \int_\theta p(D'|\theta)p(\theta|D)\,d\theta$ -How/why does the integral on the right equal the probability distribution on the left? In other words, which laws of probability can I use to derive $p(D'|D)$ given the integral? -Edit - After further consideration, I think I am able to see much of the derivation. That is, -$p(D'|D) = \int_\theta p(D', \theta | D)\,d\theta$ via the law of total probability -$p(D'|D) = \int_\theta p(D' | D, \theta) \cdot p(\theta | D)\,d\theta$ via the chain rule -But I don't understand why $D$ may be dropped from the list of conditioned variables belonging to the integral's first term. - -REPLY [3 votes]: $p(D',\theta | D) = p(D' | \theta,D)p(\theta | D)$ follows from Bayes' rule, provided we have densities: -$p(D',\theta | D) = \frac{P(D', \theta, D)}{P(D)} = \frac{P(D'|\theta, D) P(\theta, D)}{P(D)} = P(D'|\theta, D) P(\theta | D)$. -Now integrate out the nuisance variable $\theta$ on both sides. Your formula also appears to have a Markov-type assumption $p(D'|\theta,D)=p(D'|\theta)$.<|endoftext|> -TITLE: How to prove that a finite-dimensional linear subspace is a closed set -QUESTION [19 upvotes]: Given a linear space $V$, a field $F$, a norm $||\cdot||$ on $V$ and a basis $B$. -How do I prove that the subspace span{$b_1,b_2,\ldots,b_n$}, where $b_i \in B$, is a closed set in the topology induced by the norm? -Is it generally true? Do I need $V$ to be a Banach space? Or do I need $F$ to be the real numbers? - -REPLY [22 votes]: You only need that the ground field $K$ is a complete normed field, e.g. $K \in \{\mathbb{R},\mathbb{C}\}$. -If $(x^{(k)})_k$ is a sequence in $U := \langle b_1, \dotsc, b_n \rangle$ which converges to some $x \in V$ then $(x^{(k)})_k$ is a Cauchy sequence. Because $K$ is a complete normed field, the finite-dimensional normed space $U$ is complete. Therefore $(x^{(k)})_k$ converges to some $y \in U$. By the uniqueness of limits in $V$ we already have $x = y \in U$. So $U$ is closed. -Notice that the statement does not necessarily hold for normed spaces over non-complete normed fields, even if all occurring vector spaces are finite-dimensional. Take for example the $\mathbb{Q}$-vector spaces $\mathbb{Q} \subseteq \mathbb{Q}[\sqrt{2}]$.<|endoftext|> -TITLE: Number of the form $2^i3^j5^k$ closest to a given number $n$ -QUESTION [5 upvotes]: How do I find a number of the form $2^i3^j5^k$ closest to a given number $n$, with $i, j, k \in \mathbb{N}$, numerically? Of course, I could try $\lfloor \log_2{n}\rfloor \times \lfloor\log_3{n}\rfloor \times \lfloor \log_5{n}\rfloor$ combinations of $i$, $j$, and $k$, and pick the one which minimizes the difference. I was wondering if there are more elegant ways to do this. - -REPLY [4 votes]: One thing to note is that there won't always be a unique solution - $11$ is the same distance from $10=2\times5$ and $12=2^2\times3$.
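-For reference, the straightforward brute-force search is easy to write down (my sketch, not from the original answer; plain Python):
-
-    from math import log2
-
-    def closest_5_smooth(n):
-        # every candidate <= 2n suffices: some power of 2 lies in (n, 2n]
-        bound = 2 * n
-        emax = int(log2(bound)) + 1
-        cands = {2**i * 3**j * 5**k
-                 for i in range(emax) for j in range(emax) for k in range(emax)
-                 if 2**i * 3**j * 5**k <= bound}
-        best = min(abs(c - n) for c in cands)
-        return sorted(c for c in cands if abs(c - n) == best)
-
-    print(closest_5_smooth(11))  # [10, 12] -- the tie mentioned above
-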
Still, we can narrow down where we look for solutions somewhat by applying some heuristics: -$2^i3^j5^k\approx n\implies i\log2+j\log3+k\log5\approx\log n\implies(i,j,k)\cdot(\log2,\log3,\log5)\approx\log n$ -So, we now know that our approximate solutions ought to lie close to a plane. Given that we're also taking $i,j,k\geq0$, we can restrict our search further, which gives the bounds you list in your initial post. -So, instead of a brute-force search over a cuboid, we can reduce it to a brute-force search over a (neighbourhood of a) triangular face of a simplex. This means a 2d search rather than a 3d search, which ought to reduce the complexity somewhat - I imagine if the original algorithm is $O(\log^3n)$, this ought to be $O(\log^2n)$. So it's a step forward in some sense. -Perhaps worth noting that the number of integers $\leq n$ expressible in the form $2^i3^j5^k$ is $\sim \frac{\log^3n}{3!\log2\log3\log5}$.<|endoftext|> -TITLE: Why are there no vectors of length $30$ which are orthogonal to a rotation? -QUESTION [5 upvotes]: Consider all $30$-dimensional vectors $v$ such that $v_i \in \{-1,1\}$. It seems that none of them has the property that $(v_1, \dots, v_{30})$ is orthogonal to $(v_{30}, v_1, \dots, v_{29})$. - -Without just exhaustively enumerating all $2^{30}$ such vectors, how - can one prove this? - -REPLY [6 votes]: This is true not just for $30$ but for all $n$ of the form $4k+2$. (Also all odd $n$, but only trivially.) -Assume there exists a vector such that $(v_1, \cdots, v_{4k+2}) \cdot (v_{4k+2}, v_1, \cdots, v_{4k+1}) = 0$ (i.e. they are orthogonal). This can be written as the sum of $4k+2$ terms -$$v_1v_{4k+2} + v_2 v_1 + \cdots + v_{4k+2} v_{4k+1}= 0 $$ -Since each $v_i \in \{1, -1\}$, each term is $\pm 1$. So $2k+1$ terms must be $1$ and $2k+1$ terms must be $-1$. -Now we count the $-1$ terms in the sum $v_1v_{4k+2} + v_2 v_1 + \cdots + v_{4k+2} v_{4k+1}$. If all $v_i$ are $1$, then there are zero terms that have value $-1$. Now consider a sum $v_1v_{4k+2} + v_2 v_1 + \cdots + v_{4k+2} v_{4k+1}$ with arbitrary $v_i$. Flipping the value of a $v_j$ for some $j \in \{1, \cdots, 4k+2\}$ will change the value of exactly two terms. Depending on the previous signs of these terms, the number of negative terms will be changed by $+2, 0$ or $-2$. So the number of negative terms in the sum is always even! This means that there can never be exactly $2k+1$ negative terms in the sum, so the dot product can never be $0$. -Set $k=7$ and done.<|endoftext|> -TITLE: Fill a cube with small cubes with different integer side lengths -QUESTION [6 upvotes]: We are given two (potentially unlimited) sets of cubes, say red cubes (with side $n$) and white cubes (with side $m$), with $m,n \in \mathbb{N} \setminus \{0\}, m \neq n$ (let's assume $n>m$). -Using the same number of red and white cubes, "build" a bigger cube, in the sense that this bigger cube is tessellated with the small cubes. -What is the minimum length $\mathfrak{L}_{n,m}$ of the side of the big cube (as a function of $m$ and $n$)? (How many small cubes are used?)
-As noted by @achillehui, we have -$$ \mathfrak{L}_{an,am} = a \mathfrak{L}_{n,m}, \quad a \in \mathbb{N} \setminus \{0\},$$ -so it suffices to consider the case $\text{gcd}(n,m)=1.$ - -$\mathfrak{L}_{n,m}^1 = \text{lcm}(l,\frac{L}{l^2}(m^3 + n^3))$ is a solution (check my answer below for the details) -$\mathfrak{L}_{n,m}^2 = n (m^3 + n^3)$ is a solution (due to @Logophobic) -$\mathfrak{L}_{n,m}^3 = m (m^3 + n^3)$ and $\mathfrak{L}_{n,m}^4 = 2m (m^3 + n^3)$ are solutions for some pairs $(n,m)$. - -None of them is minimal, since we have -$$ \mathfrak{L}_{4,2}^1 = 36, \quad \mathfrak{L}_{4,2}^2 = 288 $$ -while $\mathfrak{L}_{4,2} = 12$ is also a solution. - -REPLY [2 votes]: Your solution is not minimal. I will provide a few examples. -I previously suggested that $\mathfrak{L} = m(m^3+n^3)$ is always a solution. That assumption proved false. However, $\mathfrak{L} = n(m^3+n^3)$ is a solution, though not always minimal. -Case $(n,m)=(3,2) : \mathfrak{L} = m(m^3+n^3) = 70$ - -Cube $70 \times 70 \times 70$ can be filled with $(m^3+n^3)^3=42,875$ cubes $m$ -Replace $n^3(m^3+n^3)^2=33,075$ cubes $m$ with $m^3(m^3+n^3)^2=9,800$ cubes $n$ -Cube $70 \times 70 \times 70$ then contains $9,800$ each of cubes $m$ and $n$ - -Case $(n,m)=(5,2) : \mathfrak{L} = n(m^3+n^3) = 665$ - -Cube $665 \times 665 \times 665$ can be filled with $(m^3+n^3)^3=2,352,637$ cubes $n$ -Replace $m^3(m^3+n^3)^2=141,512$ cubes $n$ with $n^3(m^3+n^3)^2=2,211,125$ cubes $m$ -Cube $665 \times 665 \times 665$ then contains $2,211,125$ each of cubes $m$ and $n$ - -Case $(n,m)=(4,2) : \mathfrak{L} = 12$ - -Cube $12 \times 12 \times 12$ can be filled with $27$ cubes $n$ -Replace $3$ cubes $n$ with $24$ cubes $m$ -Cube $12 \times 12 \times 12$ then contains $24$ each of cubes $m$ and $n$ - -I have revised my solution for $\mathfrak{L}_{n,m}=km$ to be stated more clearly. I have also taken into account achille hue's observation that $\mathfrak{L}_{dn,dm} = d\mathfrak{L}_{n,m}$ so this solution assumes $\text{gcd}(n,m)=1$ -$$\mathfrak{L}_{dn,dm} = d\mathfrak{L}_{n,m}=kdm\\ \text{where $k$ is the least positive integer satisfying}\\(m^3+n^3)|k^3 \text{ and } \left\lfloor\frac{k}{n}\right\rfloor^3 \ge \frac{k^3}{(m^3+n^3)}$$ -Unfortunately, this still does not guarantee a minimal solution because we could also find -$$\mathfrak{L}_{dn,dm} = d\mathfrak{L}_{n,m}=kdn\\ \text{where $k$ is the least positive integer satisfying}\\(m^3+n^3)|k^3 \text{ and } \left\lfloor\frac{k}{m}\right\rfloor^3 \ge \frac{k^3}{(m^3+n^3)}$$ -Which will in some cases provide a more minimal solution. Case in point: $$\mathfrak{L}_{5,3}=38n=190 \lt \mathfrak{L}_{5,3}=76m=228$$ -Proof that such cubes can be constructed: - -Cube with edge length $kdm$ will have $\frac{(km)^3}{(m^3+n^3)}$ of each cube size. -This is an integer because $(m^3+n^3)|k^3$ -The cube can be filled with $k^3$ cubes $dm$ -$n^3$ cubes $dm$ in sub-cube with edge length $dnm$ can be replaced with $m^3$ cubes $dn$ -This replacement must be repeated $\frac{k^3}{(m^3+n^3)}$ times so that the cube contains $\frac{(km)^3}{(m^3+n^3)}$ cubes $dn$ -The replacement can be completed because $\left\lfloor\frac{k}{n}\right\rfloor^3 \ge \frac{k^3}{(m^3+n^3)}$ - -Using $\mathfrak{L}_{3,2}=35m$ as an example (this is the same as Case $(n,m) = (3,2)$ above.) 
- -Cube with edge length $70$ will have $\frac{70^3}{35} = 9800$ of each cube -The cube can be filled with $35^3 = 42,875$ cubes $m$ -$27$ cubes $m$ in sub-cube with edge length $6$ can be replaced with $8$ cubes $n$ -This replacement must be repeated $\frac{35^3}{35} = 1225$ times so that the cube contains $9800$ cubes $n$ -The replacement can be completed because $\left\lfloor\frac{35}{3}\right\rfloor^3 = 11^3 = 1331 \gt 1225$<|endoftext|> -TITLE: Show there is an uncut square lying in a larger square cut by lines -QUESTION [16 upvotes]: I found this problem on Keith Ball's blog sometime ago but I've never really worked it out. - - -Show that if a square is cut by two lines (shown above in green) then - there is an uncut square at least one third as large (shown in red) - lying inside the original (and aligned with it). -If this is too easy, - try it with three lines and an uncut square at least one quarter as - large as the original. - -My first intuition was to use pigeonhole principle on a $3\times 3$ grid of equally sized squares, but one can clearly choose a sort of diagonal line passing through five squares. I don't think arguing by cases is particularly elegant and it may not be the inspired solution to the problem. Could anyone help? - -REPLY [13 votes]: Let the large square have side length $3$. Note that in the figures of the OP appear two completely different limiting cases where no small square with side length $>1$ can be placed. Any proof will have to incorporate these cases somehow. -If there are no cutting lines we can move a small (unit) square along the inner boundary of the large square along a track of total length $8$. We shall show that a single cutting line $\ell$ leaves a part of length $\geq4$ of this track available for the small square. It follows that two cutting lines cannot make all points of the track unavailable. - -Place the large square with its center at the origin. By symmetry, it is enough to consider cutting lines $\ell$ given by an equation of the form $y=\tan\alpha \cdot x+c$ with $0\leq\alpha\leq{\pi\over4}$ and $c\geq0$. Such a line $\ell$ will cut $n\in\{0,1,2\}$ little corner squares (shaded grey in the above figures). The case $n=0$ is not drawn; the upper two figures show the case $n=1$, and the lower two figures show the case $n=2$. It is easily verified that in each case the pink squares can move freely along a total track length $\geq4$.<|endoftext|> -TITLE: Notation: Is $(\Delta x)^2 = \Delta x^2$? -QUESTION [12 upvotes]: I read this in a book and was wondering whether it's valid or not: - -I thought $\Delta x^2$ would mean 'change in $x^2$', which would be quantitatively different to $(\Delta x)^2$; no? - -REPLY [24 votes]: This is just notation. It is a typical convention that $\Delta x^2 = (\Delta x)^2$. -You are right that it seems ambiguous, but it is consistent in the calculus literature that I have seen that whenever they write $\Delta x^2$, they mean $(\Delta x)^2$. - -REPLY [4 votes]: Yes, it is different from $(\Delta x)^2$. $(\Delta x)^2$ means square of change in $x$. Whereas $\Delta(x^2)$ means change in square of $x$.<|endoftext|> -TITLE: Expectation of expectation of indicator function -QUESTION [5 upvotes]: Is the following correct -$$E[E[\mathbb{I}(X)]] = E[\mathbb{I}(X)]$$ -I assume that $E[E[X]] = E[X]$, as $E[X]$ is a number and expected value of a constant is a constant, and that $\mathbb{I}(X)$ has a binomial distribution. $\mathbb{I}(X)$ represents an indicator function of some random variable $X$. 
-Exact definition of $\mathbb{I}()$ is not important. - -REPLY [11 votes]: Yes, $E[f(X)]$ is nonrandom, so $E[E[f(X)]] = E[f(X)]$ for any function $f$ for which the terms are well-defined.<|endoftext|> -TITLE: Find the maximum number of rational points on the circle with center $(0,\sqrt3)$ -QUESTION [9 upvotes]: Find the maximum number of rational points on the circle with center $(0,\sqrt3)$ - -Let the equation of the circle be $x^2+(y-\sqrt3)^2=r^2$. -Let $(a,b)$ be any rational point on the circle $x^2+(y-\sqrt3)^2=r^2$. -Then $a^2+(b-\sqrt3)^2=r^2$, so -$a^2+b^2+3-2b\sqrt3=r^2$. -How can I find the maximum number of rational points from this equation? I have no idea. Can someone please elaborate? - -REPLY [2 votes]: For any pair of rational points, the straight line passing through the two may be expressed by an equation with rational coefficients. -The midpoint of the segment joining two rational points is itself a rational point. -I guess you know that a line perpendicular to a line with slope $m$ always has slope $-\frac {1}{m}$, implying that the slopes of the two lines are either both rational or both irrational. Therefore, the perpendicular bisector of a segment joining two rational points always has an equation with rational coefficients. -The solution to a system of two linear equations with rational coefficients, if it exists, is necessarily rational. Which is to say that if two straight lines that are expressed by linear equations with rational coefficients intersect, their point of intersection is necessarily rational. -Assume that there is a circle with irrational center on which there are three distinct rational points, say, $A, B, C$. You may know that three non-collinear points always determine a unique circle. -Hence the perpendicular bisectors of $AB$ and $BC$ would meet at the circumcenter of our triangle, which would be a rational point, in contradiction with the conditions of the problem. -Hence the maximum number of rational points can only be $2$.<|endoftext|> -TITLE: Smooth manifold which is a group, but not a Lie Group -QUESTION [5 upvotes]: Are there (preferably non-pathological) examples of smooth manifolds which are groups, but not Lie groups? -In books one can see plenty of examples of Lie groups, but I haven't seen an example where a group is a manifold, yet its group operations are discontinuous somewhere. I have seen similar questions for topological groups, but they don't have to be locally Euclidean, so the situation is quite different. - -REPLY [11 votes]: Positive-dimensional smooth manifolds have the cardinality of $\Bbb R$. Pick a bijection $f: M \to \Bbb R$ and define the group operation by $g\cdot h = f^{-1}(f(g)+f(h))$. You may verify that this defines a group. In general, you can pull back any structure you like along a bijection. -You don't see people talking about it because it's not a very natural question. If you're not even preserving the topological structure, then your question actually has nothing to do with smooth manifolds. You're just asking "What are some groups of the same cardinality as $\Bbb R$?" -If you want a topological group structure on a manifold, then that's automatically a Lie group structure (for some smooth structure on the manifold).<|endoftext|> -TITLE: Why Riemann hypothesis and not Riemann's conjecture -QUESTION [6 upvotes]: I have a stupid question. We say Erdös's conjecture, Goldbach's conjecture, Beal's conjecture... and so on. But we don't say 'Riemann's conjecture.' Instead we use the word 'hypothesis'. Why?
-REPLY [4 votes]: There is an interchangeability, but one assumes a hypothesis is more formal than a conjecture. -See here for a better treatment of this question, by a mathematician.<|endoftext|> -TITLE: Properties of Weak Convergence of Probability Measures on Product Spaces -QUESTION [5 upvotes]: EDIT: -For the bounty, I made a substantial edit revision concerning the structure of the question, to make it more readable (hopefully). Moreover, I added a question on Problem 2.7 of Billingsley's book. - -I have two problems concerning weak convergence of probability measures in product spaces, which arose from Billingsley's classic “Convergence of Probability Measures” (Chapter I, Section 2, Subsection “Product Spaces”). -Below are the parts of the book that made me wonder, the numbered questions I have, plus my thoughts (hidden, in order to lighten the overall immediate reading of the question). -Notation: $P$ is a probability measure on $T = S' \times S''$, $P_n \Rightarrow P$ denotes weak convergence, while the $P$-continuity sets are those sets $A \in \mathcal{S}$ such that $P(\partial A)= 0$, where $\mathcal{S}$ denotes the Borel $\sigma$-algebra of $S$, and $\partial A$ denotes the boundary of $A$. Concerning product spaces, $\mathcal{T} := \mathcal{S}' \times \mathcal{S}''$ is the $\sigma$-algebra of $T$, while $P'$ is the marginal distribution on $\mathcal{S}'$ of $P$ on $\mathcal{T}$, defined as $P'(A) := P(A \times S'')$ (and the same applies to $P''$). - - - -PART I -The first problem is with Billingsley's statement in bold below. -[Theorem 2.8.(ii) should be a sort of partial converse of the following – trivial – proposition: -Proposition 1: If $P_n \Rightarrow P$, then $P^{'}_n \Rightarrow P'$, and $P^{''}_n \Rightarrow P''$.] - -Therefore, we have the following theorem, in which (ii) is an obvious consequence of (i). -Theorem 2.8. -(i) If $T = S' \times S''$ is separable, then $P_n \Rightarrow P$ if and only if $P_n (A' \times A'') \to P(A' \times A'')$ for each $P'$-continuity set $A'$ and each $P''$-continuity set $A''$. -(ii) If $T$ is separable, then $P^{'}_n \times P^{''}_n \Rightarrow P' \times P''$ if and only if $P^{'}_n \Rightarrow P'$ and $P^{''}_n \Rightarrow P''$. - - -PART II -The second problem concerns Problem 2.7 in the book, which reads: - -Problem 2.7: The uniform distribution on the unit square and the uniform distribution on its diagonal have identical marginal distributions. Relate this to Theorem 2.8. - - - -Questions: - -How do we prove (ii)? -1.a. How do we prove the ($\Leftarrow$) direction of (ii)? -1.b. How do we actually use (i) to prove (ii), as Billingsley suggests? -1.c. Is my way of addressing the problem below sound? - -How do we actually relate Problem 2.7 to Theorem 2.8? -2.a. Do we actually have to use the setting in the problem to come up with some measure that works as a counterexample to Proposition 1, or -2.b. Is it enough to notice that the weak limit is now not unique, and hence the converse of Proposition 1 does not hold anymore? - - - - -Here are my thoughts on question 1: - - Attempted proof of (ii): -(only if) Assume that $P^{'}_n \times P^{''}_n \Rightarrow P' \times P''$. Thus, by taking the marginals $P'$ and $P''$ on $P^{'}_n \times P^{''}_n \Rightarrow P' \times P''$, we obtain that $P^{'}_n \Rightarrow P'$, and $P^{''}_n \Rightarrow P''$. -(if) Assume that $P^{'}_n \Rightarrow P'$, and $P^{''}_n \Rightarrow P''$.
Let $A' \in \mathcal{S}'$, $A'' \in \mathcal{S}''$ be arbitrary, such that $A'$ is a $P'$-continuity set, and $A''$ is a $P''$-continuity set. [… and here it ends. I thought that we could use the Portmanteau theorem to get somewhere from the fact that $P^{'}_n \Rightarrow P'$, and $P^{''}_n \Rightarrow P''$, but I really don't know.] - - - -As always, thank you for your time. - -REPLY [3 votes]: The direction that $P_n'\times P_n'' \Rightarrow P'\times P''$ implies the weak convergence $P_n' \Rightarrow P'$ and $P_n'' \Rightarrow P''$ is a special case of Proposition 1, since the marginals of a product measure are the respective factors. We could give a simpler proof for this special case, but I doubt that that would be very enlightening. -To show the other direction, we assume $P_n' \Rightarrow P'$ and $P_n'' \Rightarrow P''$. We use part $(i)$, which tells us that for all $P'$-continuity sets $A'$ and $P''$-continuity sets $A''$ we need to check -$$(P_n'\times P_n'')(A'\times A'') \to (P'\times P'')(A'\times A'').\tag{1}$$ -But by our assumption and the Portmanteau theorem, we have $P_n'(A') \to P'(A')$ and $P_n''(A'') \to P''(A'')$, and by limit algebra it follows that -$$(P_n'\times P_n'')(A'\times A'') = P_n'(A')\cdot P_n''(A'') \to P'(A')\cdot P''(A'') = (P'\times P'')(A'\times A''),\tag{1'}$$ -so indeed $(i)$ tells us that under our assumption $P_n'\times P_n'' \Rightarrow P'\times P''$. -Without using part $(i)$, we would have to show that $(P_n'\times P_n'')(A) \to (P'\times P'')(A)$ for all $(P'\times P'')$-continuity sets $A$, and there are generally a lot of $(P'\times P'')$-continuity sets that aren't products. Thus $(i)$ reduces the required work, and leaves only the case of product sets to be considered. -I think that answers 1.a. and 1.b. Concerning 1.c., in the "only if" part you should mention that you use Proposition 1, it's better to be explicit. In the "if" part, you have the right idea to use the Portmanteau theorem, but you didn't recognise how to use it. -Problem 2.7. is rather hard, since it's not clear what Billingsley expected there. I think the thing to take away is that a probability measure on a product space is in general not determined by its marginal measures. (But if all but possibly one of the marginal measures of $P$ are point masses, then $P$ is the product of its marginals.) That's in fact not hard to see once you think about it, but it's a tempting mistake. -Let's try to relate it, though. By theorem 2.8., if $P_n$ is a sequence of probability measures on the unit square converging weakly to the uniform distribution $U_{\Delta}$ on the diagonal of the unit square, then the product of the marginals, $P_n' \times P_n''$, converges weakly to the uniform distribution $U_{\square}$ on the unit square. In particular, from $P_n' \Rightarrow P'$ and $P_n'' \Rightarrow P''$ we cannot conclude that $P_n \Rightarrow P$. The theorem only gives that conclusion if we additionally know that $P_n$ and $P$ are product measures, i.e. $P_n = P_n' \times P_n''$ and $P = P' \times P''$. -Ad 2.a.: We don't get counterexamples to proposition 1, since that proposition is true. What we get are counterexamples to the naïve converse of proposition 1, which would be "If $P_n' \Rightarrow P'$ and $P_n'' \Rightarrow P''$ then $P_n \Rightarrow P$". We can find counterexamples to that not only in the exact setting of Problem 2.7, of course: all we need is a measure $P$ that is not a product measure, or a sequence of not-product measures whose marginals converge weakly.
That can be constructed as soon as both factor spaces have more than one point. -Ad 2.b.: The weak limit - if it exists - is still unique. The point is that unlike for sequences in $\mathbb{R}^n$, the (weak) convergence of all marginals (analogous to the coordinate projections in $\mathbb{R}^n$) is no longer sufficient to deduce the (weak) convergence of the original sequence - consider $P_{2n} = U_{\Delta}$ and $P_{2n+1} = U_{\square}$ - nor, if the sequence is weakly convergent, to determine the weak limit.<|endoftext|> -TITLE: Linear span in the intersection of Hilbert spaces -QUESTION [5 upvotes]: Let $V$ be a vector space. Assume $H_1$ and $H_2$ are subspaces of $V$, and that both $H_1$ and $H_2$ are Hilbert spaces with inner-products $\langle \cdot, \cdot\rangle_1$ and $\langle \cdot,\cdot\rangle_2$ respectively. Let $x\in H_1$, and let $\left\{h_n,\,n\in\mathbb{N}\right\}$ be an orthonormal set in $H_1$ (not necessarily a basis) such that $$\big\Vert x - \sum_{k=1}^n\langle x, h_k\rangle_1 h_k\big\Vert_1 \underset{n\rightarrow\infty}\longrightarrow 0,$$ that is, $x$ lies in the closed linear span of $\left\{h_n\right\}$. -Assume now that $x\in H_2$ and $\left\{h_n\right\}\subset H_2$. Notice that since $\left\{h_n\right\}$ is orthonormal in $H_1$, it is linearly independent in $H_1$ and hence in $V$ and $H_2$, but not necessarily orthogonal in $H_2$. Let $\left\{e_n,\,n\in\mathbb{N}\right\}$ be an orthonormal set in $H_2$ obtained by a Gram-Schmidt process on $\left\{h_n\right\}$, that is, such that $\mbox{span}\left\{e_1,\dots,e_n\right\} = \mbox{span}\left\{h_1,\dots,h_n\right\}$ for each $n$. Is it true that $$\big\Vert x - \sum_{k=1}^n\langle x, e_k\rangle_2 e_k\big\Vert_2 \underset{n\rightarrow\infty}\longrightarrow 0?$$ -Edited -In response to user gerw: let me be more specific and maybe we can relate the two norms. Given two probability measures $\mu$ and $\nu$ on $(\mathbb{R},\mathcal{B})$, both equivalent to Lebesgue measure, let $V$ be the vector space of $\mu$-equivalence classes of measurable functions $f:\mathbb{R}\rightarrow\mathbb{R}$ (which is of course equal to the set of $\nu$-equivalence classes of such functions), let $H_1 = L^2(\mu)$ and $H_2 = L^2(\nu)$. -Let $T_\mu:L^2(\mu)\rightarrow L^2(\mu)$ be the positive Hilbert-Schmidt operator defined by $$T_\mu f(x) = \int c(x,y) f(y) d\mu(y),\qquad f\in L^2(\mu)$$ where $c$ is a bounded, measurable kernel. Define $T_\nu$ similarly. -I am given a certain bounded measurable function $\varphi$ such that both $$\big\Vert \varphi - \sum_{k=1}^n\langle \varphi, h^\mu_k\rangle_1 h^\mu_k\big\Vert_1 \underset{n\rightarrow\infty}\longrightarrow 0\quad \mbox{and} \quad\big\Vert \varphi - \sum_{k=1}^n\langle \varphi, h^\nu_k\rangle_2 h^\nu_k\big\Vert_2 \underset{n\rightarrow\infty}\longrightarrow 0$$ hold, where $\left\{h^\mu_n\right\}$ is the orthonormal set of eigenfunctions of $T_\mu$, and similarly for $\left\{h^\nu_n\right\}$. I'd like to show that $\varphi$ is the $\Vert\cdot\Vert_2$-limit of linear combinations of the $h^\mu_n$. -My question can be rephrased as follows: is the $\Vert\cdot\Vert_2$-closure of $\mbox{span}\left\{h_n^\mu\right\}$ equal to the $\Vert\cdot\Vert_2$-closure of $\mbox{span}\left\{h_n^\nu\right\}$? - -REPLY [2 votes]: Let $H_1 = L^2[0,\pi]$, and let $H_2$ be the subspace of absolutely continuous functions on $[0,\pi]$ with first derivative in $L^2$ and with Sobolev norm -$$ - \|f\|_2 = \sqrt{\|f\|_{L^2}^{2}+\|f'\|_{L^2}^{2}}. -$$ -Define $h_n(x) = \sin(nx)$ for $n=1,2,3,\cdots$.
Then $\{ h_n \} \subset H_2$ as well. The constant function $1$ is in $H_1, H_2$ also. The set $\{ \sqrt{2}\sin(nx)\}_{n=1}^{\infty}$ is a complete orthonormal basis of $H_1$. Hence, -$$ - \lim_N\left\|1 - 2\sum_{n=1}^{N}(1,\sin(nx))_1\sin(nx)\right\|_{L^2} = 0. -$$ -The functions $\{ \sin(nx) \}_{n=1}^{\infty}$ are in $H_2$, and $1\in H_2$. However $1$ is not in the closure of the linear span of $\{ \sin(nx)\}_{n=1}^{\infty}$ in $H_2$ because convergence in $H_2$ implies uniform convergence on $[0,\pi]$; to see why (working on $[0,1]$ for simplicity; the same computation rescales to any bounded interval), -\begin{align} - f(x) & =xf(x)+(1-x)f(x) \\ - & =\int_{0}^{x}\frac{d}{dt}(tf(t))dt - -\int_{x}^{1}\frac{d}{dt}((1-t)f(t))dt \\ - & =\int_{0}^{x}\{f(t)+tf'(t)\}dt-\int_{x}^{1}\{-f(t)+(1-t)f'(t)\}dt,\\ - |f(x)| & \le \|f\|_{L^2}\|1\|_{L^2}+\|f'\|_{L^2}(\|t\|_{L^2}+\|1-t\|_{L^2}) \\ - & \le C\sqrt{\|f\|_{L^2}^2+\|f'\|_{L^2}^2} \\ - & = C\|f\|_{2}. -\end{align} -Since every finite linear combination of the $\sin(nx)$ vanishes at $x=0$ while $1$ does not, $1$ cannot be a uniform limit of such combinations, hence it is not in the $H_2$-closure of their span.<|endoftext|> -TITLE: Prove that $ab$ is perfect cube. -QUESTION [6 upvotes]: Let $a,b$ be positive integers, $b -TITLE: Inverse image of composite function -QUESTION [14 upvotes]: There's a nice proof of the theorem about composite functions (see theorem 5 here) that states -$$(g\circ f)^{-1}=f^{-1}\circ g^{-1}$$ -Notice that $f^{-1}$ means the inverse of $f$. Could anyone help me with proving a similar equality for preimages? -The preimage of a set $B$ under a function $f$ is defined as $f^{-1}[B]=\{x \in X \mid f(x) \in B\}$. -Let $f: X \to Y$ and $g: Y \to Z$. The theorem states that for every subset $S \subseteq Z$: -$$(g\circ f)^{-1}[S]=f^{-1}(g^{-1}[S])$$ -So the thing is to show that the two sets above are equal. -My attempt would be: -Let $M=(g\circ f)^{-1}[S] = \{x\in X | g(f(x))\in S\}$. -Then $g^{-1}[S]$ is the set $G=\{y\in Y | g(y)\in S\}$, and $f^{-1}(g^{-1}[S])=\{x\in X | f(x) \in G\}$. -I'm not sure how to proceed next. - -REPLY [27 votes]: In general: $$x\in h^{-1}(S)\iff h(x)\in S$$ So the following statements are equivalent: - -$x\in(g\circ f)^{-1}(S)$ -$(g\circ f)(x)\in S$ -$g(f(x))\in S$ -$f(x)\in g^{-1}(S)$ -$x\in f^{-1}(g^{-1}(S))$ - -This shows that $$(g\circ f)^{-1}(S)=f^{-1}(g^{-1}(S))$$<|endoftext|> -TITLE: Every subgroup of a quotient group is a quotient group itself -QUESTION [11 upvotes]: Let $G$ be a group and $N$ a normal subgroup of $G$. Now, let $B$ be a subgroup of $G/N$. I need to prove that $B = A/N$ for some subgroup $A$ of $G$ that contains $N$. -Here's what I did: -Given a normal subgroup $N$, we have the canonical projection $\pi:G \to G/N$. Let $B$ be a subgroup of $G/N$. Then $A = \pi^{-1}(B)$ is a subgroup of $G$. The identity coset $N$ lies in $B$, hence $N = \pi^{-1}(\{N\}) \subseteq A$. So, $N$ is a normal subgroup of $A$. -Not sure what to do next. - -REPLY [8 votes]: You're nearly there. Just use the set-theoretic fact that if $f : S \to T$ is a surjective function, and $B \subseteq T$, then -$$ -f( f^{-1}(B)) = B. -$$ -In your case, you obtain that $B = \pi(A) = A/N$. - -REPLY [4 votes]: You were almost done. Since $\pi$ is onto, you have -$$A/N=\pi (A) = \pi (\pi^{-1}(B))=B$$<|endoftext|> -TITLE: Example of a subnet that has no subsequence. -QUESTION [9 upvotes]: I have an elementary question on nets because I'm not familiar with this concept. Here are two basic facts: - -Every subsequence of a sequence is a subnet; -Not every subnet of a sequence is a subsequence. - -For the second fact, I have seen the following example: - -Given a sequence $(x_n)=(x_1,x_2,x_3,x_4,...)$, the net $$(x_\alpha)=(x_1,x_2,x_2,x_3,x_3,...,x_{1+[\frac{n}{2}]},...)$$ is a subnet of $(x_n)$ that is not a subsequence of $(x_n)$.
- -In this example, $(x_\alpha)$ has a subnet which is a subsequence of $(x_n)$, namely, the sequence $(x_n)$. Could someone give me an example where this doesn't happen? -Explicitly: I'd like an example of sequence $(x_n)$ and a subnet $(x_\alpha)$ of $(x_n)$ such that no subnet of $(x_\alpha)$ is a subsequence of $(x_n)$. - -Motivation for the question: I have a bounded sequence in the dual of a normed space. If the space was separable, then I could pass to a weak-* convergent subsequence. However the space is not separable. So, all I have is a subnet weak-* convergent. Presumably, I can't pass to a subsequence. As I said, I'm not familiar with the concept of net and thus I'd like to see an example where the existence of the subsequence fails. - -REPLY [2 votes]: This question and answer give a concrete example of a sequence $(\delta_n)$ in a compact space that has no convergent subsequence (so this compact space is not sequentially compact). The answer gives a "concrete" (if you believe ultrafilters are concrete) subnet $(x_d), d \in D$ that converges to some $f_\mathcal{U}$. This subnet cannot have a subsequence (that is also a subnet!) that converges, because this would be a subsequence of the original sequence that would converge (and this cannot be). It is possible to find $d_n$ that are increasing in the index set $D$ of the subnet, but this is not a subnet of the subnet (as they will not be cofinal). -A subsequence of a sequence is a subnet as well, but a subnet need not have a cofinal subsequence.<|endoftext|> -TITLE: Is $\mathbb{R}[x,y,z]/(x^2+y^2+z^2)$ a UFD? -QUESTION [8 upvotes]: As the title says, - -I am curious as to whether $A =\mathbb{R}[x,y,z]/(x^2+y^2+z^2)$ is a UFD. - -I believe the answer is yes. -A thought I had was to apply Nagata's criterion, say by localizing at $z,$ to get $A_z = \mathbb{R}[x,y,z,z^{-1}]/((x/z)^2+(y/z)^2+1).$ Further, from that, I was hoping to maybe show that $A_z$ is isomorphic to $\mathbb{R}[x',y',z,z^{-1}]/(x'^2+y'^2+1)$ by sending $x/z$ to $x'$ and $y/z$ to $y'.$ However, for this I would still need to show that $\mathbb{R}[x',y']/(x^2+y^2+1)$ is an UFD, something I'm not quite sure how to do. -Any help would be welcome. - -REPLY [2 votes]: We have $$A_z\simeq\mathbb R[X,Y,Z,Z^{-1}]/\langle (XZ^{-1})^2+(YZ^{-1})^2+1\rangle.$$ -Since $\mathbb R[X,Y,Z,Z^{-1}]=\mathbb R[XZ^{-1},YZ^{-1},Z,Z^{-1}]$, we get $$A_z\simeq\mathbb R[XZ^{-1},YZ^{-1},Z,Z^{-1}]/\langle (XZ^{-1})^2+(YZ^{-1})^2+1\rangle.$$ -Now set $U=XZ^{-1}$, $V=YZ^{-1}$. Check that $U,V$ are algebraically independent over $\mathbb R$. Then $$A_z\simeq\mathbb R[U,V,Z,Z^{-1}]/\langle U^2+V^2+1\rangle.$$ -Now use that $\mathbb R[U,V]/\langle U^2+V^2+1\rangle$ is a UFD; see here.<|endoftext|> -TITLE: How to solve this equation manually: $(x^2+100)^2=(x^3-100)^3$? -QUESTION [7 upvotes]: Well, I was given a problem, -find $x$, if: -$$(x^2+100)^2=(x^3-100)^3$$ -I tried everything that I could, I even opened up the brackets which gave an ugly degree 9 equation, I also tried to plot the curves $y=\left(x^2+100\right)^2$ and $y=\left(x^3-100\right)^3$ and locate their point of intersection but it couldn't be done manually. -So, in the end I was forced to use hit and trial after doing which I got the answer, is their any way to solve this algebraically?? - -REPLY [20 votes]: It's obvious that $x^3>100$ so $x>0$ . 
-Consider the $6$-th root of the equation to get: -$$\sqrt[3]{x^2+100}=\sqrt{x^3-100}$$ -Now consider the function: $$f(x)=\sqrt[3]{x^2+100}$$ -This function is bijective from $(0,\infty)$ to $(\sqrt[3]{100},\infty)$. -Its inverse is: $$f^{-1}(x)=\sqrt{x^3-100}$$ -This means that the equation is now: -$$f(x)=f^{-1}(x)$$ -$$f(f(x))=x$$ -But $f$ is an increasing function, so let's take two cases: - -If $f(x)>x$ then: - -$$x=f(f(x))>f(x)>x$$ a contradiction. - -If $f(x) -TITLE: Circular geodesics -QUESTION [6 upvotes]: Consider the tube of radius $a > 0$ around a unit-speed curve $\gamma$ in $\mathbb{R}^3$ $$\sigma (s, \theta) = \gamma (s) + a(\cos \theta \ n(s) + \sin \theta \ b(s))$$ -Show that the parameter curves on the tube obtained by fixing the value of $s$ are circular geodesics on $\sigma$. -Could you give me some hints on how we could show that? -Do we maybe use the fact that any normal section of a surface is a geodesic? - -REPLY [3 votes]: You can see geometrically that the normal to the surface at the point $\sigma(s,\theta)$ is the vector $N_\sigma(s,\theta) = n(s)\cos \theta + b(s)\sin \theta$. If $\alpha(\theta) = \sigma(s_0,\theta)$, then you can check that $\alpha$ has constant speed, so it suffices to check that $\alpha''(\theta)$ is parallel to $N_\sigma(s_0,\theta)$ and you're done.<|endoftext|> -TITLE: Cotangent summation (proof) -QUESTION [6 upvotes]: How can one sum this up? I tried it with complex numbers, getting nowhere, so please help me with this: $$\sum_{k=0}^{n-1}\cot\left(x+\frac{k\pi}{n}\right)=n\cot(nx)$$ - -REPLY [8 votes]: Like Sum of tangent functions where arguments are in specific arithmetic series, -$$\cot(nx)=\dfrac1{\tan(nx)}=\dfrac{1-\binom n2\tan^2x+\cdots}{\binom n1\tan x-\binom n3\tan^3x+\cdots}=\dfrac{\cot^nx-\binom n2\cot^{n-2}x+\cdots}{\binom n1\cot^{n-1}x-\binom n3\cot^{n-3}x+\cdots}\quad(\text{multiplying the numerator and denominator by }\cot^nx)$$ -If $\cot(nx)=\cot(nA)$, i.e. $\tan nx=\tan nA$, then $nx=nA+m\pi$ where $m$ is any integer, so -$x=A+\dfrac{m\pi}n$ where $m\equiv0,1,\cdots,n-2,n-1\pmod n$. -So, the roots of $$\cot^nx-\binom n1\cot nA\cdot\cot^{n-1}x-\binom n2\cot^{n-2}x+\cdots=0$$ are $\cot\left(A+\dfrac{m\pi}n\right)$ where $m\equiv0,1,\cdots,n-2,n-1\pmod n$ -$$\implies\sum_{m=0}^{n-1}\cot\left(A+\dfrac{m\pi}n\right)=\binom n1\cot nA=n\cot nA,$$ which is the claimed identity (take $A=x$).<|endoftext|> -TITLE: Example for conjugate points with only one connecting geodesic -QUESTION [9 upvotes]: $\newcommand{\ga}{\gamma}$ -$\newcommand{\al}{\alpha}$ -I would like to find an example for a Riemannian manifold that has -two conjugate points $p,q$ with only one connecting geodesic between them. -(This is the geodesic they are conjugate along.) -Explanation: -Consider a parametrized family of geodesics starting from a fixed point $p$, i.e.: -$\ga_s(t)=\ga(t,s), \ga_s(0)=\ga_0(0)=p$ where for each fixed $s$, the path $t \to \ga_s(t)$ is a geodesic in $M$. -Then $J(t)= \frac{\partial \ga}{\partial s}(t,0)$ is a Jacobi field along the geodesic $\ga_0$. -Moreover, every Jacobi field can be realized from such a variation of geodesics. -By definition, if $p,q$ are conjugate along some geodesic $\al$, there exists a nonzero Jacobi field along $\al$ that vanishes at $p,q$. This means there is some variation $\ga(t,s)$ of $\al$ ($\ga_0=\al$) where $J(t)= \frac{\partial \ga}{\partial s}(t,0)$. -Assume $\al(t_0)=q$.
Then $0=J(t_0)= \frac{\partial \ga}{\partial s}(t_0,0)$, so one can say that "$\gamma_s(t_0)$ is the point $q$ only up to first order in $s$", but we cannot conclude there exists an $s \neq 0$ such that $\ga_s(t_0)=q$. -(Of course, if we knew that $\ga_s(t_0)=q$ for all $s \in (-\epsilon,\epsilon)$ this would imply $J(t_0)=0$ but not vice-versa.) -In the language of Wikipedia: -"Therefore, if two points are conjugate, it is not necessary that there exist two distinct geodesics joining them" - -REPLY [11 votes]: An example in here should do it: http://arxiv.org/pdf/math/0211091.pdf. Look on the first three pages or so. Basically a paraboloid is an example. Pick $p$ and travel along the meridian. If you track the minimizing geodesics joining $p$ to the point you're meeting along your travels, you'll see at first there's only one and then at some point that single minimal geodesic bifurcates into two. The bifurcation point is what you're looking for. Maybe it's easier to imagine the cone $z^2 = x^2 + y^2$ as a singular example of this bifurcation phenomenon. There the bifurcation point is easily identified as the vertex. -The whole paper I linked to is devoted to an in-depth analysis of when this phenomenon occurs.<|endoftext|> -TITLE: Underdetermined Linear Systems -QUESTION [5 upvotes]: I'm working through an introductory linear algebra textbook and one exercise gives the system -$2x+3y+5z+2w=0$ -$-5x+6y-17z-3w=0$ -$7x-4y+3z+13w=0$ -And asks why, without doing any calculations, it has infinitely many solutions. Now, a previous exercise gives the same system without the fourth column and asks why, without any calculation, you can tell it's consistent, and I realized that it's because it has the trivial solution (0,0,0). But I'm struggling to see how that implies that this new system has infinitely many solutions. -I did some research and found that if an underdetermined linear system has a solution then it has infinitely many, but the explanations of this seem to talk about rank and other things that I'm not familiar with. -So if someone could please explain why you can just tell without doing any calculation why this system has infinitely many solutions (I'm guessing it has something to do with the previous problem that's the same just without that fourth column of variables) from the author's perspective (i.e. they're only assuming we have algebra 2 at this early point in the book) it would be much appreciated. - -REPLY [3 votes]: Note that the system without the fourth column is not only consistent but also determined (the rows are linearly independent); this means that the system -$$ -\begin{cases} -2x+3y+5z=-2\\ --5x+6y-17z=3\\ -7x-4y+3z=-13 -\end{cases} -$$ -is also determined, i.e. it has exactly one solution $(x,y,z)=(a,b,c)$.
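-One can verify this numerically (my sketch, not part of the original answer; numpy assumed):
-
-    import numpy as np
-
-    A = np.array([[ 2.,  3.,   5.],
-                  [-5.,  6., -17.],
-                  [ 7., -4.,   3.]])
-    b = np.array([-2., 3., -13.])
-
-    print(np.linalg.det(A))       # about -522: nonzero, so the system is determined
-    print(np.linalg.solve(A, b))  # the unique solution (a, b, c)
-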
-Now, your system is -$$ -\begin{cases} -2x+3y+5z=-2w\\ --5x+6y-17z=3w\\ -7x-4y+3z=-13w -\end{cases} -$$ - so, by linearity, it has the infinitely many solutions $(x,y,z)=(aw,bw,cw) \quad \forall w \in \mathbb{R}$<|endoftext|> -TITLE: Conjecture ${\large\int}_0^\infty\left[\frac1{x^4}-\frac1{2x^3}+\frac1{12\,x^2}-\frac1{\left(e^x-1\right)x^3}\right]dx=\frac{\zeta(3)}{8\pi^2}$ -QUESTION [30 upvotes]: I encountered the following integral and numerical approximations tentatively suggest that it might have a simple closed form: - -$${\large\int}_0^\infty\left[\frac1{x^4}-\frac1{2x^3}+\frac1{12\,x^2}-\frac1{\left(e^x-1\right)x^3}\right]dx\stackrel{\color{gray}?}=\frac{\zeta(3)}{8\pi^2}\tag{$\diamond$}$$ - (Update: I fixed a typo: replaced $4\pi^2$ with $8\pi^2$ in the denominator) - -I have only about $800$ decimal digits that agree with the conjectured value, calculated using Mathematica. Unfortunately, its numerical algorithms become unstable when I try to increase precision. Maple refuses to numerically evaluate this integral altogether. -Obviously, the first three terms of the integrand have elementary antiderivatives, but I was not able to find a closed-form antiderivative (either elementary or using known special functions) for the last one. -I'm asking for your help in proving (or disproving) $(\diamond)$. -REPLY [8 votes]: From the summation identity of the zeta function: -$$ \boxed{ {\,}\\ \quad \color{Blue}{\sum_{n=0}^{\infty}\frac{\Gamma(n+s)\zeta(n+s)}{(n+1)!}=0 \qquad\colon\space Re\{s\}\lt1} \quad \\{\,} } $$ -$$ \begin{align} -\Gamma(s)\zeta(s) &= -\sum_{n=1}^{\infty}\frac{\Gamma(n+s)\zeta(n+s)}{(n+1)!} = -\int_{0}^{\infty}\frac{x^{s-2}}{e^x-1}\left(\sum_{n=1}^{\infty}\frac{x^{n+1}}{(n+1)!}\right)\,dx \\[2mm] -&= -\int_{0}^{\infty}\frac{x^{s-2}}{e^x-1}\left(e^x-1-x\right)\,dx = \int_{0}^{\infty}x^{s-2}\left(\frac{x}{e^x-1}-1\right)\,dx \\[4mm] -\Gamma(s-1)\zeta(s-1) &= -\frac{\Gamma(s)\zeta(s)}{2!}-\sum_{n=2}^{\infty}\frac{\Gamma(n+s)\zeta(n+s)}{(n+1)!} \\[2mm] -&= \int_{0}^{\infty}x^{s-3}\left(\frac{x}{e^x-1}-1+\frac{x}{2}\right)\,dx \qquad\cdots\,\implies -\end{align} $$ - -$$ \color{blue}{\Gamma(s-N)\zeta(s-N)=\int_{0}^{\infty}x^{s-N-2}\left[\frac{x}{e^x-1}-\left(\sum_{n=0}^{N}B_{n}\frac{x^n}{n!}\right)\right]\,dx} $$ -$$ {\small \,0\lt\,Re\{s\}\,\lt1 ,\quad N\in\{\,0,\,1,\,2,\,\cdots\,\} ,\quad B_{n}\,\,{Bernoulli\,Number} ,\quad B_{1}=-1/2} $$ - - -$$ \begin{align} -\color{red}{I} &= \int_{0}^{\infty}\left[\frac{1}{x^4}-\frac{1}{2\,x^3}+\frac{1}{12\,x^2}-\frac{1}{\left(e^x-1\right)\,x^3}\right]\,dx \\[3mm] -&= -\int_{0}^{\infty}x^{-4}\left[\frac{x}{e^x-1}-1+\frac{x}{2}-\frac{x^2}{12}\right]\,dx \\[3mm] -&= -\int_{0}^{\infty}x^{\color{red}{0-2}-2}\left[\frac{x}{e^x-1}-\left(1\frac{x^0}{0!}-\frac{1}{2}\frac{x^1}{1!}+\frac{1}{6}\frac{x^2}{2!}\right)\right]\,dx \\[3mm] -&= -\lim_{s\rightarrow0}\Gamma(s-2)\zeta(s-2)=-\frac{\zeta'(-2)}{2}=\color{red}{\frac{\zeta(3)}{8\pi^2}} -\end{align} $$<|endoftext|> -TITLE: Match off points into $N$ red/blue pairs with straight lines connecting pairs, so that none of the lines we draw intersect -QUESTION [5 upvotes]: Suppose we are given $2N$ points in the plane (we may assume that no $3$ are collinear). Assume that $N$ of these points are colored red, and $N$ points are colored blue. Can we match off the points into $N$ red/blue pairs with straight lines connecting these pairs, so that none of the lines we draw intersect? If this matching exists, can we find it by an algorithm? - -REPLY [2 votes]: Yes, such a matching always exists. 
-The algorithm works recursively. You either -1) find a red point R and a blue point B such that the number of red points and the number of blue points on the left side of the vector RB are the same. You match RB, and then you recursively find the matchings for all the points on each side of the vector separately. -or -2) find any line that divides the plane into two non-empty half-planes such that the number of red and blue points on each side of the line is equal. You recursively find matchings on the points on each side of the line separately. -We need to prove that at least one of these two cases must happen. Suppose case 1) does not exist. Consider any red point R and blue point B, and consider the vector RB. RB divides the remaining points into those that are on the left side of the vector, and those on the right side of the vector. Given a vector RB, let RB_left be the number of red points on the left of RB minus the number of blue points on the left of RB. Since case 1 does not exist, RB_left is not zero. We also have RB_left = -BR_left. Now consider rotating the vector X=RB until it eventually becomes BR, and consider what happens to X_left. Each time the vector is collinear with a point, X_left will increase or decrease by 1. Since BR_left = -RB_left, the quantity X_left changes sign during the rotation, so there must be some moment at which X_left = 0, and the line through X at that moment gives case 2).<|endoftext|> -TITLE: $\mathbb{Q}(\sqrt[3]{17})$ has class number $1$ -QUESTION [11 upvotes]: Let $\alpha:=\sqrt[3]{17}$ and $K:=\mathbb{Q}(\alpha)$. We know that $$\mathcal{O}_K=\left\{\frac{a+b\alpha+c\alpha^2}{3}:a\equiv c\equiv -b\pmod{3}\right\}.$$ -I have to show that $K$ has class number $1$, i.e. $\mathcal{O}_K$ is a PID. The Minkowski bound $\lambda <9$, so we should consider the primes $2, 3, 5, 7$. It's easy to show that - -$2\mathcal{O}_K=\mathfrak{p}_1\mathfrak{p}_2$, with $\mathfrak{p}_1=(2, \alpha+1)$ and $\mathfrak{p}_2=(2, \alpha^2+\alpha+1)$ -$3\mathcal{O}_K=\mathfrak{p}_3^2\mathfrak{p}_4$ (I can't compute these primes explicitly) -$5\mathcal{O}_K=\mathfrak{p}_5\mathfrak{p}_6$, with $\mathfrak{p}_5=(5, \alpha+2)$ and $\mathfrak{p}_6=(5, \alpha^2+3\alpha-1)$ -$7\mathcal{O}_K=\mathfrak{p}_7$ - -Now, how can I show that, for example, $\mathfrak{p}_1$ and $\mathfrak{p}_5$ are principal ideals? (I can't find elements with norm $2$ or $5$). The situation for the prime $3$ is more complicated: the book suggests to find elements in $\mathcal{O}_K$ with norm $3$ that are coprime (this implies that $\mathfrak{p}_3$ and $\mathfrak{p}_4$ are principal), but I can't find these elements. -Note that $N_{K/\mathbb{Q}}(a+b\alpha+c\alpha^2)=a^3+17b^3+17^2c^3-3\cdot 17 abc$. - -REPLY [3 votes]: The first thing to do is get a clean basis without annoying congruence -conditions: put $\beta=\frac{\alpha^2-\alpha+1}{3}$. Then -$[1,\alpha,\beta]$ is a $\mathbb Z$-basis of $\mathcal{O}_K$. -We apply Franz Lemmermeyer's method and look for elements of the -form $x+y\alpha+z\beta$ with interesting norms and $x,y,z$ small. 
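-Such elements can be found by a mechanical search. Here is a minimal brute-force sketch in Python (purely illustrative; it assumes nothing beyond the norm form quoted in the question, rewriting $x+y\alpha+z\beta$ as $a+b\alpha+c\alpha^2$ with $(a,b,c)=(x+z/3,\,y-z/3,\,z/3)$):
-from fractions import Fraction
-from itertools import product
-
-def norm(x, y, z):
-    # N(a + b*alpha + c*alpha^2) = a^3 + 17*b^3 + 289*c^3 - 51*a*b*c,
-    # where (a, b, c) = (x + z/3, y - z/3, z/3) for x + y*alpha + z*beta.
-    a = Fraction(3 * x + z, 3)
-    b = Fraction(3 * y - z, 3)
-    c = Fraction(z, 3)
-    return a**3 + 17 * b**3 + 289 * c**3 - 51 * a * b * c
-
-for x, y, z in product(range(-3, 4), repeat=3):
-    n = norm(x, y, z)
-    if abs(n) in (2, 3, 4, 5, 10):    # small norms worth inspecting
-        print((x, y, z), n)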
-A little inspection shows that -$$ -N(2+\alpha+\beta)=2, \quad N(2-\beta)=4, \quad N(1+\alpha+\beta)=N(1-\alpha+\beta)=3, -N(3-\alpha)=2 \times 5 -$$ -(note that norm $4$ is exactly what a generator of $\mathfrak{p}_2$ must have, since $\mathfrak{p}_2$ has residue degree $2$). A few additional checks and computations from here then reveal that -$$ -\begin{array}{lclcl} -\mathfrak{p}_1 &=& (2+\alpha+\beta) &=& \Bigg( \frac{\alpha^2+2\alpha+7}{3}\Bigg) \\ -\mathfrak{p}_2 &=& (2-\beta) &=& \Bigg( \frac{-\alpha^2+\alpha+5}{3}\Bigg) \\ -\mathfrak{p}_3 &=& (1+\alpha+\beta) &=& \Bigg( \frac{\alpha^2+2\alpha+4}{3}\Bigg) \\ -\mathfrak{p}_4 &=& (1-\alpha+\beta) &=& \Bigg( \frac{\alpha^2-4\alpha+4}{3}\Bigg) \\ -\mathfrak{p}_5 &=& (\frac{3-\alpha}{2+\alpha+\beta}) &=& \Bigg( \frac{-2\alpha^2-\alpha+16}{3}\Bigg) \\ -\mathfrak{p}_6 &=& (\frac{5(2+\alpha+\beta)}{3-\alpha}) &=& -\Bigg( \frac{11\alpha^2+28\alpha+74}{3}\Bigg) \\ -\end{array} -$$<|endoftext|> -TITLE: Can $V$ only have well-orderings definable with respect to a parameter? -QUESTION [6 upvotes]: In this answer, Professor Hamkins gives a proof that for models $M$ of ZF, $M$ being a model of $\text{ZFC} + V = \text{HOD}$ is equivalent to there being a definable well-ordering of the universe: -https://mathoverflow.net/a/180734 -His argument easily extends to an equivalence of these properties to $M$ having a well-ordering of the universe definable with respect to an ordinal parameter. So, if there is a well-ordering of the universe with respect to some parameter $p,$ but there is not a well-ordering of $V$ definable without parameters, then necessarily $p \not \in \text{OD}$. Is this situation possible? My intuition is that it shouldn't be possible, since I don't think a non-ordinal parameter should be able to define something so fundamental when an ordinal cannot do the same. - -REPLY [3 votes]: I just noticed this question, which I find quite interesting. -I thought I'd mention the following related result, which some readers may find interesting: -Theorem. The following are equivalent. - -The universe is HOD of a set: $\exists b\ (V=\text{HOD}(b))$. -The axiom V=HOD is forceable. -Somewhere in the generic multiverse, the universe is HOD of a set. -Somewhere in the generic multiverse, the axiom V=HOD holds. - -The proof is contained in my blog post, Being HOD-of-a-set is invariant throughout the generic multiverse. -In particular, it follows that the axiom V=HOD is a switch, in models for which $V=\text{HOD}(b)$, since it can be forced on and then off again as much as you like. If $V=\text{HOD}$ holds, then you can do the forcing in Asaf's answer, adding a Cohen real, and $V\neq\text{HOD}$ in the extension $V[c]$, but then you can force $V=\text{HOD}$ again in a further forcing extension. And furthermore, whenever $V=\text{HOD}(b)$, then you can force $V=\text{HOD}$, and the assertion $\exists b\ V=\text{HOD}(b)$ is invariant by forcing.<|endoftext|> -TITLE: Is $GL(E)$ dense in $L(E)$, when $\dim E=\infty$? -QUESTION [5 upvotes]: Let $E$ be a normed vector space (Banach space, if you like). -Is $GL(E)$, the set of invertible and continuous endomorphisms of $E$, dense in $L(E)$, the set of continuous endomorphisms of $E$? -I specify that I know the answer if $dim(E)<\infty$, with classical arguments about the spectrum of matrices, and I know that $GL(E)$ is open in $L(E)$, even if $dim(E)=\infty$ (if $E$ is a Banach space), using the formula $(I-u)^{-1}=\sum_{n\in\mathbb{N}}u^n$ for $u$ small enough. -So the remaining question I would like to ask is about the density of $GL(E)$ in $L(E)$, and in the case it is not dense, about its closure. 
- -REPLY [4 votes]: For all classical infinite-dimensional Banach spaces, invertible operators are not dense, but there are instances when they are. -In a Banach algebra, invertible elements are dense if and only if left-invertible elements are dense, and if this is so, left-invertible elements are already invertible. However, in the case of classical Banach spaces you always have non-invertible, left-invertible elements (for example, isomorphisms onto subspaces of codimension 1). -This is explained in detail in Section 4.2 of my article with Sz. Draga, When is multiplication in a Banach algebra open?<|endoftext|> -TITLE: Probability that a quadratic equation with random coefficients has real roots -QUESTION [12 upvotes]: Consider quadratic equations $Ax^2 + Bx + C = 0$ in which $A$, $B$, and $C$ are - independently distributed $\mathsf{Unif}(0,1)$. What is the probability that the roots of such an equation are real? - -This problem is from Chapter 3 of Rice: Mathematical Statistics and Data Analysis (editions 1 through 3). Until recent printings of 3e, the incorrect answer 1/9 was given for this problem. -However, Horton (2015) http://www3.amherst.edu/~nhorton/precursors/precursors.pdf -points out that the correct answer is slightly above 1/4, as can be verified by a simple simulation. (Horton and his colleagues are concerned with elements of an undergraduate curriculum to prepare students in the mathematical sciences to cope with modern data science.) -In a somewhat more practical setting, one might consider a discrete version of this problem. -A program that produces random drill problems on quadratic equations $Ax^2 + Bx + C = 0,$ selects values for $A, B,$ and $C$ at random and independently from among the ten equally likely values $0.1, 0.2, \dots, 1.0$. What proportion of such equations have real roots? And what proportion have only one root? -The initial Answer sketches the exact analytic solution of the original problem and shows numerical and graphical results from simulation. A simulated result for the discrete version is also shown. -Additional answers using other methods or discussing related topics are welcome. - -REPLY [4 votes]: Someone asked almost this a few days ago. I did want to point out that a cube is not the most natural shape to consider for this problem, although it was the one chosen. Better, in some ways, to consider the ball $A^2 + B^2 + C^2 \leq R^2.$ In that case we take rotated coordinates -$$ u = B; v = (A - C)/ \sqrt 2; w = (A + C)/ \sqrt 2. $$ -Then the condition $B^2 \geq 4AC$ becomes $u^2 + 2 v^2 \geq 2 w^2.$ It makes no difference here whether we use volume or surface area, so we are asking for the total surface area of the two peculiar elliptical patches $u^2 + 2 v^2 \geq 2 w^2$ on the sphere $u^2 + v^2 + w^2 = 1,$ divided by $4 \pi.$ On second thought, the figure we want is one minus this, the funny annular region. -Not sure I know how to calculate this.<|endoftext|> -TITLE: if the inverse images of all closed balls are closed, is $f$ continuous? -QUESTION [12 upvotes]: Is the following statement true? (it is asked to be proved true) - -If $f: D \to\mathbb R^n$, and for every closed ball $B$ in $\mathbb R^n$ the preimage of $B$ under $f$ is closed in $D$, then $f$ is continuous on $D$. - -I know the analogue of the statement for open sets is true, because every open set is a union of open balls. However, there is no theorem that any closed set can be written as an intersection of closed balls. I am just confused. 
- -REPLY [4 votes]: Here is the "universal" counterexample. Let $D=\mathbb{R}^n$, with the topology which has as a subbasis the sets $\mathbb{R}^n\setminus B$ for all closed balls $B$. Then the identity map $i:D\to \mathbb{R}^n$ satisfies your condition, but it is not continuous (because, for instance, any nonempty open subset of $D$ contains the complement of a finite union of balls and hence is unbounded). This is universal in that a map $f:X\to\mathbb{R}^n$ satisfies your condition iff there is a (necessarily unique) continuous map $g:X\to D$ such that $f=ig$ (namely, $g$ is $f$ considered as a map to $D$). -On the other hand, there are no counterexamples if you require the function to be bounded. To show this, we want to show that $i$ is continuous when restricted to any bounded set. Concretely, this means that if $A\subset\mathbb{R}^n$ is bounded, $U\subseteq A$ is open in the usual topology, and $x\in U$, then we can find finitely many closed balls $B_1,\dots, B_m$ such that $x\in A\setminus(B_1\cup\dots\cup B_m)\subseteq U$. To prove this, note that we may assume $A$ is closed and hence compact, so $A\setminus U$ is also compact. For each $y\in A\setminus U$, you can choose an open ball around $y$ whose closure does not contain $x$. Finitely many of these open balls then cover $A\setminus U$ by compactness, and you can take $B_1,\dots,B_m$ to be their closures.<|endoftext|> -TITLE: Most wanted reproducible results in computational algebra -QUESTION [24 upvotes]: I am interested in suggestions for major computational results obtained with the help of mathematical software but not easily verifiable using computers. -"Most wanted" could refer, for example, to the following: - -results which are highly cited/reused in other publications/computations -computational proofs of fundamental results -counterexamples to central conjectures in a field -checking correctness of various mathematical databases -producing open source implementations of computations previously performed using another open or closed source software, or when the old code is not available at all -landmark computations that one could be interested to reproduce (in the same way that a chemistry reaction from a textbook could be reproduced by mixing baking soda and vinegar in your kitchen). - -If the publication just says "this result was produced using the system X", it may be a long way to reproduce it. It may include a reference to the exact version of the system, a link to the extra code to download, but again it may happen that that version has to be installed in some particular way to satisfy certain dependencies, the extra code is not well documented so it is unclear how to run it, some other special knowledge or non-trivial computational resources are needed, etc. -On the other hand, having these results more easily reproducible could be crucial for science. Hypothetically, one could e.g. download a virtual machine and re-run the whole experiment, or use the newest version of the system to check whether the experiment still runs with the same outcome. -I hope that making such a list of suggested experiments to reproduce will be useful to those interested in checking them twice ;-). For example, one could submit their findings to a journal like ReScience which "targets computational research and encourages the explicit replication of already published research, promoting new and open-source implementations in order to ensure that the original research is reproducible". 
-Remark: suggestions on computational verification of previously obtained theoretical results and pointers to existing reproducible experiments are also welcome. - -REPLY [7 votes]: I believe that enumeration of finite groups of a given order is definitely among the most wanted reproducible experiments. Here "enumeration" means providing a complete and non-redundant list of groups, where "complete" means that no groups are missing in this list, and "non-redundant" means that groups from this list are pairwise non-isomorphic. Guaranteeing these properties is crucial for results that rely on checking all groups of a given order, or that refer to a particular group by its "catalogue number". -The most complete collection of groups of certain orders is available in the GAP system via several interconnected packages: - -Hans Ulrich Besche and Bettina Eick: GrpConst - Constructing the -Groups of a Given Order, Version 2.5 (2015), http://www.icm.tu-bs.de/~beick/so.html -Hans Ulrich Besche, Bettina Eick and Eamonn O'Brien: The Small Groups Library, http://www.icm.tu-bs.de/ag_algebra/software/small/ -Bettina Eick and Michael Vaughan-Lee: SglPPow - Database of groups of prime-power order for some prime-powers, -Version 1.1 (2014), http://www.icm.tu-bs.de/~beick/soft/sglppow/ -Heiko Dietrich: Cubefree - Constructing the Groups of a Given Cubefree Order, Version 1.15 (2015), -http://users.monash.edu.au/~heikod/cubefree.html - -Altogether, this provides some precomputed collections of groups for some orders as well as functions to construct all groups of a given order for some infinite series. None of these packages is actually a mere database which only stores certain groups, since even predominantly database-providing packages also implement algorithms for generic constructions of groups of order $p^n$ for some $p$ and $n$, and for groups of square-free orders. These packages are closely interconnected: for example, while GrpConst was used to construct some groups from the SmallGroups library, it also uses the SmallGroups library to enumerate groups of some other orders. -It is very important to have such results as much cross-checked and reproducible as possible: even if the database part of the libraries remains unchanged, that does not guarantee that any other changes in GAP and/or these packages will not break the code. Of course, a lot of cross-checks have been done, and this functionality is considered to be very reliable: - -The group numbers in the SmallGroups library are to a large extent cross-checked, being computed using different approaches and also compared with theoretical results, where available (see [Hans Ulrich Besche, Bettina Eick and Eamonn O'Brien. A MILLENNIUM PROJECT: CONSTRUCTING SMALL GROUPS. Int. J. Algebra Comput. 12, 623 (2002), http://dx.doi.org/10.1142/S0218196702001115], in particular 4.1. Reliability of the data). -The Cubefree package was cross-checked against the SmallGroups Library and IRREDSOL package as described at http://www.gap-system.org/Manuals/pkg/cubefree/htm/CHAP002.htm#SECT005. -The GAP standard test suite, which is run nightly and is a part of the release preparation workflow, includes tests of ConstructAllGroups from the GrpConst package. - -But modern tools permit us to do even more, and in particular to improve the situation for orders where precomputed collections of all groups are not available. Recently I have initiated a "Group numbers reproducibility project" (which was inspired by some questions under the 'groups-enumeration' tag here - see in particular this one). 
This project uses a crowdsourcing approach to assemble a database of numbers of isomorphism types of finite groups. In other words, it fills in the table of the values of the function $gnu(n)$, returning the number of isomorphism types of finite groups of order n (so "gnu" stands for the "Group NUmber"). It puts together data from several sources, including values calculated at runtime using the packages listed above and numbers published by the AG Algebra und Diskrete Mathematik group of TU Braunschweig at http://www.icm.tu-bs.de/ag_algebra/software/small/number.html. Furthermore, it accepts reports on new values not available in any of the above-mentioned sources, and on recomputation of previously known values. The data are added to the database only after they are replicated by the maintainer. The project also uses two other designations for the submissions: "reproduced", when the same result was obtained using another implementation, and "agrees with theory", when it corresponds to the theoretically proved result. -In the current version of the database, the value of $gnu(n)$ is available from the computer algebra system level (from GAP locally or remotely, and from any other SCSCP client remotely). Using the version control history, one could also access provenance information (runtime, versions of the software, etc). This could be useful to researchers interested in producing the list of all groups locally, since they can check if someone else has already attempted to do this and how long they had to wait. Since the beginning of the project, almost 200 new entries have been submitted, replicated and added to the database, which now provides the most complete available table of known values of $gnu(n)$. -Further details can be found in the README.md file at https://github.com/alex-konovalov/gnu. See also my presentation "Computational Algebra meets Open Science: Group Numbers Reproducibility Project".<|endoftext|> -TITLE: How should I calculate the $n$th derivative of $f(x)=x^x$? -QUESTION [9 upvotes]: What would be the $n$th derivative of -$f(x) = x^x$ -I have reached the fifth derivative, very long indeed, but I see no pattern that will help me find a general expression. -\begin{align*} -\frac{df}{dx} &= x^x(1+\ln(x))\\ -\frac{d^2f}{dx^2} &= x^x\left(\frac{1}{x}+1+2\ln(x)+\ln(x)^2\right)\\ -\frac{d^3f}{dx^3} &= x^x\left( \frac{-1}{x^2}+\frac{3}{x} + \ln(x)^3 + 3 \ln(x)^2 + \frac{3\ln(x)}{x} + 3\ln(x) + 1 \right)\\ -\frac{d^4f}{dx^4} &= x^x\left( \frac{2}{x^3}-\frac{1}{x^2}-\frac{4\ln(x)}{x^2}+\frac{6}{x}+\ln(x)^4+4\ln(x)^3\right.\\ -&\qquad\qquad\left.+\frac{6\ln(x)^2}{x}+6\ln(x)^2+\frac{12\ln(x)}{x}+4\ln(x)+1 \right)\\ -\frac{d^5f}{dx^5} &= x^x\left( \frac{-6}{x^4}+\frac{10\ln(x)}{x^3}+\frac{5}{x^2}-\frac{10\ln(x)^2}{x^2}+\frac{10}{x}+\ln(x)^5+5\ln(x)^4\right.\\ -&\qquad\qquad\left.+\frac{10\ln(x)^3}{x}+10\ln(x)^3+\frac{30\ln(x)^2}{x}+10\ln(x)^2+\frac{30\ln(x)}{x}+5\ln(x)+1 \right) -\end{align*} - -REPLY [13 votes]: This approach follows an example (p. 139) of Advanced Combinatorics by L. Comtet. The elaboration here differs in minor aspects and is sometimes more detailed, which was helpful to verify the example. - -Taylor series (part one): -The idea for this formula of the $n$-th derivative of $x^x$ with $x>0$ is based upon a clever Taylor series expansion. 
- -Recall a Taylor series expansion of a function $f(t)$ at a point $x$ is, assuming $f$ is sufficiently often differentiable, -\begin{align*} -f(t)=\sum_{j=0}^\infty \frac{D_x^jf(x)}{j!}(t-x)^j -\end{align*} -Here we use the differential operator $D_x^j:=\frac{d^j}{dx^j}$ and we will also use the coefficient-extraction operator $[x^k]$ to denote the coefficient of $x^k$ in a series. -Since -\begin{align*} -f(t+x)=\sum_{j=0}^\infty D_x^jf(x)\frac{t^j}{j!}\tag{1} -\end{align*} -we can describe the $n$-th derivative of a function $f$ at a point $x$ as -\begin{align*} -D_x^nf(x)=n![t^n]f(t+x)\tag{2} -\end{align*} - -Setting $f(x)=x^x, x>0$ we start according to the LHS of (1) with - \begin{align*} -f(x+t)=(x+t)^{x+t} -\end{align*} - and obtain with some elementary transformations and series expansion of the exponential function - \begin{align*} -f(x+t)&=(x+t)^{x+t}\\ -&=\exp({(x+t)\ln(x+t)})\\ -&=\exp\left({(x+t)\left(\ln(x)+\ln\left(1+\frac{t}{x}\right)\right)}\right)\\ -&=x^x\exp\left({t\ln(x)}\right)\exp\left({x\left(1+\frac{t}{x}\right)\ln\left(1+\frac{t}{x}\right)}\right)\\ -&=x^x\left(\sum_{i=0}^\infty\frac{\left(t\ln(x)\right)^i}{i!}\right) -\left(\sum_{j=0}^\infty\frac{\left(\left(1+\frac{t}{x}\right)\ln\left(1+\frac{t}{x}\right)\right)^j}{j!}x^j\right)\tag{3}\\ -\end{align*} - -In order to calculate the Taylor series expansion of the RHS of (1) we need to expand (3) in powers of $t$. We use the Ansatz -\begin{align*} -\frac{1}{j!}\left(\left(1+\frac{t}{x}\right)\ln\left(1+\frac{t}{x}\right)\right)^j -=\sum_{k=j}^\infty b_{k,j}\frac{\left(\frac{t}{x}\right)^k}{k!}\qquad\text{and}\qquad b_{0,0}=1 -\end{align*} -Note that since the series expansion of the logarithm has no constant term we start the series expansion with $\left(\frac{t}{x}\right)^j$, resp. with index $k=j$. - -Recurrence relation: $b_{k,j}$ -In order to determine $b_{k,j}$ we develop a recurrence relation. We set - \begin{align*} -B_j(z):=\frac{1}{j!}\left((1+z)\ln(1+z)\right)^j=\sum_{k=j}^\infty b_{k,j}\frac{z^k}{k!}\quad\qquad (j\geq 0)\tag{4} -\end{align*} - -Differentiating the LHS gives -\begin{align*} -D_zB_j(z)&=\frac{1}{(j-1)!}\left((1+z)\ln(1+z)\right)^{j-1}(1+\ln(1+z))\\ -&=B_{j-1}(z)+\frac{j}{1+z}B_j(z) -\end{align*} -Differentiating the RHS gives -\begin{align*} -D_zB_j(z)=\sum_{k=j}^\infty b_{k,j}\frac{z^{k-1}}{(k-1)!} -\end{align*} -Equating both sides and multiplying by $1+z$ gives -\begin{align*} -(1+z)\sum_{k=j}^\infty b_{k,j}\frac{z^{k-1}}{(k-1)!}&=(1+z)B_{j-1}(z)+jB_j(z)\\ -&=(1+z)\sum_{k=j-1}^\infty b_{k,j-1}\frac{z^{k}}{k!}+j\sum_{k=j}^\infty b_{k,j}\frac{z^{k}}{k!} -\end{align*} -Comparing coefficients by calculating $n![z^n]$ gives -\begin{align*} -b_{n+1,j}+nb_{n,j}=b_{n,j-1}+nb_{n-1,j-1}+jb_{n,j} -\end{align*} - -Collecting equal terms we finally get - \begin{align*} -b_{n+1,j}=(j-n)b_{n,j}+b_{n,j-1}+nb_{n-1,j-1}\qquad\qquad n,j\geq 1\tag{5} -\end{align*} - Since initial values can be easily calculated from (4) we get the following table for $b_{n,j}$ -\begin{array}{c|cccccc} -n\setminus j&1&2&3&4&5&6\\ -\hline -1&1\\ -2&1&1\\ -3&-1&3&1\\ -4&2&-1&6&1\\ -5&-6&0&5&10&1\\ -6&24&4&-15&25&15&1\\ -\end{array} - -Note: These values can be found in OEIS as A008296. They are called Lehmer-Comtet numbers and were stored in the archive by N. J. A. Sloane by referring precisely to the example we can see here. 
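-The recurrence (5) is easy to run mechanically. Here is a short Python sketch (illustrative only; it assumes just the boundary values $b_{0,0}=1$ and $b_{n,j}=0$ outside $0\le j\le n$) which reproduces the table above:
-from functools import lru_cache
-
-@lru_cache(maxsize=None)
-def b(n, j):
-    # Lehmer-Comtet numbers via (5): b(n+1, j) = (j-n) b(n, j) + b(n, j-1) + n b(n-1, j-1)
-    if n == 0 and j == 0:
-        return 1
-    if j < 1 or j > n:
-        return 0
-    m = n - 1
-    return (j - m) * b(m, j) + b(m, j - 1) + m * b(m - 1, j - 1)
-
-for n in range(1, 7):
-    print([b(n, j) for j in range(1, n + 1)])
-Each printed row should match the corresponding row of the table.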
-Taylor series (part two): -We are now ready to expand (3) further and obtain using (4) - \begin{align*} -f(x+t)&=(x+t)^{x+t}\\ -&=x^x\left(\sum_{i=0}^\infty\frac{\left(t\ln(x)\right)^i}{i!}\right) -\left(\sum_{j=0}^\infty\frac{\left(\left(1+\frac{t}{x}\right)\ln\left(1+\frac{t}{x}\right)\right)^j}{j!}x^j\right)\\ -&=x^x\left(\sum_{i=0}^\infty\frac{\left(t\ln(x)\right)^i}{i!}\right) -\left(\sum_{j=0}^\infty\sum_{k=j}^\infty b_{k,j}\frac{\left(\frac{t}{x}\right)^k}{k!}x^j\right)\tag{6}\\ -&=x^x\left(\sum_{i=0}^\infty\frac{\left(t\ln(x)\right)^i}{i!}\right) -\left(\sum_{k=0}^\infty\sum_{j=0}^k b_{k,j}x^j\frac{\left(\frac{t}{x}\right)^k}{k!}\right)\tag{7}\\ -&=x^x\sum_{l=0}^\infty\left( \sum_{{i+k=l}\atop{i,k\geq 0}}\frac{\left(\ln(x)\right)^i}{i!}\cdot\frac{1}{k!}\sum_{j=0}^kb_{k,j}x^{j-k}\right)t^l\tag{8}\\ -&=x^x\sum_{l=0}^\infty\left(\sum_{i=0}^l\binom{l}{i}\left(\ln(x)\right)^i\sum_{j=0}^{l-i}b_{l-i,j}x^{j-l+i}\right)\frac{t^l}{l!}\tag{9}\\ -&=x^x\sum_{l=0}^\infty\left(\sum_{i=0}^l\binom{l}{i}\left(\ln(x)\right)^i\sum_{j=0}^{l-i}b_{l-i,l-i-j}x^{-j}\right)\frac{t^l}{l!}\tag{10}\\ -\end{align*} - -Comment: - -In (6) we use the representation (4). -In (7) we exchange in the right-hand double series the indices $k$ and $j$ respecting $0\leq j\leq k<\infty$. -In (8) we introduce the index $l$ to collect the terms according to powers of $t$. -In (9) we write the series in $t$ as exponential generating series by introducing $\binom{l}{i}$. -In (10) we exchange the order of the elements of the rightmost sum by letting $j\rightarrow l-i-j$. - -$n$-th derivative of $x^x$: -Now it's time to harvest. We obtain the $n$-th derivative of $f(x)=x^x$ from (2) and (10). -The $n$-th derivative of $x^x$ is -\begin{align*} -D_x^n x^x=x^x\sum_{i=0}^n\binom{n}{i}(\ln(x))^i\sum_{j=0}^{n-i}b_{n-i,n-i-j}x^{-j}\tag{11} -\end{align*} - with $b_{n,j}$ the Lehmer-Comtet numbers given in (5). - -Example: $n=2$ -Let's look at a small example. Letting $n=2$ we obtain from (11) and the table with $b_{n,j}$: -\begin{align*} -D_x^2x^x&=x^x\sum_{i=0}^2\binom{2}{i}(\ln(x))^i\sum_{j=0}^{2-i}b_{2-i,2-i-j}x^{-j}\\ -&=x^x\left(\binom{2}{0}\sum_{j=0}^2b_{2,2-j}x^{-j}+\binom{2}{1}\ln(x)\sum_{j=0}^1b_{1,1-j}x^{-j}\right.\\ -&\qquad\qquad\left.+\binom{2}{2}\left(\ln(x)\right)^2\sum_{j=0}^0b_{0,0-j}x^{-j}\right)\\ -&=x^x\left(\left(b_{2,2}+b_{2,1}\frac{1}{x}+b_{2,0}\frac{1}{x^2}\right)+2\ln(x)\left(b_{1,1}+b_{1,0}\frac{1}{x}\right) -+(\ln(x))^2b_{0,0}\right)\\ -&=x^x\left(1+\frac{1}{x}+2\ln(x)+\left(\ln(x)\right)^2\right) -\end{align*} -in accordance with the result of Wolfram Alpha.<|endoftext|> -TITLE: Find all irreducible polynomials of degree $2$ over $\mathbb{Z}_5$ -QUESTION [5 upvotes]: Obviously, if I write all the possible ones and try the roots I'd get a LOT of polynomials $(125)$, and I'd have to test $5$ roots for each of them, which would be a LOT. Is there a better approach? -I must also do it for degree $\le 3$ over $\mathbb{Z}_3$. -Do you guys have any ideas to make it easier? -Please remember that I'm on a ring theory course. - -REPLY [2 votes]: There are a few tricks you can use: - -A polynomial $p(x)$ is irreducible if and only if $p(x-a)$ is irreducible. -A polynomial $p(x)=p_0+p_1x+\cdots+p_nx^n$ of degree $>1$ with $p_0\neq0$ is irreducible if and only if its reciprocal -$$\tilde{p}(x)=x^np(\frac1x)=p_n+p_{n-1}x+\cdots+p_1x^{n-1}+p_0x^n$$ -is irreducible. -A polynomial $p(x)$ is irreducible if and only if $p(ax), a\neq0$, is irreducible. - -Proving these facts is easy (leaving it to you). 
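-(For degrees $2$ and $3$, irreducibility over $\Bbb{F}_p$ is just the absence of zeros in $\Bbb{F}_p$, so everything below can also be brute-forced. A tiny illustrative Python sketch counting the monic irreducible quadratics over $\Bbb{F}_5$ and cubics over $\Bbb{F}_3$:)
-from itertools import product
-
-def has_no_root(coeffs, p):
-    # coeffs = (c0, c1, ..., 1) is a monic polynomial over F_p; in degree
-    # 2 or 3 a polynomial is irreducible exactly when it has no root in F_p.
-    def value(x):
-        r = 0
-        for c in reversed(coeffs):
-            r = (r * x + c) % p
-        return r
-    return all(value(x) != 0 for x in range(p))
-
-quadratics = [c for c in product(range(5), repeat=2) if has_no_root(c + (1,), 5)]
-cubics = [c for c in product(range(3), repeat=3) if has_no_root(c + (1,), 3)]
-print(len(quadratics), len(cubics))    # 10 and 8, matching the counts below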
The point is that these allow you to produce a lot of other irreducible polynomials if you have found one. Normally we are only interested in monic irreducible polynomials. The first trick takes a monic to a monic, but you do need to rescale a polynomial if you use one of the other tricks. -The case of quadratics over $\Bbb{F}_5$ becomes a quickie. You easily check that both $x^2-2$ and $x^2-3$ are irreducible. Therefore the ten irreducible ones you are looking for are -$$ -p_a(x)=(x-a)^2-2,\quad q_a(x)= (x-a)^2-3 -$$ -with $a=0,1,2,3,4$. You do need to check for duplicates. In this case that is easy. The coefficients of linear terms within the two families are different. Observe that the polynomials $p_a$ all have discriminant $8\equiv3$ but the polynomials $q_a$ have discriminant $12\equiv2$. -If you are familiar with extension fields you can also think of them as follows. If we view $K=\Bbb{F}_{25}$ as $\Bbb{F}_5[\sqrt2]$, then the elements $a\pm\sqrt2$ have minimal polynomials $p_a(x)$, and because modulo five $\sqrt3=\sqrt8=2\sqrt2$ the polynomials $q_a(x)$ have $a\pm 2\sqrt2$ as their zeros. Those zeros account for all the elements in $K\setminus\Bbb{F}_5$ so we are done. -Finding all the irreducible cubics over $\Bbb{F}_3$ takes more tricks. For cubics you can test for irreducibility by checking the absence of zeros in $\Bbb{F}_3$. So you quickly see that -$$ -p(x)=x^3-x+1 -$$ -is irreducible. However, the first trick doesn't work, because $p(x-a)=p(x)$ for both $a=1,2$. The last trick does give us $q(x):=-p(-x)=x^3-x-1$ as another monic irreducible. We need the second trick as well. Do observe that the first trick does work with the reciprocals $\tilde{p}(x)$ and $\tilde{q}(x)$. -Can you show that there are exactly 8 monic irreducible cubics? Go find them!<|endoftext|> -TITLE: Prove the uniqueness of the Poisson equation with Robin boundary condition -QUESTION [6 upvotes]: We have $\Delta u=f$ in $D$, and $\dfrac{\partial u}{\partial n}+au=h$ on the boundary of $D$, where $D$ is a domain in three dimensions and $a$ is a positive constant. $\dfrac{\partial u}{\partial n}=\nabla u\cdot n$ ($n$ is the normal vector). -My thoughts: Suppose there are $u_1$ and $u_2$ satisfying the above equations. Let $w=u_1-u_2$; then we have $\Delta w=0$ in $D$, and $\dfrac{\partial w}{\partial n}=-aw$ on the boundary of $D$. The maximum modulus principle may be useful but I don't know where to put it in. And the energy method seems not helpful in this question. -Any help would be appreciated! 
-But also by the divergence theorem -$$\iiint_D \div(w \grad w) dV = \iint_{\partial D} w \grad w \cdot {\bf dS} = \iint_{\partial D} w\grad w \cdot {\bf n} ds = \iint_{\partial D} -aw^2 ds \leq 0$$ -Hence both sides vanish: $$\iiint_D (\grad w)^2 dV = 0 \implies \grad w = 0 \mbox{ in } D, \qquad \iint_{\partial D} aw^2 \, ds = 0.$$ -So $w$ is constant in $D$ and vanishes on $\partial D$; since $a>0$, this forces $w \equiv 0$, and the solution is unique.<|endoftext|> -TITLE: Difference between $\frac{df}{dx}$, $\frac{\Delta f}{\Delta x}$, and $\frac{\partial f}{\partial x}$ -QUESTION [7 upvotes]: [Beginning calculus question.] I've just been introduced to a number of ways of representing changes in a function value with respect to some variable in multivariable calculus. -I don't get the difference between $\frac{df}{dx}$, $\frac{\Delta f}{\Delta x}$, and $\frac{\partial f}{\partial x}$. Are these all the same, or do they fall into different categories of objects, or something else? Does the meaning of these objects depend on the context in which they appear? Does the $\frac{\square}{\square}$ actually mean division in all three cases? -As far as I can tell, they all represent the rate at which $f$ changes as $x$ changes. What am I missing? - -REPLY [10 votes]: $\Delta f$ is the change in the function $f$ that corresponds to $\Delta x$, a change in the variable $x$. For example let $f=f(x)$ describe the height, in feet, of a sapling $x$ months after planting. Say $f(0)=4$ and $f(6)=6$. Then we know that in $\Delta x = 6-0 = 6$ months the plant grew a total of $\Delta f = f(6) - f(0) = 6-4 = 2$ feet. -So then you might say, "that's good to know. But how fast did the sapling grow during those 6 months?" The average rate of growth is simply $\frac {\Delta f}{\Delta x} = \frac {2 \text{ feet}}{6 \text{ months}} = \frac 13$ feet per month. This quantity is what the rate of growth of the tree would be if it grew at a constant rate over the entire 6 months. Notice that this is calculated exactly as the slope of a line is ($m=\frac{\Delta y}{\Delta x}$) -- that's not a coincidence. The number $\frac{\Delta f}{\Delta x}$ is called the average rate of change of $f$. -But, being the intelligent reader that you undoubtedly are, you might then realize that the sapling may not have grown at a constant rate. This average rate of change doesn't tell us anything about how much the tree grew at the beginning of the 6 months vs the middle vs the end of that 6 months. Maybe there was a period of lots of rain for a few days at one point and you want to know how that affected the growth rate. The natural thing to do then is to break that total $\Delta x = 6$ into chunks. Maybe into 1 month chunks or 1 week chunks or even 1 day chunks and then you can find the average rate of growth over each of those smaller periods of time. That gives you more info on how the rate of growth changed over the total time period. -We see that the smaller the "chunks" we break the total period of time into, the more info we have on how the sapling grows during that time period. Physically speaking there is a smallest period of time where we can actually accurately measure a change in the growth of the tree, but mathematically speaking there's nothing to stop us from breaking the time periods into smaller and smaller chunks. This process is called a limit. 
We define the limit of an average rate of change (if it exists) as the derivative of $f$: $$\frac {df}{dx} := \lim_{\Delta x \to 0} \frac{\Delta f}{\Delta x}$$ When looking at the above definition remember that $\Delta f$ is a function of $\Delta x$ such that $\Delta f \to 0$ as $\Delta x\to 0$. This quantity $\frac{df}{dx}$ gives the instantaneous rate of change of the function $f$. Basically it tells you how fast the sapling is growing at a particular instant. -One thing to notice about the above is that I do not define $df$ and $dx$, but only $\frac{df}{dx}$. That's because (at least in beginning calculus) $df$ and $dx$ are not really independent objects. The notation $\frac{df}{dx}$ is just meant to remind you that the derivative function is defined as a limit of a fraction -- but that doesn't mean that $\frac{df}{dx}$ is a fraction itself. It's not. -Now for $\frac{\partial f}{\partial x}$. Consider a surface given by $f(x,y)$. Now we want to know something about the rate of change of the function $f$ at a particular point $(x,y,f(x,y))$. But what does it even mean to know the rate of change of a multivariable function? Well there's a precise meaning as a multivariable linear transformation, but let's just take an easier approach. Let's just find the rate of change of $f$ in the $x$ and $y$ directions. Here's how. -Cut the surface $f=f(x,y)$ by a plane that is parallel to both the $x$ and $z$ directions (thus it will be constant in the $y$ direction). Then the intersection of this plane and the surface will be a curve in the plane. That is an essentially single-variable function. Then we know how to take the derivative of a single variable function at a point. This number is called the partial derivative of $f$ with respect to $x$ (at the point $(x,y,f(x,y))$) and is denoted $\frac{\partial f}{\partial x}$. - -You can of course do the same thing in the $y$ direction to get $\frac{\partial f}{\partial y}$. The thing to notice here is that neither (nor in fact both) $\frac{\partial f}{\partial x}$ nor $\frac{\partial f}{\partial y}$ gives you all of the information about the "rate of change" of $f$. That's why we call them partial derivatives.<|endoftext|> -TITLE: Polar equation of an ellipse given the origin coordinates and major and minor axis lengths? -QUESTION [6 upvotes]: I've been trying to create a polar equation that will give me all points on an ellipse with the independent variable being theta and the dependent variable being the radius, but I'm having a great deal of trouble wrapping my mind around how to accomplish such a feat. -The ellipse is going to be defined by its origin's x and y positions, its major axis length, and its minor axis length. It can be assumed that in every case the major axis is perfectly vertical and the minor axis is perfectly horizontal. - -Not really relevant to the answer, but just a little more background information. I am trying to program hit detection for a game that uses ellipses for the hitbox boundaries. The reason I need an equation like this is that I am going to be determining whether an object is within the hitbox by comparing the distance from the ellipse's center to a point on the object with the distance from the ellipse's center to the point along the ellipse in the direction of the point on the object. - -Any feedback is appreciated. - -REPLY [3 votes]: The right way to check if a point is inside or outside the ellipse is to compute -$$\frac{(x-x_c)^2}{a^2}+\frac{(y-y_c)^2}{b^2}-1$$ and test the sign: negative inside, positive outside. 
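-For the game use case the sign test translates directly into a cheap hit test. A minimal Python sketch (the function and parameter names are hypothetical; note the semi-axes are half the axis lengths):
-def inside_ellipse(px, py, xc, yc, rx, ry):
-    """True if (px, py) lies inside or on the axis-aligned ellipse with
-    centre (xc, yc), horizontal semi-axis rx and vertical semi-axis ry."""
-    dx = (px - xc) / rx
-    dy = (py - yc) / ry
-    return dx * dx + dy * dy <= 1.0
-
-# Example: hitbox centred at the origin, half-width 1, half-height 2
-# (vertical major axis, as in the question).
-print(inside_ellipse(0.5, 1.0, 0.0, 0.0, 1.0, 2.0))   # True
-print(inside_ellipse(1.5, 0.0, 0.0, 0.0, 1.0, 2.0))   # False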
-Polar coordinates aren't really helpful.<|endoftext|> -TITLE: Power automorphisms which is not inner -QUESTION [8 upvotes]: I want to know an example of a small order non-abelian $p$-group $G$ with a power automorphism which is not inner, i.e. an automorphism of the form $g\mapsto g^k$ for all $g\in G$, but non-inner. -In the examples I was initially considering, the maps were $g\mapsto g^{-1}$ which will never be automorphisms of non-abelian groups. -Any good example of this? Thanks for interest. - -REPLY [4 votes]: A simple GAP computation yields answers. I used this program: -isNonInnerPowerAuto := function(G, k) - local gens, imgs, hom; - gens := GeneratorsOfGroup(G); - imgs := List(gens, x->x^k); - hom := GroupHomomorphismByImages(G, G, gens, imgs); - if hom = fail then return false; fi; - if not IsBijective(hom) then return false; fi; - if IsInnerAutomorphism(hom) then return false; fi; - return ForAll(G, g -> g^hom = g ^ k); -end; - -for n in [6..127] do - if not IsPrimePowerInt(n) then continue; fi; - for i in [1..NrSmallGroups(n)] do - G := SmallGroup(n,i); - if IsAbelian(G) then continue; fi; - for k in [2..Exponent(G)-2] do - if isNonInnerPowerAuto(G,k) then - Print("for group ", IdGroup(G),", ",k, " is an auto, non-inner\n"); - fi; - od; - od; -od; - -Which resulted in this output: -for group [ 32, 5 ], 5 is an auto, non-inner -for group [ 32, 12 ], 5 is an auto, non-inner -for group [ 32, 17 ], 5 is an auto, non-inner -for group [ 32, 17 ], 13 is an auto, non-inner -for group [ 32, 38 ], 5 is an auto, non-inner -for group [ 64, 3 ], 5 is an auto, non-inner -for group [ 64, 4 ], 5 is an auto, non-inner -for group [ 64, 5 ], 5 is an auto, non-inner -for group [ 64, 17 ], 5 is an auto, non-inner -for group [ 64, 27 ], 5 is an auto, non-inner -for group [ 64, 27 ], 13 is an auto, non-inner -for group [ 64, 29 ], 5 is an auto, non-inner -for group [ 64, 29 ], 9 is an auto, non-inner -for group [ 64, 29 ], 13 is an auto, non-inner -for group [ 64, 30 ], 5 is an auto, non-inner -for group [ 64, 30 ], 13 is an auto, non-inner -for group [ 64, 31 ], 9 is an auto, non-inner -for group [ 64, 44 ], 5 is an auto, non-inner -for group [ 64, 44 ], 9 is an auto, non-inner -for group [ 64, 44 ], 13 is an auto, non-inner -for group [ 64, 51 ], 5 is an auto, non-inner -for group [ 64, 51 ], 9 is an auto, non-inner -for group [ 64, 51 ], 13 is an auto, non-inner -for group [ 64, 51 ], 21 is an auto, non-inner -for group [ 64, 51 ], 25 is an auto, non-inner -for group [ 64, 51 ], 29 is an auto, non-inner -for group [ 64, 86 ], 5 is an auto, non-inner -for group [ 64, 87 ], 5 is an auto, non-inner -for group [ 64, 89 ], 5 is an auto, non-inner -for group [ 64, 103 ], 5 is an auto, non-inner -for group [ 64, 105 ], 5 is an auto, non-inner -for group [ 64, 112 ], 5 is an auto, non-inner -for group [ 64, 114 ], 5 is an auto, non-inner -for group [ 64, 115 ], 5 is an auto, non-inner -for group [ 64, 116 ], 5 is an auto, non-inner -for group [ 64, 117 ], 5 is an auto, non-inner -for group [ 64, 126 ], 5 is an auto, non-inner -for group [ 64, 127 ], 5 is an auto, non-inner -for group [ 64, 184 ], 5 is an auto, non-inner -for group [ 64, 184 ], 13 is an auto, non-inner -for group [ 64, 185 ], 5 is an auto, non-inner -for group [ 64, 185 ], 9 is an auto, non-inner -for group [ 64, 185 ], 13 is an auto, non-inner -for group [ 64, 248 ], 5 is an auto, non-inner -for group [ 81, 3 ], 4 is an auto, non-inner -for group [ 81, 3 ], 7 is an auto, non-inner -for group [ 81, 4 ], 4 is an auto, non-inner -for group 
[ 81, 4 ], 7 is an auto, non-inner -for group [ 81, 6 ], 4 is an auto, non-inner -for group [ 81, 6 ], 7 is an auto, non-inner -for group [ 81, 6 ], 13 is an auto, non-inner -for group [ 81, 6 ], 16 is an auto, non-inner -for group [ 81, 6 ], 22 is an auto, non-inner -for group [ 81, 6 ], 25 is an auto, non-inner -for group [ 81, 14 ], 4 is an auto, non-inner -for group [ 81, 14 ], 7 is an auto, non-inner - -The smallest examples are of order 32. Note that $k$ is always coprime -to the group order, so if it induces an automorphism, it is certainly not inner. -To work with any of them by hand, you can ask GAP for -a presentation: -gap> G := SmallGroup(32,5); - -gap> StructureDescription(G); -"(C8 x C2) : C2" -gap> gfp:=Image(IsomorphismFpGroup(G)); - -gap> RelatorsOfFpGroup(gfp); -[ F1^2*F4^-1, F2^-1*F1^-1*F2*F1*F3^-1, F3^-1*F1^-1*F3*F1, F4^-1*F1^-1*F4*F1, - F5^-1*F1^-1*F5*F1, F2^2, F3^-1*F2^-1*F3*F2, F4^-1*F2^-1*F4*F2, - F5^-1*F2^-1*F5*F2, F3^2, F4^-1*F3^-1*F4*F3, F5^-1*F3^-1*F5*F3, F4^2*F5^-1, - F5^-1*F4^-1*F5*F4, F5^2 ] - -Or, ask for a power-conjugation presentation (this omits trivial conjugation relations, -e.g. g4^g3 = g4; look at the documentation of PrintPcpPresentation for details; ah yeah, -and note that it is part of the polycyclic package, which is, however, installed and -loaded by default in a regular GAP installation): -gap> PrintPcpPresentation(PcGroupToPcpGroup(G)); -g1^2 = g4 -g2^2 = id -g3^2 = id -g4^2 = g5 -g5^2 = id -g2 ^ g1 = g2 * g3 - -Or ask for an isomorphic permutation group (I give two, one is the -regular representation, one a somewhat smaller one): -gap> H := Image(IsomorphismPermGroup(G));; -gap> SmallGeneratingSet(H); -[ (1,18,26,17,6,29,14,28)(2,13,30,23,10,3,20,31)(4,27,16,7,15,32,5,19)(8,24,22, - 12,21,11,9,25), (1,24)(2,28)(3,15)(4,13)(5,31)(6,11)(7,21)(8,19)(9,32)(10, - 17)(12,26)(14,25)(16,23)(18,30)(20,29)(22,27) ] -gap> SmallGeneratingSet(Image(SmallerDegreePermutationRepresentation(H))); -[ (1,2,4,7,5,8,11,14)(3,6,9,12,10,13,15,16), (2,6)(7,12)(8,13)(14,16), - (1,3)(2,6)(4,9)(5,10)(7,12)(8,13)(11,15)(14,16) ]<|endoftext|> -TITLE: Why is $\sqrt {12} = 2 \sqrt 3$? -QUESTION [6 upvotes]: Why is $\sqrt {12} = 2 \sqrt 3$? Is it obvious? If we consider the function $f(s) = s^2$, it is injective on positive numbers, so we obtain the conclusion. But at the same time it is an equality between irrational numbers. Suppose that we only know how to compute square roots. - -REPLY [2 votes]: Let's consider -\begin{aligned} -f : \mathbf{R_+} &\to \mathbf{R_+}\\ -x &\to x^2 -\end{aligned} -then $f$ is a bijection, for it is continuous, strictly increasing, and unbounded. -$f(\sqrt{12}) = f(2\sqrt{3}) = 12$, so $\sqrt{12} = 2\sqrt{3}$. -Note that $\sqrt{12}$ is just a notation; there is no problem in denoting one number by different notations. For example, we can also use -the continued fraction $[3; \overline{2,6}]$ to denote $\sqrt{12}$.<|endoftext|> -TITLE: Prove the identity $\binom{2n+1}{0} + \binom{2n+1}{1} + \cdots + \binom{2n+1}{n} = 4^n$ -QUESTION [8 upvotes]: I've worked out a proof, but I was wondering about alternate, possibly more elegant ways to prove the statement. 
This is my (hopefully correct) proof: -Starting from the identity $2^m = \sum_{k=0}^m \binom{m}{k}$ (easily derived from the binomial theorem), with $m = 2n$: -$2^{2n} = 4^n = \binom{2n}{0} + \binom{2n}{1} + \cdots + \binom{2n}{2n-1} + \binom{2n}{2n}$ -Applying the property $\binom{m}{k} = \binom{m}{m-k}$ to the second half of the list of summands in RHS above: -$4^n = \binom{2n}{0} + \binom{2n}{1} + \cdots + \binom{2n}{n-1} + \binom{2n}{n} + \underbrace{\binom{2n}{n-1} +\cdots+ \binom{2n}{1} + \binom{2n}{0}}_{\binom{m}{k} = \binom{m}{m-k} \text{ has been applied}}$ -Rearranging the above sum by alternately taking terms from the front and end of the summand list in RHS above (and introducing the term $\binom{2n}{-1} = 0$ at the beginning just to make explicit the pattern being developed): -$4^n = (\binom{2n}{-1} + \binom{2n}{0}) + (\binom{2n}{0} + \binom{2n}{1}) + \cdots + (\binom{2n}{n-1} + \binom{2n}{n})$ -Finally, using the property $\binom{m}{k} + \binom{m}{k-1} = \binom{m+1}{k}$ on the paired summands, we get the desired result: -$4^n = \binom{2n+1}{0} + \binom{2n+1}{1} + \cdots + \binom{2n+1}{n}$ - -REPLY [2 votes]: Let $A$ be the powerset (i.e., the set of subsets) of $\{1,\ldots,2n\}$. Let $B$ be the set of all subsets of $\{1,\ldots,2n+1\}$ with at most $n$ elements. Then "clearly" $|A|=2^{2n}=4^n$ and $|B|=\sum_{k=0}^n{2n+1\choose k}$. -Define the following map $f\colon B\to A$: -$$f(S)=\begin{cases}S&\text{if }2n+1\notin S\\\{1,\ldots,2n+1\}\setminus S&\text{if }2n+1\in S\end{cases} $$ -and define $g\colon A\to B$ by -$$g(S)=\begin{cases}S&\text{if }|S|\le n\\\{1,\ldots,2n+1\}\setminus S&\text{if }|S|>n\end{cases} $$ -Finally, verify that $f$ and $g$ are inverses of each other, hence $|A|=|B|$.<|endoftext|> -TITLE: Metric spaces and normed vector spaces -QUESTION [8 upvotes]: While studying, I learned that there are some theorems and definitions that need a metric structure on the space in which we are working; for example, the definition of local maximum needs a metric space, while the theorems that state the equivalence of local and global maxima of concave functionals need a normed vector space. -I know that every normed vector space has a metric structure and that distances can be generated by norms, so what are the differences between these two concepts? -Is there a hierarchy between them, i.e. is the normed vector space the general concept and the metric space the particular one? -How should I choose where to work when dealing with a problem? - -REPLY [10 votes]: Metric spaces are much more general than normed spaces. Every normed space is a metric space, but not the other way round. This can happen for two reasons: - -Many metric spaces are not vector spaces. Since a norm is always taken over a vector space, these can't be normed spaces. -Even if we're dealing with a vector space over $\mathbb{R}$ or $\mathbb{C}$, the metric structure might not "play nice" with the linear structure. For example, you might take the discrete metric on $\mathbb{R}$. This is a metric but is certainly not induced by a norm. - -In terms of what to choose when dealing with a specific problem... As stated above, if you're not working in a vector space you have no hope of finding a norm. If you are, then norms are usually more useful because they allow you to take advantage of the linear structure when dealing with distances. But often it's actually more useful to forget this structure, in which case metrics are fine... 
Really depends on the application.<|endoftext|> -TITLE: How to prove $\sum_{i=1}^ki^k(-1)^{k-i}\binom {k+1}{i} =(k+1)^k$ -QUESTION [9 upvotes]: How to prove $\sum_{i=1}^ki^k(-1)^{k-i}\binom {k+1}{i} =(k+1)^k$ -where k is a positive integer. -Any hints can help. - -REPLY [2 votes]: Suppose we seek to verify that -$$\sum_{k=0}^n k^n (-1)^{n-k} {n+1\choose k} = -(n+1)^n.$$ -Re-write this as -$$\sum_{k=0}^{n+1} k^n (-1)^{n-k} {n+1\choose k} = 0.$$ -Introduce -$$k^n = -\frac{n!}{2\pi i} -\int_{|z|=\epsilon} -\frac{1}{z^{n+1}} \exp(kz) \; dz.$$ -This yields for the sum -$$\frac{n!}{2\pi i} -\int_{|z|=\epsilon} -\frac{1}{z^{n+1}} -\sum_{k=0}^{n+1} (-1)^{n-k} {n+1\choose k} -\exp(kz) \; dz -\\ = \frac{n!}{2\pi i} -\int_{|z|=\epsilon} -\frac{1}{z^{n+1}} (\exp(z)-1)^{n+1} \; dz.$$ -This is -$$[z^n] (\exp(z)-1)^{n+1} = 0$$ -because $$\exp(z)-1 = z + \frac{z^2}{2} + \frac{z^3}{6} +\cdots$$ -This is essentially the same as the answer by @MarkusScheuer which I upvoted.<|endoftext|> -TITLE: What's between the finite and the infinite? -QUESTION [44 upvotes]: I'm wondering if there are any non-standard theories (built upon ZFC with some axioms weakened or replaced) that make formal sense of hypothetical set-like objects whose "cardinality" is "in between" the finite and the infinite. In a world like that non-finite may not necessarily mean infinite and there might be a "set" with countably infinite "power set". - -REPLY [4 votes]: Let me make a few remarks about the constructive aspects. The standard definition is the following: a set $X$ is finite if there is a natural number $n$ and a bijection between $X$ and $\{ i \in \mathbb{N} : i < n \}$. Some of the expected properties are true: - -The disjoint union of two finite sets is finite. -The product of two finite sets is finite. -The set of maps between two finite sets is finite. - -On the other hand, there are some strange facts: - -Subsets of finite sets may not be finite. -Quotients of finite sets may not be finite. - -For example, given a proposition $\varphi$, $\{ i \in \mathbb{N} : \varphi \land i < 1 \}$ is finite if and only if $\varphi \lor \lnot \varphi$ holds. (This is because equality in $\mathbb{N}$ is decidable.) Thus one is tempted to look for weaker notions of finiteness. -Here is one alternative. The class of Kuratowski-finite sets is defined inductively as follows: - -The empty set is Kuratowski-finite. -Every singleton set is Kuratowski-finite. -The union of two Kuratowski-finite sets is Kuratowski-finite. - -It is true that the quotient of a Kuratowski-finite set is automatically Kuratowski-finite. Indeed, every Kuratowski-finite set is in bijection with the quotient of some finite set – thus, one might call them finitely generated sets. In particular, Kuratowski-finiteness is strictly more general than finiteness. On the other hand, subsets of Kuratowski-finite sets may not be Kuratowski-finite.<|endoftext|> -TITLE: What's true in $\mathbb{R}^4$, false in $\mathbb{R}^3$ and uninteresting in $\mathbb{R}^5$? -QUESTION [12 upvotes]: What are some interesting and easy-to-understand (for non-differential geometers) facts about subobjects of $\mathbb{R}^4$ that are not only false in $\mathbb{R}^3$, but also specific to the structure of $\mathbb{R}^4$ and maybe do not easily or naturally generalize to higher dimensions? - -REPLY [6 votes]: Only for $n=4$ does there exist an open set $U\subseteq\mathbb{R}^n$ that is homeomorphic to $\mathbb{R}^n$ but not diffeomorphic to $\mathbb{R}^n$ (a small exotic $\mathbb{R}^4$). 
What this means is not too difficult to explain (no need to explain what a manifold is, only what a homeomorphism and a diffeomorphism are between open subsets of $\mathbb{R}^n$). I don't think it qualifies as "uninteresting for $\mathbb{R}^5$", though (it's definitely not a triviality in any dimension other than $1$), but you seemed to say "false" was also OK.<|endoftext|> -TITLE: Slice of opposite category equivalent to coslice of category? -QUESTION [7 upvotes]: Let $\mathcal{C}$ be some category, and $A,B\in\mathcal{C}$. -We have the notions of the slice category $\mathcal{C}/A$ whose objects are morphisms $A'\to A$ and the coslice category $A/\mathcal{C}$ whose objects are morphisms $A\to A'$. -I am pretty sure that $$\mathcal{C}^{\mathrm{op}}/A\equiv A/\mathcal{C}$$ but I'm worried that I might have got an arrow the wrong way around at some point, and that the actual result is $$\mathcal{C}^{\mathrm{op}}/A\equiv (A/\mathcal{C})^{\mathrm{op}}.$$ -The first one is what I hope to be true, since I would like it to be the case that a functor $$\mathcal{C}^{\mathrm{op}}/B\to \mathcal{C}^{\mathrm{op}}/A$$ gives a functor $$B/\mathcal{C}\to A/\mathcal{C}$$ and vice versa, but obviously (I hope) if the second statement is the true one then this functor will flip and be $A/\mathcal{C}\to B/\mathcal{C}$. - -REPLY [4 votes]: Guess you're out of luck: indeed, it is true that $(A/\mathcal C)^\text{op} \cong \mathcal C^\text{op}/A$. -The isomorphism between these two categories is given by -$$F \colon (A/\mathcal C)^\text{op} \to \mathcal C^\text{op}/A$$ - -where -$$F(f) = f$$ -for every $f \colon A \to X$ in $\mathcal C$ -and -$$F(\alpha)=\alpha$$ -for every $\alpha \colon f \to g$ in $(A/\mathcal C)[f,g]$, with $f \colon A \to X$ and $g \colon A \to Y$. - -This functor is clearly well defined on objects because $f \colon A \to X$, which is an object of $A/\mathcal C$, is also an object of $\mathcal C^\text{op}/A$. -On the other hand if $\alpha \in (A/\mathcal C)[f,g]$, with $f \colon A \to X$ and $g \colon A \to Y$ in $\mathcal C$, then $\alpha \colon X \to Y$ and $g=\alpha\circ f$ in $\mathcal C$. -From this, since -$$g=\alpha\circ f=f \circ^\text{op} \alpha$$ -it follows that $\alpha \in (\mathcal C^\text{op}/A)[g,f]$ (this shows that the map is contravariant, that is that $F$ is a map from the graph $(A/\mathcal C)^\text{op}$ to $\mathcal C^\text{op}/A$). -Functoriality follows by simple computations, and this is an isomorphism since it is bijective both on objects and on morphisms.<|endoftext|> -TITLE: Does the integral test only hold for a continuous, monotone decreasing function? -QUESTION [6 upvotes]: In other words, does the inequality, -$$\int_1^{i+1}f(x) \, dx \leq \sum_{n=1}^i a_n \leq \int_1^i f(x) \, dx+a_1,$$ -only hold when $f(x)$ is continuous, monotone and decreasing? -Also $f(n)=a_n$. - -REPLY [8 votes]: A theorem of Hardy states that for a nonnegative function $f$ (which is not necessarily monotone), the sum $\sum_{j=1}^nf(j)$ and the integral $\int_0^nf(t)\, dt $ converge or diverge together if $f$ has a continuous derivative satisfying -$$\int_0^{\infty}|f'(t)|dt < \infty.$$ -The integral test also holds under weaker conditions where it is sufficient only that the total variation of $f$ on $[0,n]$ is uniformly bounded in $n$. -A proof of Hardy's result is as follows: -Let $\{t\}$ denote the fractional part of $t$. 
Then
-$$\int_0^n \{t\}f'(t) \, dt = \sum_{j=1}^n \int_{j-1}^j(t-j+1)f'(t) \,dt.$$
-Integrating by parts,
-$$\int_0^n \{t\}f'(t) \, dt = \sum_{j=1}^n \left[(t-j+1)f(t)|_{j-1}^j-\int_{j-1}^jf(t) \,dt \right] \\ = \sum_{j=1}^n f(j)-\int_{0}^nf(t)\,dt.$$
-Hence, for all $n$
-$$|\sum_{j=1}^n f(j)-\int_{0}^nf(t)\,dt| \leqslant \int_0^n|\{t\}f'(t)| \, dt \leqslant \int_0^{\infty}|f'(t)| \, dt < \infty.$$
-From this final inequality, we can make comparisons to show that the integral and the sum must converge or diverge together.<|endoftext|>
-TITLE: Prove that $X,Y,Z$ lie on a single line.
-QUESTION [8 upvotes]: Let $ABCD$ be a convex quadrilateral such that no two opposite sides are parallel to each other. Denote by $Q$ the intersection of lines $AD$ and $BC$ and by $R$ the intersection of lines $AB$ and $CD$. Let $X,Y,Z$ be midpoints of $AC, BD$ and $QR$ respectively. Prove that $X,Y,Z$ lie on the same line.
-I am not getting any approach to solve this question. Please help.

-REPLY [4 votes]: This is also known as the Newton-Gauss line of $ABCD$. The usual proof is either by area considerations or by Menelaus' theorem. However, I present a purely synthetic approach.
-Let $L,M,N$ be the midpoints of $\overline{AB},\overline{AR},\overline{BR}$. By considering midpoints in $\triangle ABC$, we have $XL\parallel BC$, and with $\triangle BQR$, we have $BQ\parallel NZ\implies XL\parallel NZ$. In a similar way, we can obtain $LY\parallel MZ$ and $MX\parallel NY$. But now Pappus' theorem on the hexagon $XMZNYL$ implies that $X,Y,Z$ are collinear.<|endoftext|>
-TITLE: Ramification of primes in number fields in terms of valuations
-QUESTION [5 upvotes]: Let $L/K$ be a number field extension. Is there a definition of ramification of primes (both infinite and finite) in terms of the valuations induced?
-This answer gives a definition for infinite primes:

-Now fix an infinite place $v$ on $K$, let $L$ be a finite field extension of $K$, and let $w$ be an extension of $v$ to $L$. The extension is said to ramify at $w$ iff $\#\{\tau\in Gal(L,K)\mid w\circ\tau=w\}>1$. But in reality this all simplifies to what Keenan said. The only possibilities for $\tau$ satisfying $w\circ\tau=w$ are the identity map and complex conjugation.

-The natural way to extend this to finite places does not quite seem to work (unless I am making a mistake):
-Let $L = \Bbb Q[i], K = \Bbb Q$ and the prime $p = (3)$. This is inert in the extension and is fixed under the automorphism (complex conjugation) of $L/K$. According to the above definition, this would seem to imply that $(3)$ is ramified in $L/K$.
-It is not very hard to find a definition for the case of finite places, but I am having a little trouble finding one that works for both infinite and finite places. I feel like a small modification should make everything work in a unified way for both infinite and finite places.

-REPLY [5 votes]: I'm not sure if there is a "nice" completely parallel definition for ramification in both the infinite and finite cases. The finite places are very different to the infinite ones - in particular, for each finite place, we have a valuation ring with a uniformiser and a residue field. However, I'll try to explain some of the ideas in this answer.

-Viewpoint 1: Valuations
-Let $L/K$ be an extension of number fields, and let $w$ be a (normalised) place of $L$ above a place $v$ of $K$. Choose $x\in K$ such that $|x|_v \ne 1$.
-We say that $w\mid v$ is unramified if $$|x|_w = |x|_v.$$
-In the finite case, since any $x\in K$ can be written as $u\pi_K^m$ for some $u$ with $|u|_v = 1$ and $\pi_K$ a uniformiser, this is equivalent to saying $$v(\pi_K) = w(\pi_K).$$
-In the infinite case, recall that if $w$ is complex, then $|x|_w = |x|_{\mathbb C}^2$, so $w\mid v$ is ramified if and only if $v$ is real and $w$ is complex.

-Viewpoint 2: The Inertia Group
-Since your definition implicitly assumes that $L/K$ is a Galois extension, I will do the same.
-Let $G = \mathrm{Gal}(L/K)$ be the Galois group of $L/K$, and for each place $w$ (infinite or finite) of $L$ lying over a place $v$ of $K$, let $G_w\le G$ be the Galois group of $L_w/K_v$.
-In fact, we can describe $G_w$ explicitly:
-$$G_w = \{\tau \in G:w\circ \tau = w\}$$
-When $w$ is a finite prime, the group $G_w$ is the decomposition group of $w\mid v$. Note that in the infinite case, this is the set you described in your definition of ramification. $L_w$ and $K_v$ are either $\mathbb R$ or $\mathbb C$, and the extension will be ramified if and only if $K_v = \mathbb R$ and $L_w=\mathbb C$. In this case,
-there isn't much more we can say.
-The finite case is much more interesting. Let $k_v$ and $k_w$ be the residue fields associated to $v$ and $w$. The action of any $\tau\in G_w$ descends to an automorphism of $k_w/k_v$, giving a (surjective) group homomorphism
-$$G_w\to \mathrm{Gal}(k_w/k_v)$$
-Let $I_w$ be the kernel of this map. $I_w$ is called the inertia group of $w\mid v$. We say that $w\mid v $ is unramified if and only if $$\#I_w = 1.$$
-In the infinite case, we can define $I_w$ to just be $G_w$, and again the definitions are the same.<|endoftext|>
-TITLE: How can I recover a sequence of numbers given a corrupted version of it?
-QUESTION [6 upvotes]: I have an unknown sequence of real numbers $x_i$ and a known sequence of real numbers $y_i$; $y_i$ is a corrupted version of $x_i$, i.e.,
-$$y_i=x_i+n_i$$
-where $n_i$ is a random number distributed according to a known probability distribution function $f(x,\boldsymbol{\theta})$, $\boldsymbol{\theta}$ is known and it is the set of the parameters of the distribution; for example when $f$ is a normal distribution $\boldsymbol{\theta}=(\mu,\sigma)$, $\mu$ is the mean and $\sigma$ is the standard deviation.
-Given $y_i$, $f$, $\boldsymbol{\theta}$ is it possible to recover $x_i$?
-Update:
-Since the problem seems too broad (see the comments 1 and 2) I would like to add the constraint that $x_i>x_{i+1},\forall{i}$.

-REPLY [2 votes]: Let us assume that $x = n$ ($n$ being unknown). Without further assumption, there is no way to tell whether $y$ is $0.5n+0.5n$ or $0.7n+0.3n$, i.e. how $y$ should be split between signal and noise. However, a whole field deals with producing acceptable estimates of $x$, given partial knowledge of properties of $x$ or $n$.
-It is called signal (or image) processing (subfields: filtering, smoothing, source separation). As the topic is too broad to be answered here, I am dropping a few current trends: Bayesian modeling, non-linear risk estimation, sparse approximations. The asker's added monotonicity constraint is a good example of the kind of prior structure these methods exploit; see the sketch below.
-For further details, have a look at StackExchange
-Signal Processing: SE.DSP.
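As a concrete illustration of the above (my addition, not part of the original answer): under the asker's added constraint $x_i > x_{i+1}$ and zero-mean Gaussian noise, the maximum-likelihood estimate of $x$ is the least-squares monotone (here decreasing) fit to $y$, computable with the pool-adjacent-violators algorithm. A minimal sketch in Python; the function names are mine, not from any particular library:

    import numpy as np

    def pava_nondecreasing(y):
        # Pool-adjacent-violators: least-squares nondecreasing fit to y.
        # Each stack entry holds [block mean, block size]; adjacent blocks
        # are merged while they violate monotonicity.
        blocks = []
        for v in y:
            blocks.append([float(v), 1])
            while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
                m2, s2 = blocks.pop()
                m1, s1 = blocks.pop()
                blocks.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
        return np.concatenate([np.full(s, m) for m, s in blocks])

    def recover_decreasing(y):
        # The constraint is x_i > x_{i+1}: fit a nonincreasing sequence
        # by negating, fitting nondecreasing, and negating back.
        return -pava_nondecreasing(-np.asarray(y, dtype=float))

    rng = np.random.default_rng(0)
    x = np.sort(rng.normal(size=50))[::-1]   # a decreasing "truth"
    y = x + rng.normal(scale=0.1, size=50)   # corrupted observations
    x_hat = recover_decreasing(y)            # estimate of x given y

Note this only illustrates one possible prior; with a different noise model $f$ the natural estimator changes accordingly.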
A related question is being asked and answered in Does time series data always contain noise?<|endoftext|> -TITLE: Efficient Way to Evaluate the Proximal Operator of $ f \left( x \right) = {\left\| x \right\|}_{2} + {I}_{\geq 0} \left( x \right) $ -QUESTION [5 upvotes]: Is there an efficient way to evaluate the proximal operator of the function $f:\mathbb R^n \to \mathbb R \cup \{ \infty \}$ defined by -\begin{equation} -f(x) = \| x \|_2 + I_{\geq 0}(x), -\end{equation} -where $I_{\geq 0}$ is the indicator function of the nonnegative orthant: -\begin{equation} -I_{\geq 0}(x) = \begin{cases} 0 & \quad \text{if } x \geq 0,\\ -\infty & \quad \text{otherwise.} -\end{cases} -\end{equation} -(The inequality $x \geq 0$ is interpreted componentwise.) -In other words, given $\hat x \in \mathbb R^n$, is there an easy way to solve the optimization problem -\begin{align} -\text{minimize} & \quad \|x\|_2 + \frac{1}{2} \| x - \hat x \|_2^2 \\ -\text{subject to} & \quad x \geq 0. -\end{align} -The variable in this optimization problem is $x \in \mathbb R^n$. -Ideally I'd like a closed-form solution to this optimization problem, or else a way to compute the solution extremely quickly (without having to rely on an iterative algorithm). -Thoughts: the term $\| x \|_2$ doesn't depend on the direction of $x$. If not for the nonnegativity constraints on $x$, we would pick $x$ to point in the same direction as $\hat x$, which simplifies the optimization problem enough that we can now solve it easily. Perhaps there's a similar way to think about the problem when we have nonnegativity constraints on $x$. - -REPLY [2 votes]: I think there is a simple way to look at it without using duality. -Our goal is to find the minimizer for the problem -\begin{align} -\tag{$\spadesuit$} \text{minimize} & \quad \| x \|_2 + \frac{1}{2t} \| x - \hat x \|_2^2 \\ -\text{subject to} & \quad x \geq 0. -\end{align} -First note that if $\hat x_i < 0$, then there is no benefit from -taking $x_i$ to be positive. If $x_i$ were positive, then both terms in the objective could be reduced just by setting $x_i = 0$. -It remains to select values for the other components of $x$. -This is a smaller optimization problem, with one unknown for -each positive component of $\hat x$. The negative components -of $\hat x$ are irrelevant to the solution of this reduced problem. -Thus, we would still arrive at the same final answer if the negative -components of $\hat x$ were set equal to $0$ at the very beginning. -In other words, problem ($\spadesuit$) is equivalent to the problem -\begin{align} -\text{minimize} & \quad \| x \|_2 + \frac{1}{2t} \| x - \max(\hat x ,0)\|_2^2 \\ -\text{subject to} & \quad x \geq 0, -\end{align} -which in turn is equivalent to the problem -\begin{align} \text{minimize} & \quad \| x \|_2 + \frac{1}{2t} \| x - \max(\hat x ,0)\|_2^2 -\end{align} -(because there would be no benefit from taking any component of $x$ to be negative). -This shows that $\text{prox}_{tf}(\hat x) = \text{prox}_{t \| \cdot \|_2}(\max(\hat x, 0))$.<|endoftext|> -TITLE: Is representability of Zariski sheaves local on the base? -QUESTION [8 upvotes]: Let $F: \mathsf{Sch_{/S}}^{op} \to \mathsf{Set}$ be a Zariski sheaf on the category of $S$-schemes. $F$ being a sheaf means it satisfies the following property: - -Sheaf condition: For every $S$-scheme $X$ and every open cover $\{U_j\} \subset Open_S(X)$ of $X$ by open $S$-subschemes. 
The following diagram is an equalizer:
-$$F(X) \rightarrow \prod_{i} F(U_i) \rightrightarrows \prod_{i, j} F(U_i \times_X U_j)$$

-The following theorem gives a necessary and sufficient condition for $F$ to be representable by an $S$-scheme:

-Theorem: $F$ is representable by an $S$-scheme iff $F$ has an open cover by representable subfunctors.

-This is pretty satisfying, but let's try something even bolder.

-Is representability local on the base?: Let $F: \mathsf{Sch_{/S}}^{op} \to \mathsf{Set}$ be a Zariski sheaf and $\{U_j\} \subset Open(S)$ an open cover of $S$ satisfying that all pullbacks $F \times_S U_j$ are representable. Must $F$ be representable?

-The "just glue" attitude doesn't seem to work for me here. I've played for hours with pullback cubes and the like without getting anywhere.
-Unless I'm misunderstanding something (which I probably am) this version of "locality" is used implicitly in a lot of arguments I've encountered. Is it really true? If so, why?
-If not, why not, and is there perhaps a different locality principle which does hold?

-REPLY [3 votes]: Perhaps I'm missing something, but isn't this just the classical case of gluing schemes along open subschemes?
-Since each $F\times_S U_j$ is representable, say by a scheme $X_j$ over $U_j$, for any $i,j$, $X_j|_{U_i\cap U_j}$ is an open subscheme of $X_j$, and $X_i|_{U_i\cap U_j}$ is an open subscheme of $X_i$. Since they're both pullbacks of $F$ along the same morphism $U_i\cap U_j\rightarrow S$, they are "uniquely" isomorphic, and the associated isomorphisms for triples of indices $i,j,k$ must satisfy the cocycle condition (again by uniqueness of isomorphisms), so now you have a family of schemes $X_i$ and various open subschemes and isomorphisms between the open subschemes, so you're in the classical gluing situation (see, for example, Hartshorne chapter II, Exercise 2.12), from which you can construct a scheme $X$ over $S$, whose functor of points must coincide with $F$ because of "the full faithfulness of pullback functors on descent data".<|endoftext|>
-TITLE: Prove that $\frac{1}{1+a_1+a_1a_2}+\frac{1}{1+a_2+a_2a_3}+\cdots+\frac{1}{1+a_{n-1}+a_{n-1}a_n}+\frac{1}{1+a_n+a_na_1}>1.$
-QUESTION [7 upvotes]: If $n > 3$ and $a_1,a_2,\ldots,a_n$ are positive real numbers with $a_1a_2\cdots a_n = 1$, prove that $$\dfrac{1}{1+a_1+a_1a_2}+\dfrac{1}{1+a_2+a_2a_3}+\cdots+\dfrac{1}{1+a_{n-1}+a_{n-1}a_n}+\dfrac{1}{1+a_n+a_na_1}>1.$$

-I find it hard to use any inequalities here since we have to prove $>1$ and most inequalities such as AM-GM and Cauchy-Schwarz give $\geq$. On the other hand it seems that if I can prove that each fraction is $>1$ that might help, but I am unsure.

-REPLY [9 votes]: Let $a_i=\frac{x_{i+1}}{x_i}$, where $x_{n+1}=x_1$, $x_{n+2}=x_2$ and all $x_i>0$.
-Hence, $\sum\limits_{i=1}^{n}\frac{1}{1+a_i+a_ia_{i+1}}=\sum\limits_{i=1}^{n}\frac{x_i}{x_i+x_{i+1}+x_{i+2}}>\sum\limits_{i=1}^{n}\frac{x_i}{x_1+x_2+...+x_n}=1$<|endoftext|>
-TITLE: Did Hardy prove that there are countably, or uncountably many zeros on the line Re$(s)=1/2$ of $\zeta(s)$?
-QUESTION [5 upvotes]: It's known that Hardy proved that there are infinitely many zeros of $\zeta(s)$ on the line Re$(s)=\frac{1}{2}$, but did he prove it's countably infinite? Or uncountable?

-REPLY [6 votes]: There's a nifty property of meromorphic functions: they have isolated zeros. Construct a ball around each zero, small enough that no two zeros of $\zeta$ belong to the same ball.
The set comprising all these neighborhoods must have countably many connected components, since each open neighborhood contains $x+iy$ with $x$ and $y$ rational; thus each component has a rational representative, and there are countably many rational complex numbers.

-REPLY [4 votes]: By the identity theorem and the fact that uncountable subsets of $\mathbb{R}^n$ must have at least one limit point, any holomorphic function having uncountably many zeroes must vanish identically on its domain. $\zeta$ is holomorphic on the connected region $\mathbb{C}\setminus\{1\}$ (its only pole is at $s=1$) and is not identically zero, so it must not have uncountably many zeroes.<|endoftext|>
-TITLE: Does the multiplication of countably infinite many numbers between $0$ and $1$ equal $0$?
-QUESTION [9 upvotes]: Suppose every term of a countably infinite sequence $x_1,x_2,\dots$ is between $0$ and $1$, i.e. $0 < x_n < 1$ for every $n$. Must the infinite product $x_1x_2\cdots$ equal $0$?

-REPLY: Not necessarily: writing $x_n = 1 - a_n$ with $0 < a_n < 1$, one has $\prod_n x_n > 0$ iff $\sum a_n < \infty.$ The proof is a nice exercise in taking logs and using $\log (1+u) = u + o(u).$<|endoftext|>
-TITLE: Is there a function on a compact interval that is differentiable but not Lipschitz continuous?
-QUESTION [6 upvotes]: Consider a function $f:[a,b]\rightarrow \mathbb{R}$: does there exist a differentiable function that is not Lipschitz continuous?
-After discussing this with friends we have come to the conclusion that none exist. However there is every chance we are wrong. If it is true that none exist, how could we go about proving that? It is true that if $f$ is continuously differentiable then $f$ is Lipschitz, but what if we don't assume the derivative is continuous?

-REPLY [8 votes]: The map $f : [0,1] \to \mathbb{R}$, $f(0) = 0$ and $f(x) = x^{3/2} \sin(1/x)$ is differentiable on $[0,1]$ (in particular $f'(0) = \lim_{x \to 0^+} f(x)/x = 0$), but it is not Lipschitz (the derivative $f'(x)$ is unbounded).<|endoftext|>
-TITLE: Why are there no $16$ by $32$ Hadamard circulant matrices?
-QUESTION [7 upvotes]: Two rows of a matrix are orthogonal if their inner product equals zero. Call a matrix with all rows pairwise orthogonal an orthogonal matrix. A circulant matrix is one where each row vector is rotated one element to the right relative to the preceding row vector. We will only consider matrices whose entries are either $-1$ or $1$.
-For number of columns $n= 4,8,12,16,20,24,28, 36$ there exist $n/2$ by $n$ orthogonal circulant matrices.

-Why are there no circulant matrices with $16$ rows and $32$ columns which are orthogonal?

-Or to phrase it differently, is it possible to prove they don't exist without enumerating them all?
-Example 6 by 12 matrix
-\begin{pmatrix}
- -1 &\phantom{-}1 &\phantom{-}1 & -1 & -1 &\phantom{-}1 & -1 &\phantom{-}1 & -1 & -1 & -1 & -1\\
- -1 & -1 &\phantom{-}1 &\phantom{-}1 & -1 & -1 &\phantom{-}1 & -1 &\phantom{-}1 & -1 & -1 & -1\\
- -1 & -1 & -1 &\phantom{-}1 &\phantom{-}1 & -1 & -1 &\phantom{-}1 & -1 &\phantom{-}1 & -1 & -1\\
- -1 & -1 & -1 & -1 &\phantom{-}1 &\phantom{-}1 & -1 & -1 &\phantom{-}1 & -1 &\phantom{-}1 & -1\\
- -1 & -1 & -1 & -1 & -1 &\phantom{-}1 &\phantom{-}1 & -1 & -1 &\phantom{-}1 & -1 &\phantom{-}1\\
- \phantom{-}1 & -1 & -1 & -1 & -1 & -1 &\phantom{-}1 &\phantom{-}1 & -1 & -1 &\phantom{-}1 & -1\\\end{pmatrix}

-REPLY [2 votes]: These matrices are known as circulant partial Hadamard matrices and a good reference for these, along with recent results, is $\textit{Circulant partial Hadamard matrices}$ by Craigen, Faucher, Low, and Wares, Lin. Alg. Appl. 439.
-Denote by $r\mbox{-}H(k\times n)$ a $k\times n$ circulant partial Hadamard matrix in which one row (and hence every row) has sum $r$.
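As a quick sanity check (my addition, not one of the paper's methods): the small cases can be confirmed by exhaustive search over first rows, though this is hopeless for $n=32$ with $2^{32}$ candidate rows, which is exactly why the question asks for a structural proof. A rough Python sketch:

    import itertools
    import numpy as np

    def exists_orthogonal_circulant(k, n):
        # Try all 2^n first rows with entries +-1; the remaining k-1 rows
        # are successive cyclic right-shifts of the first row.
        off_diag = ~np.eye(k, dtype=bool)
        for bits in itertools.product((1, -1), repeat=n):
            row = np.array(bits)
            M = np.array([np.roll(row, i) for i in range(k)])
            if np.all((M @ M.T)[off_diag] == 0):
                return True
        return False

    print(exists_orthogonal_circulant(6, 12))  # True: e.g. the matrix above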
The authors compile a table of the maximum values of $k$ for $n\le 64$ and all values of $r$. You can see there that neither the $16\times 32$ matrix nor the $22\times 44$ matrix exists.
-One of the first results in the paper is that if an $r\mbox{-}H(k\times n)$ exists then $n$ is divisible by 4. This is why your column numbers are all multiples of 4. Another result is that if Ryser's conjecture is true then $k\le \frac{n}{2}$. The authors also present empirical evidence that the maximum value $k=\frac{n}{2}$ is attained almost always for $r=2$. A conjecture of Delsarte, Goethals, and Seidel is that a $2\mbox{-}H(k\times 2k)$ exists if and only if $k-1$ is an odd prime power. These two results combined would explain why the $16\times 32$ and $22\times 44$ cases don't exist. It also indicates that the next non-existent case could be $34\times 68$.<|endoftext|>
-TITLE: What is the colon operator between matrices?
-QUESTION [7 upvotes]: While reading Robust Quasistatic Finite Elements and Flesh Simulation by Teran et al., I have seen in several equations a colon operator used between matrices.
-Here are some examples, using matrices $F$, $P$ and $C$:
-$\delta F : \delta P > 0$
-$\delta F : (\partial P / \partial F) : \delta F > 0$
-$i2 = C : C$
-The only hint I have is that I believe that $C$ is a diagonal matrix with diagonal elements $[\sigma_1^2, \sigma_2^2, \sigma_3^2]$, and the result of $C : C$ is $\sigma_1^4 + \sigma_2^4 + \sigma_3^4$.
-Does anybody know what this operator represents?

-REPLY [4 votes]: Since the paper deals with tensors etc., I think it's the "double dot product" as described here:
-https://en.wikipedia.org/wiki/Dyadics

-Double dot product
-$$ A:B = \sum_{j} \sum_{i} (a_i\cdot d_j)(b_i\cdot c_j), $$
-or
-$$ A:B = \sum_{j} \sum_{i} (a_i\cdot c_j)(b_i\cdot d_j) $$

-where $A = \sum_{i} a_i b_i$ and $B = \sum_j c_j d_j$.<|endoftext|>
-TITLE: P is a natural number. 2P has 28 divisors and 3P has 30 divisors. How many divisors of 6P will be there?
-QUESTION [7 upvotes]: While answering aptitude questions in a book I faced this question but was not able to find the solution, so I searched Google and got two answers, but I didn't see how they were derived.
-Question:
-P is a natural number. 2P has 28 divisors and 3P has 30 divisors. How many divisors of 6P will be there?
-Solution 1:
-2P is having 28(4*7) divisors but 3P is not having a total divisors which is divisible by 7, so the first part of the number P will be 2^5.
-Similarly, 3P is having 30 (3*10) divisors but 2P does not have a total divisors which is divisible by 3. So 2nd part of the number P will be 3^3. So, P = 2^5*3^3 and the solution is 35.
-Solution 2:
-2P has 28 divisors =4x7,
-3P has 30 divisors
-Hence P=2^5 3^3
-6p =2^6 3^4
-Hence 35 divisors
-I have been trying to understand the steps but have not been able to.

-REPLY [4 votes]: First, we want to know how to easily calculate the number of divisors of any number. If we have $n=p_1^{e_1}p_2^{e_2}\cdots p_k^{e_k}$ where all $p_i$ are distinct, then to construct a divisor we have $e_1+1$ choices for the number of factors $p_1$ in our divisor, $e_2+1$ choices for the number of factors $p_2$, etc., making the number of divisors equal to $(e_1+1)(e_2+1)\cdots (e_k+1)$. So for example, $12=2^2\cdot 3$ so $12$ has $(2+1)(1+1)=6$ divisors.
-Let's look at the first hint now, $2P$ having 28 divisors. Let's write $2P=p_1^{e_1}p_2^{e_2}\cdots p_k^{e_k}$. The number of divisors is now $(e_1+1)(e_2+1)\cdots (e_k+1)=28$.
So we can say that one power, say $e_1$, must be $6$, $13$ or $27$ (since $28$ has a factor $7$, and we may assume it is contained in $e_1+1$). Our options are now:
-$$2P=p_1^6\cdot p_2\cdot p_3$$
-$$2P=p_1^6\cdot p_2^3$$
-$$2P=p_1^{13}\cdot p_2$$
-$$2P=p_1^{27}$$
-The second hint says that $3P$ has 30 divisors. Since this does not contain a factor 7, we know that the $2$ in $2P$ must be responsible for this (notice that it follows that $e_1$ is the exponent of $2$ in $2P$). Thus we know that $p_1=2$. Now we can, by our previously stated options, calculate $P$.
-$$P=2^5\cdot p_2\cdot p_3$$
-$$P=2^5\cdot p_2^3$$
-$$P=2^{12}\cdot p_2$$
-$$P=2^{26}$$
-In the third option, the number of divisors of $3P$ must be divisible by $12+1=13$, and in the last case, the number of divisors of $3P$ must be divisible by $27$. We conclude the last two are not possible, since 30 is not divisible by either 13 or 27. In the first case, the number of divisors of $3P$ is divisible by $6$ because of the number of factors of $2$, and we also know that $3P$ then has (at least) one prime factor that it contains exactly once, so we get another factor $2$ in the number of divisors of $3P$. Now the number of divisors of $3P$ is divisible by $12$, which is also impossible ($12$ does not divide $30$). We conclude that $P$ must be of the form $2^5\cdot p_2^3$. We also know that if $p_2\neq 3$, then the number of divisors of $3P$ is divisible by $6$ because of the factors $2$ and by $4$ because of the factors $p_2$. This is again impossible.
-Finally, we must have $p_2=3$, so $P=2^5\cdot 3^3$. Now we can easily calculate the number of divisors of $6P$; it must be $(6+1)(4+1)=35$.
-Hope this helped!<|endoftext|>
-TITLE: Negation of injectivity
-QUESTION [7 upvotes]: I'm having some problems understanding the negation of injectivity.
-Take the function $f: \mathbb{R} \rightarrow \mathbb{R}$ given by $f(x) = x^2$. The formal definition of injectivity is $f(a)=f(b) \implies a = b$. Therefore the function $f(x)$ is not injective because $-1 \neq 1$ while $f(-1)=f(1)=1$.
-But when I try to specify the negation of the statement "f is injective", I run into problems. I know that the negation of "P implies Q" is "P but not Q" so the formal definition of non-injectivity should be $f(a)=f(b) \implies a\neq b$, right? The problem is this statement doesn't hold for the function $f(x)=x^2$, because $f(1) = f(1)$ while it's not true that $1 \neq 1$.
-What am I doing wrong?

-REPLY [2 votes]: "P but not Q" so the formal definition of non-injectivity should be
 $f(a)=f(b) \implies a\neq b$, right?

-Wrong, but close. You said "P but not Q" (which really means "P and not Q") and then you wrote the equivalent of "P implies not Q". These are different.
-You also have to be careful how to take negation inside a quantifier. The definition of injectivity really is "for all $x,y$ something", which is negated as "there exists $x,y$ for which NOT something". Substituting "P implies Q" for "something", and using the rule for negating an implication, we get that the negation of "for all $x,y$, P implies Q" is "there exist $x,y$ such that P and not Q".<|endoftext|>
-TITLE: Extreme of $\cos A\cos B\cos C$ in a triangle without calculus.
-QUESTION [13 upvotes]: If $A,B,C$ are angles of a triangle, find the extreme value of $\cos A\cos B\cos C$.

-I have tried using $A+B+C=\pi$, and applying all and any trig formulas, also AM-GM, but nothing helps.
-We also learned about the Cauchy inequality in this topic, but I have no experience with it.
-The answer according to Mathematica is when $A=B=C=60$. -Any ideas? - -REPLY [2 votes]: We know from geometry that for any triangle ABC the distance between its circumcenter $O$ and its orthocenter $H$ can be given by the following formula: -$$OH^2=R^2(1-8\cos A\cos B\cos C)$$ -R being the circumradius. -Besides that, we know that orthocenter and circumcenter coincide only if the triangle is an equilateral one. -Therefore, $\cos A\cos B\cos C$ attains a maximum value of $\frac 18$ when $A=B=C=\pi/3$. -No Calculus needed.<|endoftext|> -TITLE: "the only odd dimensional spheres with a unique smooth structure are $S^1$, $S^3$, $S^5$, $S^{61}$" -QUESTION [41 upvotes]: This (long) paper, - -Guozhen Wang, Zhouli Xu. - "On the uniqueness of the smooth structure of the 61-sphere." - arXiv:1601.02184 [math.AT]. - -proves that - -the only odd dimensional spheres with a unique smooth structure are $S^1$, $S^3$, $S^5$, $S^{61}$. - -The new result is for $S^{61}$. -Is it possible to give some intuition on this remarkable result, for those -not steeped in algebraic and differential geometry, and so not intimately familiar with homotopy groups of spheres? Any attempt would be welcomed. - -REPLY [43 votes]: Results of this form, and my intuition from them, come from the Kervaire-Milnor paper on exotic spheres. (There was never a homotopy spheres II. The purported content of that unpublished paper appears to be summarized in these notes, though I haven't read them.) I'm going to need to jump into the algebra here; personally, I couldn't tell you the difference between $S^{57}$ and $S^{61}$ without it. - -For $n \not\equiv 2 \bmod 4$, there is an exact sequence $$0 \to \Theta_n^{bp} \to \Theta_n \to \pi_n/J_n \to 0.$$ For $n=4k-2$, instead we have the exact sequence $$0 \to \Theta_n^{bp} \to \Theta_n \to \pi_n/J_n \xrightarrow{\Phi_k} \Bbb Z/2 \to \Theta_{n-1}^{bp} \to 0.$$ - -Let's start by introducing the cast of characters. -$\Theta_n$ is the group of homotopy $n$-spheres. It's smooth manifolds, up to diffeomorphism, which are homotopy equivalent (hence by Smale's h-cobordism theorem, and in low dimensions Perelman's and Freedman's work, homeomorphic) to the $n$-sphere $S^n$. (Actually, we identify $h$-cobordant manifolds. Because $h$-cobordism is now known in all dimensions at least 5, it changes nothing for high-dimensional manifolds; but it explains why $\Theta_4=1$ is possible even though it's an open problem, suspected to be false, that the 4-sphere admits a unique smooth structure. In any case, this is not an important aside.) The group operation is connected sum. The data we're really after is $|\Theta_n|$ - the number of smooth structures. -$\Theta_n^{bp}$ is the subgroup of those $n$-spheres which bound parallelizable manifolds. This subgroup is essential, because it's usually the fellow forcing us to have exotic spheres in the other dimensions. -This group is always cyclic (Kervaire and Milnor provide an explicit generator). As a rough justification for this group: the way this goes is by taking an arbitrary element, writing down a parallelizable manifold it bounds, and using the parallelizability condition to do some simplifying algebra until this bounding manifold is particularly simple - at which point you identify it as a connected sum of standard ones, hence that $\Theta_n^{bp}$ is cyclic generated by the standard one. 
I (or rather, Milnor and Kervaire) can tell you its order: If $n$ is even, $\Theta_n^{bp}$ is trivial; if $n=4k-1$, $$|\Theta_n^{bp}|=2^{2k-2}(2^{2k-1}-1) \cdot \text{the numerator of }\frac{4B_k}{k}$$
-is sort of nasty, but in particular always greater than $1$ when $k>1$ (for $k=2$, i.e. $n=7$, it gives $4\cdot 7\cdot 1 = 28$, the order of the group of homotopy $7$-spheres); and for $n=4k-3$, it is either trivial or $\Bbb Z/2$, the first precisely if $\Phi_k \neq 0$ in the above exact sequence.
-$\pi_n/J$, and the map $\Theta_n \to \pi_n/J$, is a bit harder to state; $\pi_n$ is the stable-homotopy group of spheres, $J$ is the image of a certain map, and the map from $\Theta_n$ sends a homotopy sphere, which is stably parallelizable, to its "framed cobordism class". The real point, though, is that this term $\pi_n/J$ is entirely the realm of stable homotopy theory. This is precisely why people now say that the exotic spheres problem is "a homotopy theory problem". (To give the slightest bit more detail: The Thom-Pontryagin construction gives that $\pi_n = \Omega_n^{fr}$, the framed cobordism group, whose elements are equivalence classes of manifolds with trivializations of the "stable tangent bundle". Every homotopy sphere is stably trivial, and the image of $J$ is precisely the difference between any two stable trivializations.) This map $\Theta_n \to \pi_n/J$ might motivate the introduction of $\Theta_n^{bp}$ - since that is, more or less obviously, the kernel. The fact that this map is not always surjective - the obstruction supplied by $\Phi_k$ - is the statement that not every framed manifold is framed cobordant to a sphere. I find it somewhat surprising that so many actually are!
-The last thing you should know is about the map $\Phi_k$. It's known as the Kervaire invariant. It's known to be nonzero for $k=1,2,4,8,16$, and might be nonzero for $k=32$, but that's open. The remarkable result of Mike Hill, Mike Hopkins, and Doug Ravenel is that $\Phi_k = 0$ for $k > 32$. I don't have much to say about this, other than that it's there. Summing up what we have so far:

-For dimensions $n=4k-1>3$, there are always exotic spheres coming from $\Theta_n^{bp}$ - lots of them! For dimensions $n=4k-3$, $\Theta_n^{bp} = \Bbb Z/2$ unless $k=1,2,4,8,16,32$. So the only possible odd-dimensional spheres with a unique smooth structure are $S^1$, $S^3, S^5, S^{13}, S^{29}, S^{61}$, and $S^{125}$.

-Now to deal with special cases. It is classical that $S^1$ and $S^3$ have a unique smooth structure ($S^3$ is due to Moise); $S^5$ is dealt with by 1) finding a 6-manifold of nonzero Kervaire invariant, showing that $\Phi_2 \neq 0$ and hence that $\Theta_5^{bp}=0$; and then 2) calculating that $\pi_5$, the fifth stable homotopy group of spheres, is zero. You can do this with Serre's spectral sequence calculations. (It was pointed out to me that this means that three different Fields medalists' work went into getting $\Theta_5 = 1$ - Milnor, Serre, Smale. It is worth noting that there is a differential topological proof, coming from the explicit classification of smooth, simply-connected 5-manifolds, but it isn't substantially easier or anything.)
-For $S^{13}$ and $S^{29}$, these are disqualified by the homotopy theory calculation that $\pi_{13}/J$ and $\pi_{29}/J$ are not zero. I do not know how these calculations are done - probably the Adams spectral sequence and a lot of auxiliary spectral sequences, which seems to be how a lot of these things are done. Maybe someone else can shed some light on that.
-For $S^{125}$, the paper itself sketches why: There's a spectrum known as $tmf$, and the authors are able to write down a homomorphism $\pi_n/J \to \pi_n(tmf)$ and find a class in $\pi_n(tmf)$ that's hit when $n=125$.
-So what we know now is that $\pi_{61}/J \cong \Theta_{61}$. The content of the paper you're talking about is precisely the calculation that $\pi_{61}/J = 0$. The authors access it through the Adams spectral sequence, as far as I can tell (I am a non-expert). The Adams SS is notoriously hard to calculate anything with - much of the content of the paper is the identification of a single differential in the whole spectral sequence. Once this is done, they're able to finish the calculation, but it's hard work. If you want a sketch of how this is done, I found the introduction to their paper readable - see section 3 of the paper.<|endoftext|>
-TITLE: Greatest value of $\frac{1}{4\sin \theta-3 \cos \theta+6}$
-QUESTION [5 upvotes]: State the greatest value of $$\frac{1}{4\sin \theta-3 \cos \theta+6}$$
-Can anyone give me some hints on this?

-REPLY [3 votes]: $$4\sin\theta-3\cos\theta$$ is the dot product of the vector $(-3,4)$ with the unit vector in direction $\theta$. This dot product is minimized when the two vectors are antiparallel, and then equals minus the product of the norms, i.e. $-5$.
-The requested maximum is therefore $$\frac1{-5+6}.$$<|endoftext|>
-TITLE: How are groupoids richer structures than sets of groups?
-QUESTION [14 upvotes]: This has been bugging me for quite some time: my intuition with categories is that I can simply identify isomorphic objects. It does not matter, for example, whether the entries in a sudoku are the numbers $1,2,\dots,9$ or the letters $a,b,\dots,i$ (this shows that you can simply identify isomorphic sets).
-I heard groupoids are important objects, possibly even more fundamental than categories (I even dealt with them before). But this seems to contradict my intuition, for you could mentally identify isomorphic objects in a groupoid and end up with just a set of groups. There has to be something wrong with this view, and I suppose it has to do with the fact that there are usually many isomorphisms between isomorphic objects (it is well known from linear algebra that choices of bases "matter"). I realize that this is a very imprecise question, but:

-How can I think about groupoids, such that they are more interesting
 or richer in structure than just sets of groups?

-REPLY [10 votes]: The first short answer is that in order to identify a groupoid with a set of groups you need to pick a basepoint in each connected component (in more categorical terms, a representative of each isomorphism class), and there are various situations where you don't want to (analogous to why you often don't want to pick bases of vector spaces).
-The second short answer is that there are many reasons to consider groupoids with extra structure, which can be considerably more interesting than sets of groups with extra structure.
-Here is an example where both of these considerations apply. Suppose a group $G$ acts on a space $X$. Does this induce an action on the fundamental group? The answer is no: in order to get such an action, $G$ must fix a basepoint of $X$. But it can happen that $G$ fixes no basepoint (even in a homotopical sense). However, $G$ will always act on the fundamental groupoid of $X$.
-For example, let $X$ be the configuration space of $n$ ordered points in $\mathbb{R}^2$.
This space has fundamental group the pure braid group $P_n$, which fits into a short exact sequence
-$$1 \to P_n \to B_n \to S_n \to 1$$
-where $B_n$, the braid group, is the fundamental group of the configuration space of $n$ unordered points. Now, it's clear that $S_n$ acts on $X$ by permuting points. But this action cannot be upgraded to an action on $P_n$, because the above short exact sequence does not split.
-This is not an isolated example. It's part of the reason why the $E_2$ operad can be described as an operad in groupoids, but not as an operad in groups, even though its underlying spaces (homotopy equivalent to the configuration spaces above) are all Eilenberg-MacLane spaces.
-There are lots of other things to say here. For example, groupoids form a 2-category, groupoids are cartesian closed, topological groupoids are richer than topological groups... the list goes on and on. Here is a slightly cryptic slogan:

-You cannot really identify isomorphic objects. The space of objects isomorphic to a fixed object $X$ is not a point, it is the classifying space $B \text{Aut}(X)$.<|endoftext|>
-TITLE: Coupon collector's problem using inclusion-exclusion
-QUESTION [5 upvotes]: Coupon collector's problem asks:

-Given $n$ coupons, how many coupons do you expect you need to draw with replacement before having drawn each coupon at least once?

-The well-known solution is $E(T)=n \cdot H_n$, where $T$ is the time to collect all $n$ coupons (proof).
-I am trying a different approach: count the arrangements of coupons using inclusion-exclusion (Stirling numbers of the second kind), requiring that one coupon is collected only on the last draw and that the other coupons are each collected at least once:
-$$P(T=k)=\frac{n!\cdot{k-1\brace n-1}}{n^k}\\
-=\frac{\sum\limits_{i=1}^{n-1}(-1)^{n-i-1}\cdot{n-1\choose i}\cdot i^{k-1}}{n^{k-1}}\\
-E(T)=\sum\limits_{k=n}^{\infty}k\cdot P(T=k)\\
-=\sum\limits_{k=n}^{\infty}k\cdot\frac{\sum\limits_{i=1}^{n-1}(-1)^{n-i-1}\cdot{n-1\choose i}\cdot i^{k-1}}{n^{k-1}}\\
-=\sum\limits_{i=1}^{n-1}(-1)^{n-i-1}\cdot{n-1\choose i}\cdot\sum\limits_{k=n}^{\infty}k\cdot (\frac i n)^{k-1}\\
-=\sum\limits_{i=1}^{n-1}(-1)^{n-i-1}\cdot{n-1\choose i}\cdot(\frac i n)^{n-1}\cdot(\frac 1 {1-\frac i n})\cdot(n-1+\frac 1 {1-\frac i n})$$
-Calculating the first 170 terms yields the same results. Are the two formulas the same?

-REPLY [4 votes]: By way of enrichment here is a proof using Stirling numbers of the second kind which encapsulates inclusion-exclusion in the generating function of these numbers.

-First let us verify that we indeed have a probability distribution here. We have, for the number of draws $T$ equal to $m$, that
-$$P[T=m] = \frac{1}{n^m} \times
-n\times {m-1\brace n-1} \times (n-1)!.$$
-Recall the OGF of the Stirling numbers of the second kind which says
-that
-$${n\brace k} = [z^n] \prod_{q=1}^k \frac{z}{1-qz}.$$
-This gives for the sum of the probabilities
-$$\sum_{m\ge 1} P[T=m]
-= (n-1)! \sum_{m\ge 1} \frac{1}{n^{m-1}} {m-1\brace n-1}
-\\ = (n-1)! \sum_{m\ge 1} \frac{1}{n^{m-1}}
-[z^{m-1}] \prod_{q=1}^{n-1} \frac{z}{1-qz}
-\\ = (n-1)! \prod_{q=1}^{n-1} \frac{1/n}{1-q/n}
-= (n-1)! \prod_{q=1}^{n-1} \frac{1}{n-q} = 1.$$
-This confirms it being a probability distribution.
-We then get for the expectation that
-$$\sum_{m\ge 1} m\times P[T=m]
-= (n-1)! \sum_{m\ge 1} \frac{m}{n^{m-1}} {m-1\brace n-1}
-\\ = (n-1)! \sum_{m\ge 1} \frac{m}{n^{m-1}}
-[z^{m-1}] \prod_{q=1}^{n-1} \frac{z}{1-qz}
-\\ = 1 + (n-1)!
\sum_{m\ge 1} \frac{m-1}{n^{m-1}}
-[z^{m-1}] \prod_{q=1}^{n-1} \frac{z}{1-qz}
-\\ = 1 + (n-1)! \sum_{m\ge 2} \frac{m-1}{n^{m-1}}
-[z^{m-1}] \prod_{q=1}^{n-1} \frac{z}{1-qz}
-\\ = 1 + \frac{1}{n} (n-1)! \sum_{m\ge 2} \frac{1}{n^{m-2}}
-[z^{m-2}] \left(\prod_{q=1}^{n-1}
-\frac{z}{1-qz}\right)'
-\\ = 1 + \frac{1}{n} (n-1)!
-\left.\left(\prod_{q=1}^{n-1}
-\frac{z}{1-qz}\right)'\right|_{z=1/n}
-\\ = 1 + \frac{1}{n} (n-1)!
-\left. \left(\prod_{q=1}^{n-1}
-\frac{z}{1-qz}
-\sum_{p=1}^{n-1} \frac{1-pz}{z} \frac{1}{(1-pz)^2}
-\right)\right|_{z=1/n}
-\\ = 1 + \frac{1}{n} (n-1)!
-\prod_{q=1}^{n-1} \frac{1/n}{1-q/n}
-\left. \sum_{p=1}^{n-1} \frac{1}{z} \frac{1}{1-pz}
-\right|_{z=1/n}
-\\ = 1 + \frac{1}{n} (n-1)!
-\prod_{q=1}^{n-1} \frac{1}{n-q}
-\sum_{p=1}^{n-1} \frac{n}{1-p/n}
-\\ = 1 + \frac{1}{n}
-\sum_{p=1}^{n-1} \frac{n^2}{n-p}
-= 1 + n H_{n-1} = n \times H_n.$$
-What we have here are in fact two annihilated coefficient extractors (ACE); more of these may be found at this MSE link. Admittedly the EGF better represents inclusion-exclusion than the OGF and could indeed be used here, where the initial coefficient extractor would then transform it into the OGF.<|endoftext|>
-TITLE: Prove that $\mathbb{Z}/(p^n)$ is indecomposable
-QUESTION [5 upvotes]: Could anyone please give me a hint about how to prove that?
-I guess I should show that the direct sum is not a cyclic group and get a contradiction but I'm not sure how to start.
-thanks

-REPLY [4 votes]: Assuming you're talking about $\mathbb Z$-modules, your guess is right.
-Let $M=\mathbb Z/(p^n)$.
-If $M = A \oplus B$ is a non-trivial decomposition, then $A$ and $B$ are finite groups of order $p^a$ and $p^b$, with $0< a,b <n$. Let $c=\max(a,b)<n$. Then $p^c$ annihilates both $A$ and $B$, and hence annihilates $M$. But that is impossible, because $M$ contains an element of order $p^n > p^c$.<|endoftext|>
-TITLE: Why is there only a complex conjugate, but no real conjugate?
-QUESTION [5 upvotes]: In mathematics one often uses the complex conjugate
-$$
-\Bbb C\to\Bbb C,\quad z=a+b\cdot\mathrm{i}\;\;\mapsto\;\; \bar z=a-b\cdot\mathrm{i}
-$$
-This is often described as a reflection along the real axis.
-But in analogy one could also define a real conjugate
-$$
-\Bbb C\to\Bbb C,\quad z=a+b\cdot\mathrm{i}\;\;\mapsto\;\; \tilde z=-a+b\cdot\mathrm{i}
-$$
-This would be a reflection along the imaginary axis.
-However real conjugation is never used. Why is it that complex conjugation is so useful, but real conjugation is not?

-REPLY [6 votes]: One definition of conjugate arises from the factoring of
-$a^2 - b^2$ into $(a + b)(a - b)$.
-But that does not answer the question of why we can obtain the complex conjugate of a complex number only by negating the imaginary part
-and never by negating the real part.
-(For that matter, it does not explain why we separate the number into
-real and imaginary parts in order to obtain a conjugate in the first place.)
-But there is another, somewhat different notion of conjugate.
-Quoting one writer:

-Two elements $\alpha, \beta$ of a field $K$, which is an extension field of a field $F$, are called conjugate (over $F$) if they are both algebraic over $F$ and have the same minimal polynomial.

-(Barile, Margherita. "Conjugate Elements." From MathWorld--A Wolfram Web Resource, created by Eric W. Weisstein.
http://mathworld.wolfram.com/ConjugateElements.html)
-If we take $K$ to be the complex numbers, and $F$ to be the real numbers,
-then we can verify that $a + bi$ and $a - bi$ ($a, b$ both real)
-are the two roots of a certain polynomial in $z$,
-specifically, the solutions for $z$ in the equation
-$$ z^2 - 2az + a^2 + b^2 = 0, $$
-in which the left-hand side is a polynomial
-over the real numbers (that is, over $F$).
-That is, $a + bi$ and $a - bi$ fit perfectly the definition of
-conjugate elements of $\mathbb C$,
-viewed as an extension field of $\mathbb R$.
-What about $a + bi$ and $-a + bi$? These are solutions of the
-equation $(z - a - bi)(z + a - bi) = 0$; multiplying out the
-left-hand side, we see that $a + bi$ and $-a + bi$
-are solutions for $z$ in
-$$ z^2 - (2bi)z - a^2 - b^2 = 0. $$
-The coefficients of the polynomial on the left-hand side are not
-all real numbers unless $b = 0$, so it seems that $-a + bi$ cannot be
-a conjugate of $a + bi$ in the same interesting way that $a - bi$ can.

-Historically, complex numbers arose in the process of trying to
-solve polynomial equations with real coefficients.
-Eventually, people decided that complex numbers were actually acceptable
-roots of such a polynomial.
-When you have a polynomial with real coefficients, it always has
-a factorization into polynomials of the form $(ax + b)$
-or irreducible $(ax^2 + bx + c)$, where all the coefficients $a, b, c$ are real numbers.
-Of course $(ax + b)$ has just one real root (and no other roots),
-but the roots of an irreducible $(ax^2 + bx + c)$ are precisely a pair of
-complex conjugates.<|endoftext|>
-TITLE: Finding a given group in groups twice as large.
-QUESTION [7 upvotes]: Given a group $H$ with order $n$, can we determine how many groups $G$ of order $2n$ contain $H$ as a subgroup, and perhaps find these groups? For example, $\mathbb{Z}_4$ is contained in $\mathbb{Z}_8$, $\mathbb{Z}_4 \times \mathbb{Z}_2$, $D_4$, and $Q_8$.
-I'm curious if we can find a constant upper bound (or upper bound related to $n$) on the number of groups $G$ that satisfy my constraints for any $H$. I'm not very familiar with group theory, so methods to approach this might be over my head. I'm particularly interested in the case where we only consider cyclic $H$. Feel free to generalize! Thanks.

-REPLY [9 votes]: Subgroups of index $2$ are normal, so equivalently you want to classify short exact sequences
-$$1 \to H \to G \to \mathbb{Z}_2 \to 1.$$
-In other words, you want to classify extensions of $\mathbb{Z}_2$ by $H$. If $H$ has odd order then such a short exact sequence must split (this generalizes to the Schur-Zassenhaus theorem, but in this case we can just appeal to Cauchy's theorem), so $G$ must be a semidirect product
-$$G \cong H \rtimes \mathbb{Z}_2$$
-and now it suffices to classify actions of $\mathbb{Z}_2$ on $H$. If $H = \mathbb{Z}_n$ where $n$ is odd then write $n = \prod p_i^{e_i}$ where the $p_i$ are odd primes and $k$ primes appear. Then
-$$H \cong \prod_i \mathbb{Z}_{p_i^{e_i}}$$
-so there are $2^k$ actions of $\mathbb{Z}_2$ on $H$, given by acting by $\pm 1$ on each factor. The corresponding extensions take the form
-$$G \cong \mathbb{Z}_a \times D_b$$
-where $D_b$ is the dihedral group $\mathbb{Z}_b \rtimes \mathbb{Z}_2$ and $ab = n$.
-If $H$ has even order then the answer is more complicated and involves group cohomology. To give some indication of how complicated the answer must be, every finite $2$-group is an iterated extension of copies of $\mathbb{Z}_2$.
There are more than 49 billion groups of order $1024 = 2^{10}$, and they make up almost all groups of order less than $2000$. By contrast, there are about 10 million groups of order $512$. This means at least one group of order $512$ is a subgroup of several thousand groups of order $1024$. In general, it's known that the number of groups of order $2^n$ is asymptotically
-$$2^{\frac{2}{27} n^3 + O(n^{8/3})}$$
-so at least one group of order $2^n$ is a subgroup of somewhere around $2^{\frac{2}{9} n^2}$ groups of order $2^{n+1}$; note that this is faster than polynomial growth in the order.<|endoftext|>
-TITLE: Do projections preserve closed subspaces
-QUESTION [5 upvotes]: Let $H$ be a Hilbert space and let $\pi \colon H \to H$ be an orthogonal projection. Let $E \subset H$ be a closed subspace of $H$.
-My question: Is there any hope that one can conclude that $\pi(E)$ is closed?

-REPLY [4 votes]: This is not true. Let's take $H=L^2(\mathbb R)$ and $E=PW_1$ as the subspace of all $f$ whose Fourier transform is supported by $[-1,1]$ (the Paley-Wiener space). Let $\pi$ be the projection onto $L^2(-1,1)$.
-Then $f=\chi_{(-1,1)}(x)$ is in $\overline{\pi(E)}$: this follows because
-$$
-\frac{\sin ax}{ax} = \frac{1}{2a}\int_{-a}^a e^{itx}\, dt \in E
-$$
-for all small $a>0$, and clearly these functions converge to $f$ uniformly on $(-1,1)$ as $a\to 0$.
-However, $f\notin\pi(E)$ because the functions from $E$ are entire, and an entire $g\in E$ agreeing with $1$ on $(-1,1)$ would have to be identically equal to $1$ and thus not in $L^2(\mathbb R)$.<|endoftext|>
-TITLE: Looking for a function $g(x)$ such that $g(2x+2) = g(x) + 2x+2$
-QUESTION [6 upvotes]: So recently I got bored in maths class (I'm in tenth grade) and made up a little equation that looked something like this:
-$$g(f(x)) = g(x) + f(x) $$
-My original goal was to find different $g(x)$ to fulfill this equation for $f(x) = 2x, 2x+1$ and $2x+2$. I found solutions for the first two cases, but the third one keeps hiding its secrets from me and it's slowly taking my sleep.
-So if anyone of you guys out there who have some actual knowledge about the topic (I tried and asked my maths teacher and he didn't even know an answer for $f(x) = 2x+1$) could sort out how to find a fitting $g(x)$, please tell me. (There might be a general way to solve for any given $f(x)$?)
-I tried proving that a polynomial of some sort could solve the equation, but I stopped at third degree because it simply was too much work to write down all the formulas and I don't know how to handle all those terribly professional math programs like Octave etc. (I'm 15)
-Looking forward to some informative replies.
-Edit: I've already gained some really helpful insights here, but I still don't know why my original solution for $f(x)=2x+1$ wasn't correct. So, if anybody could give me a hint, that would be very appreciated.
Here is how I got there:
-I assumed that the function looked like $g(x)=$ $(ax^2+bx+c) \over (x+1)$ because I found out that $g(x)$ had no value at $x=-1$. By just putting this function into our initial equation, we get
-$${a(2x+1)^2+b(2x+1)+c \over x+1}={ax^2+bx+c \over x+1}+2x+1,$$ after simplifying
-$$3ax^2+4ax+a+bx+b=2x^2+3x+1$$
-We can now see that $a$ and $b$ have to be solutions to the following three equations in order to be valid parameters for our function $g$:
-$$1: 3a = 2$$
-$$2: 4a+b = 3$$
-$$3: a+b = 1$$
-When working out the solutions, one will easily find $a$ to be $2 \over 3$ and $b$ to be $1 \over 3$, thus our function $g(x)$ is defined as ${{2 \over 3}x^2+{1 \over 3}x}\over x+1$.
-For the test, we just put this function into our starting problem again:
-$${{2 \over 3}(2x+1)^2+{1 \over 3}(2x+1) \over x+1} = {{2 \over 3}x^2+{1 \over 3}x \over x+1} + 2x+1.$$
-Multiplying with $x+1$ (but still keeping in mind that $x=-1$ may never hold), we get:
-$${2 \over 3}(2x+1)^2+{1 \over 3}(2x+1) = {2 \over 3}x^2+{1 \over 3}x + (2x+1)(x+1)$$
-$${2 \over 3}(4x^2+4x+1)+{1 \over 3}(2x+1) = {2 \over 3}x^2+{1 \over 3}x + 2x^2+3x+1$$
-$${8 \over 3}x^2+{8 \over 3}x+{2 \over 3}+{2 \over 3}x+{1 \over 3} = {2 \over 3}x^2+{1 \over 3}x+2x^2+3x+1$$
-$${8 \over 3}x^2+{10 \over 3}x+1 = {8 \over 3}x^2+{10 \over 3}x+1, $$
-which holds for all $x \in \Bbb R \ | \ x \neq -1$.
-I really don't know where my error is, and I am very grateful for every piece of help.

-REPLY [3 votes]: So, let's start off with $g(2x+2)=g(x)+2x+2$. I suspect that the reason that the $f(x)=2x$ case came more easily was because the arguments of each of the $g$ terms were similar - let's see if we can do the same thing here, by tweaking the $x$ to $x+a$ and comparing:
-$$g(2x+2a+2)=g(x+a)+2x+2a+2$$
-If we choose $a=-2$, this solves $2a+2=a$, which means that the $g$ terms will be $g(2x-2)$ and $g(x-2)$ respectively. This is good news, because we've pushed the terms of the equation to look a bit more similar to one another. Indeed, if we introduce the notation $h(x)=g(x-2)$ into our equation, we get:
-$$h(2x)=h(x)+2x-2$$
-and this is a bit more like the first case you solved. Now, with recurrence equations in general (where functions are defined in terms of other values of the function, broadly speaking), one common way of approaching them is to try to push your recurrence into the form $F(x+1)=F(x)+\varphi(x)$, for some $F$ and $\varphi$. The reason this is so popular is because it allows us to make use of telescoping sums, which facilitate viewing recursions similarly to sums. For now, let's try to get our recurrence into a form involving only $x$ and $x+1$ as inputs.
-Now, as things stand, the arguments of (inputs to) $h$ are $x,2x$, which means that instead of adding $1$ between steps of computing, we're multiplying by $2$. This might hint at us to consider the powers of 2. Indeed, replacing $x$ by $2^x$ in our recurrence for $h$, we obtain:
-$$h(2^{x+1})=h(2^x)+2^{x+1}-2$$
-Now, this is better! We see an $x+1$ on one side, and an $x$ on the other, which is what we were shooting for. Let's call $j(x)=h(2^x)$ to take charge of this, and we land at:
-$$j(x+1)=j(x)+2^{x+1}-2$$
-This is good, because it lets us move from $x$ to $x+1$, which means that if we know 1 value of $j$, we know an infinity of them!
Observe:
-$$j(x+2)=j(x+1)+(2^{x+2}-2)=j(x)+(2^{x+1}-2)+(2^{x+2}-2)$$
-If we keep proceeding with this, we arrive at (for positive integers $n$):
-$$j(x+n)=(2^{x+1}-2)+(2^{x+2}-2)+...+(2^{x+n}-2)+j(x)$$
-If you've come across geometric series, you'll realise that we can collapse each of these sums:
-$$2^{x+1}+2^{x+2}+...+2^{x+n}=2^{x+n+1}-2^{x+1}$$
-$$(-2)+(-2)+...+(-2)=-2n$$
-So, we have the recurrence that $j(x+n)=j(x)+(2^{x+n+1}-2^{x+1})-2n$. Writing $y=x+n$, this gives that for any $x,y$, $j(y)=j(x)+2^{y+1}-2^{x+1}-2(y-x)$, provided that the difference between $x,y$ is an integer.
-This is good, because each of the variables only appears in unmixed terms - that is, there's no $xy,x/y$ terms or anything like that. So we can isolate them as:
-$$j(y)-2^{y+1}+2y=j(x)-2^{x+1}+2x$$
-The key here is that this holds no matter what $x$ and $y$ are, as long as they differ by an integer, so we can say that this quantity is constant.
-[n.b. technically it says that it's a constant + a 1-periodic function, but we'll gloss over this for now]
-So, we might now say that $j(x)=2^{x+1}-2x+C$, where $C$ is a constant. Let's now make the long journey back to $g(x)$:
-$j(x)=2^{x+1}-2x+C \implies h(2^x)=2^{x+1}-2x+C \implies h(x)=2x-2\log_2{x}+C$
-$$ h(x)=2x-2\log_2{x}+C \implies g(x-2)=2x-2\log_2{x}+C$$
-$$\implies g(x)=2x+4-2\log_2(x+2)+C=2x-2\log_2(x+2)+C'$$
-noting that $C$ could have been any constant, whence $C+4=C'$ is just any constant.
-[for those keeping track, our full solution is $g(x)=2x-2\log_2(x+2)+p(\log_2(x+2))$, where $p$ is any 1-periodic function]
-So, we have a general solution! It does come with the downside that, due to the nature of logarithms, we need to specify our domain as $\{x \mid x>-2\}$, so that anything we take the logarithm of is positive. But otherwise, we have a nice, continuous function which satisfies the functional equation we'd like it to. Hopefully this also hints at how you could approach the general case of $f(x)=2x+b$, or even $f(x)=ax+b$.<|endoftext|>
-TITLE: Orientation on a manifold as a sheaf
-QUESTION [5 upvotes]: I am thinking about orientation of a connected manifold $M$ of dim $n$ as a sheaf.
-There are two definitions I could use. The first is the sheaf associated to the presheaf
-$$U\mapsto H_n(M,M-U;R).$$
-The second is the sheaf of sections of generators of the fibration $R^*\to \tilde{M}_R\to M$, where $R$ is a ring and $R^*$ is the discrete group of units of $R$ and $\tilde{M}_R$ is the $R$-orientable cover of $M$.

-I have the following questions:
 1. Are the two definitions the same?
 2. Theorem 3.26 of Hatcher seems to translate to: if $M$ is closed, then it is orientable iff the orientation sheaf has a global section that generates stalk-wise, i.e. it is a principal $\underline{R}$-module.
 3. Lemma 3.27 seems to be saying that if $M$ is closed, then the presheaf above is already a sheaf.

-Are these correct? Feel free to tell me more things or give me references.

-REPLY [4 votes]: (1) Your two definitions can't be the same because one of them restricts to units in $R$ and the other does not. To be more precise, let $F_0$ be the presheaf of $R$-modules $U\mapsto H_n(M,M-U;R)$ and let $F$ be its sheafification. Let $G$ be the sheaf of continuous sections of the bundle of $R$-modules on $M$ whose fiber at $x$ is $H_n(M,M-\{x\};R)$, and let $G^*\subset G$ be the subsheaf of sections which generate every fiber. Then $F$ is your first sheaf, and $G^*$ is your second sheaf. But these are not isomorphic; rather, $F\cong G$.
To get this isomorphism, note that there is a canonical map $F_0\to G$ (given an element of $H_n(M,M-U;R)$, restrict it to $H_n(M,M-\{x\};R)$ for each $x\in U$), and this map induces an isomorphism on stalks (since any point has arbitrarily small neighborhoods on which both $F_0$ and $G$ evaluate to $R$, with the map being the identity). This map thus induces an isomorphism after sheafifying, giving an isomorphism $F\to G$. It is $G^*$ which is normally referred to as the "sheaf of $R$-orientations", not $F$. If you like, you can identify $G^*$ as a subsheaf $F^*$ of $F$ via the isomorphism; it can be described as the subsheaf of sections which generate every stalk as an $R$-module (or, more elegantly, as sheaf of isomorphisms of $\underline{R}$-modules $\operatorname{Iso}_{\underline{R}}(\underline{R},F)$). While $F\cong G$ canonically has the structure of a sheaf of $R$-modules, $F^*\cong G^*$ is merely a sheaf of $R^*$-sets.
-(2) Theorem 3.26(a) says that if $M$ is closed, connected, and $R$-orientable, then there is a global section of $F_0$ that makes $F$ a principal $\underline{R}$-module, i.e. it is (noncanonically) isomorphic to the constant sheaf $\underline{R}$ as a sheaf of $R$-modules. Note that Hatcher's definition of "$R$-orientable" is exactly that $G$ is a principal $\underline{R}$-module (or equivalently, that $G^*\cong\operatorname{Iso}_{\underline{R}}(\underline{R},G)$ has a global section), so it is trivial that in that case $F\cong G$ is also principal. The nontrivial content of Theorem 3.26(a) is to say that the global generating section is actually already a section of the presheaf $F_0$.
-(3) Lemma 3.27 doesn't quite say that $F_0$ is a sheaf if $M$ is closed, because $A$ is required to be a compact set, rather than an open set. Here is an instructive example. Take $M=S^1$ and let $U\subset M$ be an open set whose complement is countably infinite. Then $U$ is a disjoint union of countably infinitely many open intervals, so $F(U)\cong R^\mathbb{N}$ (since $F(V)=R$ for any open interval $V\subset S^1$). But $F_0(U)$ can be computed directly to be a direct sum of countably infinitely many copies of $R$ (the key point being that $H_0(M-U)$ is the free $R$-module on $M-U$, which is countable). So $F_0(U)\not\cong F(U)$, so $F_0$ is not a sheaf.<|endoftext|>
-TITLE: Possible all-Pentagon Polyhedra
-QUESTION [11 upvotes]: If a polyhedron is made only of pentagons and hexagons, how many pentagons can it contain? With the assumption of three polygons per vertex, one can prove there are 12 pentagons.
-Let's not make that assumption, and only use pentagons.
-12 pentagons: dodecahedron and tetartoid.
-24 pentagons: pentagonal icositetrahedron.
-60 pentagons: pentagonal hexecontahedron.
-72 pentagons: dual snub truncated octahedron.
-132 pentagons: 132-pentagon polyhedron.
-180 pentagons: dual snub truncated icosahedron.
-Here's what the 132 looks like.

-In that range of 12 to 180, what values are missing? For values missing here where an all-pentagon polyhedron exists, what is the most symmetrical polyhedron for that value?
-Edit: According to Hasheminezhad, McKay, and Reeves, there are planar graphs that lead to 16, 18, 20, and 22 pentagonal faces, but I've never seen these polyhedra.
-16 would be the dual of the snub square antiprism.
-20 would be the dual of this graph:

-REPLY [3 votes]: There is an easy way to get a polyhedron with $10n+2$ pentagons only.
-Start with a regular dodecahedron.
Take a congruent dodecahedron and merge it face to face with the first one, removing the merged faces to get an increment of $10$ faces. Repeat as desired with additional regular dodecahedra. -Yes, it's ugly, in that we lack convexity and don't have elegant symmetry (for $n \ge 2$). But it systematically makes infinitely many numbers of faces work, and the faces themselves are regular. -Addendum: -Following up on @Kundor's answer, we can interpret this in terms of graph theory. When we merge an additional dodecahedron into the figure, we are dividing one of the pentagonal faces of the graph into $11$ faces, such that the adjacent faces are undisturbed (they remain pentagonal). Such a division can be applied to any "base" polyhedron, so for example a base polyhedron with $16$ faces guarantees polyhedra with $10n+6$ faces, $n\ge 2$, as well. -Combining this result with @Kundor's implies that almost any even number of faces $\ge 12$ can be accessed. Only $14$ and $18$, which are not multiples of $4$ and too small to be reached via the $10$-face incrementation, require further analysis. For $18$ we have a planar graph corresponding to a polyhedron (basically a trigonal bipyramid where each face is divided into thirds) but for $14$ there are no planar graphs and thus no solutions! -So the possible numbers of faces turn out to be $12$ and all even numbers greater than or equal to $16$.<|endoftext|> -TITLE: What exactly are those "two irrational numbers" $x$ and $y$ such that $x^y$ is rational? -QUESTION [6 upvotes]: It's possible to prove nonconstructively that there exist irrational numbers $x$ and $y$ such that $x^y$ is rational, but that proof only proves that such numbers exist and does not specify what they are. -What is a constructive proof that there are two irrational numbers $x$, $y$ such that $x^y$ is rational, i.e. what are those numbers? - -REPLY [20 votes]: Let $x=3^{1/2}$ and $y=\log_{3}(4)$. Then $x^y=2$. -The proof that $x$ is irrational is familiar. For $y$, suppose $y=p/q$ where $p$ and $q$ are positive integers. Then $3^{p/q}=4$, so $3^p=4^q$. This is impossible, since $4^q$ is even and $3^p$ is odd.<|endoftext|> -TITLE: Proving $\pi^3 \gt 31$ -QUESTION [6 upvotes]: $$\large \pi^3 \gt 31$$ -Using a calculator, $\pi^3/31 \approx 1.0002$, so I thought this may be challenging to do by hand. -It is extremely easy with the use of any calculator, so I was wondering now: - -Can you prove the above inequality without the use of calculator or advanced computation in an elegant manner? - -REPLY [2 votes]: From -$$\sum_{k=0}^\infty \frac{960}{(2k+1)^6} = \pi^6$$ -we have -$$\sum_{k=1}^\infty \frac{960}{(2k+1)^6} = \pi^6-960$$ -and -$$\sum_{k=2}^\infty \frac{960}{(2k+1)^6} = \pi^6-961-\frac{77}{243}$$ -Since -$$960<961<961+\frac{77}{243}$$ -we can form a new series for $\pi^6-961$ as a weighted sum of the two truncations. -Solving the equation -$$\left(\pi^6-960\right)a+\left(\pi^6-961-\frac{77}{243}\right)b=\pi^6-961$$ -for rational $a$ and $b$ (the equation must hold identically, so the coefficients of $\pi^6$ and the constant terms must match separately) yields -$$a=\frac{77}{320}$$ -$$b=\frac{243}{320}$$ -Finally, -$$\pi^6-961=(\pi^3-31)(\pi^3+31)=3\sum_{k=0}^\infty \left(\frac{77}{(2k+3)^6}+\frac{243}{(2k+5)^6}\right)$$ -so - -$$\pi^3-31=\frac{3}{\pi^3+31}\sum_{k=0}^\infty \left(\frac{77}{(2k+3)^6}+\frac{243}{(2k+5)^6}\right)$$ - -is positive because the series contains only positive terms.<|endoftext|> -TITLE: What is the geometric meaning of representability?
-QUESTION [12 upvotes]: Representable functors play a large role in algebraic geometry when developed through the 'functor of points' approach. One finds schemes represent Zariski sheaves and this gives access to the great power of sheaf theory and topos theory. -My problem is I don't really understand representability, especially geometrically. Formally speaking, knowing some object $X$ represents a functor $F$ says that $F$ "probes" $X$ by giving at every object $Y$ the $Y$-points of $X$. But I just can't appreciate the (especially geometric) significance behind this. -What are some instructive geometric examples of representability? - -REPLY [16 votes]: Although I agree with Zhen Lin's and Qiaochu's comments, I thought it might be useful to give some classical examples where you can write down a functor first, and then ask whether it's representable. (Of course, you could always write down something that is already representable to begin with, but I doubt you're interested in that.) -Example. Here are some down-to-earth examples of representable functors: - -The functor $X \mapsto \Gamma(X,\mathcal O_X)$ is represented by $\mathbb A^1$ (also denoted $\mathbb G_a$ in this setting). -The functor $X \mapsto \Gamma(X,\mathcal O_X)^\times$ is represented by $\mathbb A^1 \setminus \{0\}$ (also denoted by $\mathbb G_m$). -The functor $X \mapsto \operatorname{GL}_n(\Gamma(X,\mathcal O_X))$ is represented by an open subscheme of $\mathbb A^{n \times n}$: the nonvanishing locus of the determinant. It is denoted simply by $\operatorname{GL}_n$. The case $n = 1$ gives $\mathbb G_m$. - -Arguably, the cleanest approach to linear algebraic groups, and especially if you want to consider more general group schemes, is by considering the fppf sheaves they define. For example, a sequence of algebraic groups -$$1 \to G_1 \to G_2 \to G_3 \to 1$$ -is exact if it is so as fppf sheaves. Giving a definition in more geometric terms is awkward to say the least. -Example. To give some more interesting geometric examples of representable functors: - -The Picard scheme $\operatorname{\mathbf{Pic}}_{X/k}$ represents a suitable¹ Picard functor. This generalises the notion of the Jacobian of a curve: for any smooth projective variety, the Picard group now has a continuous part ($\operatorname{\mathbf{Pic}}_{X/k}^0$; the Jacobian of $X$) and a discrete part ($\operatorname{Pic}(X)/\operatorname{Pic}^0(X)$, the Néron–Severi group of $X$; for higher-dimensional varieties this need not be just $\mathbb Z$). -The Hilbert scheme represents a functor that, loosely speaking, associates to a variety its family of subvarieties. Sometimes, you want to add some numerical conditions, e.g. one can consider Hilbert schemes of $n$-tuples of points in $X$ (including fat points, counted with multiplicity), which is birational to $\operatorname{Sym}^n X$: they are isomorphic on the part where the $n$ points are distinct. -Any moduli problem is a functor, and one can ask if it is representable. This is often not the case, until you pull the French trick of defining a larger class of objects (algebraic spaces or algebraic stacks) where this is true. For example, you can ask for the moduli space of curves of genus $3$. The functor assigns to each scheme $X$ the set of isomorphism classes of families $\mathscr C \to X$ whose fibres are smooth projective curves of genus $3$. - -Remark. Finally, observe that representability of functors is by no means a quality that's reserved for algebraic geometry! 
In any category, you can ask whether a functor on it is representable. It's a good exercise to keep your eyes open for any representable functors around, especially when you're dealing with easy categories (like abelian groups, $R$-modules, sets, or other categories that are relatively easy to describe). -Example. The forgetful functor $\operatorname{\underline{Ab}} \to \operatorname{\underline{Set}}$ is represented by $\mathbb Z$. -Example. The forgetful functor $\operatorname{\underline{Ring}} \to \operatorname{\underline{Set}}$ is represented by $\mathbb Z[x]$. (Compare with the very first example I gave above). -Example. The dualisation functor $\operatorname{\underline{Vect}}_k^{\operatorname{op}} \to \operatorname{\underline{Vect}}_k$ is represented by $k$. This is a slight abuse of language, since representable functors technically have to go to $\operatorname{\underline{Set}}$. -Most dualities are given by representable functors, often by definition. For a less trivial example, see Hartshorne's definition of Serre duality: the functor is $H^n(X, (-))^*$, and the representing object is $\omega_X^\circ$. -Exercise. One of my favourite examples is the functor $\operatorname{\underline{Top}}^{\operatorname{op}} \to \operatorname{\underline{Set}}$ that associates to a topological space $(X, \mathcal T)$ the topology $\mathcal T$, and to a continuous map $X \to Y$ the inverse image map $\mathcal T_Y \to \mathcal T_X$. Try to write down a topological space that represents this functor (it exists!). -(I think Zhen Lin might have told me this example when I was learning about representable functors.) - -¹Defining the correct functor is not so obvious, and there are multiple different things people might mean by the Picard scheme. The weakest notion is the representability of the fppf sheafification of the presheaf $U \mapsto \operatorname{Pic}(U)$.<|endoftext|> -TITLE: Example of an uncountable dense set with measure zero -QUESTION [12 upvotes]: As stated in the title, I am trying to find an example of an uncountable dense subset of $[0,1]$ that has measure zero. My intuition is that such a subset cannot exist, but I do not have a proof of this. -Currently, I can construct an uncountable dense subset that has arbitrarily small measure. Also, it is easy to construct an uncountable subset that has zero measure. -Thanks in advance! - -REPLY [5 votes]: You can even construct a set $S \subset \mathbb{R}$ such that $S \cap U$ is uncountable for every open $U \subseteq \mathbb{R}$ and still $m(S \cap U) = 0$ where $m$ is the Lebesgue measure. -To do this, we start with the Cantor set $C \subset [0, 1]$ and create "denser" sets by gluing together scaled down copies of $C$: -$$\begin{align} -S_n & := \bigcup \{ 3^{-n} (x+k) \, | \, x \in C, \, k \in \mathbb Z \} \\ -S & := \bigcup_{n=0}^{\infty} S_n \\ -\end{align}$$ -Since each $S_n$ is a countable union of nullsets (sets of measure $0$), it is itself a nullset. In the same way $S$ will be a nullset. -The numbers in $S$ will have a ternary "decimal" expansion with only a finite number of ones.<|endoftext|> -TITLE: If $L$ is a line bundle on a scheme $X$, what is the ring $\oplus_{n \geq 0} \Gamma(X, L^{ \otimes n})$? -QUESTION [5 upvotes]: If $L$ is a line bundle on a scheme $X$, what is the ring $A = \oplus \Gamma(X, L^{ \otimes n})$? This ring comes up in an exercise that I am struggling with right now, and I would like some insight into what this ring ... "is." What is the motivation for considering it?
Maybe there is some geometric insight that I am missing? How should I think about it? -And also, if $F$ is a quasi-coherent sheaf, $M = \oplus \Gamma(X, F \otimes L^{\otimes n})$ is an $A$-module. Again, what is the geometric meaning of this construction? -My thoughts: If $X$ is projective space, and $L$ a line bundle of positive degree, then $A$ is some Veronese subring of the homogeneous coordinate ring. What is $M$ in that case? How does $\tilde{M}$ relate to the sheaf $F$? -(Sorry, I know this is a poorly posed question. I am just getting really confused with the corresponding exercise 13.3H in Ravi's notes - I don't want to ask for a solution to that though. I think I know how a proof should go, but the objects involved are confusing me.) - -REPLY [11 votes]: I'm not sure this constitutes a complete answer, but at least let me give you some examples and remarks that indicate the importance of this construction (and, hopefully, therefore also a little bit of the geometric intuition behind it). -It's probably useful to play around with it a bit more, beyond the examples I give. What happens for example if $X$ is affine? -Example. Let $\mathscr L$ be a torsion line bundle (say on a smooth projective variety $X$), e.g. $\mathscr L^{\otimes 2} \cong \mathcal O$, but $\mathscr L \not\cong \mathcal O$. Then $\Gamma(X, \mathscr L^{\otimes n})$ is $0$ for all $n$ odd, and $1$-dimensional for $n$ even. This is generated by a nonzero section of $\Gamma(X,\mathscr L^{\otimes 2})$. -Example. If $X$ is a projective scheme and $\mathscr L = \mathcal O(1)$ is some very ample line bundle, then -$$A = \bigoplus_{n = 0}^\infty \Gamma(X,\mathscr L^{\otimes n})$$ -is the affine coordinate ring of $X$ with respect to the given projective embedding. We know that this ring might depend on the embedding; however, it is always finitely generated (even if $\mathscr L$ is merely ample as opposed to very ample: exercise). -Erratum. The above example is not quite correct, as pointed out by Daniele A. The ring $A$ only equals the affine coordinate ring of $X$ with respect to the given embedding if the embedding is projectively normal. In general, they are isomorphic in large enough degree, hence the ring $A$ is still finitely generated (but not necessarily in degree $1$). -Remark. On the other hand, one can turn this around and ask if we can use this ring to define a projective embedding of $X$, if we have no idea what $X$ is. This is an idea that has proven very useful in the minimal model programme (MMP): if we let $\mathscr L = \omega_{X/k} = \Omega_{X/k}^n$, then we could try to ask whether the natural morphism -$$X \to \operatorname{Proj} \bigoplus_{n=0}^\infty \Gamma(X, \omega_{X/k}^{\otimes n})$$ -is an isomorphism, or at least a birational map. The right-hand side is called the canonical model of $X$. -Example. If $X = \mathbb P^n$, then this is obviously not true. Indeed $\omega_X = \mathcal O(-n-1)$, none of whose tensor powers has any sections. That is, the right hand side is just a point. -There are examples where the right hand side (if finitely generated) can have any dimension between $0$ and $n$ (the dimension of $X$); this is then the Kodaira dimension of $X$ (although this is not literally the definition of Kodaira dimension). -Remark. On the other hand, it is a celebrated (and very recent!) theorem of Birkar–Cascini–Hacon–McKernan that the canonical ring is finitely generated, whenever $X$ is smooth and projective.
This result was obtained independently by Siu using analytic methods in the case where $X$ is of general type. (Don't worry, I don't understand any of the words the two papers use either.) -Reid had already proven (in 1980): if $X$ is smooth, proper, and of general type, under the assumption of finite generation of the canonical ring, the morphism $X \to \operatorname{Proj} \bigoplus \Gamma(X,\omega_{X/k}^{\otimes n})$ is birational. -These two results together more or less solve the minimal model programme for varieties of general type. -Remark. Mark has indicated below that there exist examples of line bundles for which the ring $\bigoplus \Gamma(X,\mathscr L^{\otimes n})$ is not finitely generated. A place to read about this example is Lazarsfeld's Positivity in Algebraic Geometry I, Example 2.3A (p.158). -The idea is to construct a divisor $D$ on a surface $X$ such that the base locus of $mD$ always contains the same curve $C$, but $mD - C$ is base-point free. Then the ring $\bigoplus \Gamma(X,\mathcal O(mD))$ cannot be finitely generated, because otherwise the multiplicity of $C$ in the base locus would go to infinity.<|endoftext|> -TITLE: Is the metric on the circle, induced from the plane, not a flat one? -QUESTION [6 upvotes]: My question concerns the highlighted part posted below, from a Wikipedia article. (Link to the revision at the time of this post.) -I'd say I can't detect the curvature of the unit circle if I go along the curved path within ${\mathbb R}^2$. The induced metric only measures the one-dimensional length and forgets about the plane. -So why would the induced metric not be flat? - -REPLY [9 votes]: It depends on which kind of "metric" you're looking at. -If we view the unit circle and the plane as Riemannian manifolds, then the Riemannian metric on the circle induced by its embedding in the plane is indeed flat. Here we're speaking about Riemannian metrics, which work only locally. -However, if you take those two Riemannian manifolds and derive a global distance function on each of them as the geodesic distance between two points, you make them into metric spaces. In the case of the plane, this produces the usual Euclidean distance. -However, as metric spaces, the metric on the unit circle is not the same as the metric it gets as a subset of $\mathbb R^2$. -Two opposite points on the unit circle have distance $\pi$ in the metric of geodesic distance within the circle, whereas their distance in the metric inherited from the metric space $\mathbb R^2$ is $2\neq \pi$.<|endoftext|> -TITLE: What is the cardinality of the set of the empty set? -QUESTION [8 upvotes]: Given the set $ A = \{ \emptyset \}$ what is $|A|$? I just learned about sets today in class and I'm unsure how to answer this. - -REPLY [3 votes]: In a way it is the start of the construction of the natural numbers (cardinalities of finite sets): -$$0=|\emptyset|,\quad 1=|\{\emptyset\}|,\quad 2=|\{\emptyset,\{\emptyset\}\}|,\quad 3=|\{\emptyset,\{\emptyset\},\{\emptyset,\{\emptyset\}\}\}|,\ \dots$$ -as you see, the first set (between the bars $|\cdot|$) has no elements, the second one has one element, and the third one has two distinct elements, as the empty set and the set whose only element is the empty set are different. The (usually confusing) "$\dots$" just keeps adding the nested braces.<|endoftext|> -TITLE: Trouble understanding Borel sets definition -QUESTION [8 upvotes]: The definition in my book is: - -The collection $B$ of Borel sets of real numbers is the smallest - $\sigma$-algebra of sets of real numbers that contains all of the open - sets of real numbers.
- -First, I'm a bit confused by the wording here. -Does this mean that if a collection $B$ is a $\sigma$-algebra and it contains all of the open sets of real numbers, then the sets in $B$ are called Borel sets? -I'm reading through some of the other answers to this question now, but it seems like different texts use different definitions, so if anyone could shed some light on the definition I provided, that would be helpful. - -REPLY [2 votes]: First of all, for a Borel set, you need to have a topological space. Now by looking at the definition you have provided, it seems that you are considering the real line with the standard topology. -Explanation for your definition: A set $\beta $ is said to be a Borel sigma-algebra if the following two conditions are satisfied: - -It contains all the open sets. -It is a sigma-algebra, and if $C$ is any other sigma-algebra containing all the open sets then $\beta \subset C$. (That is, $\beta$ is the smallest such set.) - -We will call the elements of such a set Borel sets. (Observe that $\beta$ is a collection of sets. For the intuition, you can think of it as adding complements and countable intersections of the given open sets to the existing topology.) -In the case of $\mathbb{R}$ with the standard topology it is not that easy to find a set which is not a Borel set. Most of the sets which you can think of are Borel sets. If you study measure theory you will eventually get a set which is not Borel. -If you are comfortable with topological spaces, and if you have got that the Borel sigma-algebra depends on the topology, then I can give you a simple example of a non-Borel set in a different topological space just for understanding purposes. Please comment to let me know.<|endoftext|> -TITLE: If these two expressions for calculating the prime counting function are equal, why doesn't this work? -QUESTION [10 upvotes]: So I've seen some different explanations of how the zeros of the zeta function can predict the prime counting function. The common example is that -$$\pi(x)=\sum_{n=1}^\infty \frac{\mu(n)}{n}J(x^{1/n})$$ -where $\mu(n)$ is the Möbius function and -$$J(x)=Li(x)+\sum_{\rho}Li(x^{\rho})-\ln(2)+\int_{x}^\infty\frac{1}{t(t^2-1)\ln(t)}dt$$ -For future ease let's call -$$m(x)=-\ln(2)+\int_{x}^\infty\frac{1}{t(t^2-1)\ln(t)}dt$$ -I've also seen that -$$\pi(x)=R(x)-\sum_{\rho}R(x^{\rho})-\sum_{n=1}^\infty R(x^{-2n})$$ -The sums over the $\rho$ are for the complex nontrivial zeroes and the last term above is accounting for the trivial zeros and -$$R(x)=\sum_{n=1}^\infty\frac{\mu(n)}{n}Li(x^{1/n})$$ -My thought process was that I have two expressions for $\pi(x)$. They must simply be different forms of the same thing, otherwise I could equate them and get something new. So I equated them to see what would happen. (On a side note, I'm not really sure where the offset logarithmic integral is defined and where it is not. I initially assumed they were all the offset Li, but I'm not sure if that's accurate, so please correct me if that's wrong or if it even matters.) Anyway let's take the first form of $\pi(x)$ and convert it a bit.
-\begin{align}\pi(x)&=\sum_{n=1}^\infty\frac{\mu(n)}{n}J(x^{1/n})\\&=\sum_{n=1}^\infty\frac{\mu(n)}{n}Li(x^{1/n})+\sum_{n=1}^\infty\frac{\mu(n)}{n}\sum_{\rho}Li(x^{\rho/n})+\sum_{n=1}^\infty\frac{\mu(n)}{n}m(x^{1/n})\\&=\sum_{n=1}^\infty\frac{\mu(n)}{n}Li(x^{1/n})+\sum_{\rho}\sum_{n=1}^\infty\frac{\mu(n)}{n}Li(x^{\rho/n})+\sum_{n=1}^\infty\frac{\mu(n)}{n}m(x^{1/n})\\&=R(x)+\sum_{\rho}R(x^\rho)+\sum_{n=1}^\infty\frac{\mu(n)}{n}m(x^{1/n})\end{align} -We know from the other form that $\pi(x)$ also equals -$$R(x)-\sum_{\rho}R(x^\rho)-\sum_{n=1}^\infty R(x^{-2n})$$ -Immediately the fact that one expression has a positive sum over the zeros and the other has a negative sum over the zeros hints to me that something's not right. If we continue on, equating the two and then substituting back into the $\pi(x)$ equation, we would get -$$\sum_{\rho}R(x^{\rho})=-\frac{1}{2}\left(\sum_{n=1}^\infty\frac{\mu(n)}{n}m(x^{1/n})+\sum_{n=1}^\infty R(x^{-2n})\right)$$ -And from that, we get to -$$\pi(x)=R(x)+\frac{1}{2}\sum_{n=1}^\infty\left(\frac{\mu(n)}{n}m(x^{1/n})-R(x^{-2n})\right)$$ -Interestingly, because -$$\sum_{n=1}^\infty\frac{\mu(n)}{n}m(x^{1/n})=-\sum_{n=1}^\infty R(x^{-2n})$$ -We circle back around, attaining only the approximation -$$\pi(x)=R(x)-\sum_{n=1}^\infty R(x^{-2n})$$ -Thus the result is seemingly trivial; however, all of my steps seemed legitimate. What is wrong here? - -REPLY [9 votes]: The short answer was already given "The $+$ sign for $\sum_{\rho}Li(x^{\rho})$ in your definition of $J(x)$ is wrong" so let's use this opportunity to: - -provide a glimpse of the derivation of the explicit formulas (a really fascinating subject after all!) and -consider the next traps in this game... - -(what follows is from a sketch of the derivation of $\;\pi^*(x)=R(x)-\sum_{\rho} R(x^{\rho})\,$ in this answer, -see Edwards' excellent "Riemann's Zeta Function" for detailed proofs. -As usual $\,s\in\mathbb{C}\,$ will be written as $\;s:=\sigma+it\;$ and every $\,p$ is assumed prime) -$$-$$ -Let's start with the famous Euler product: -$$\tag{1}\displaystyle\zeta(s)=\prod_{p\ \text{prime}}\frac 1{1-p^{-s}}\quad\text{for}\ \ \Re(s)=\sigma>1$$ -When we apply Perron's formula to the derivative of the logarithm of $\zeta(s)$ (considered as a Dirichlet series) we get for $\,c>1$ and $\,x$ any positive real value: -$$\tag{2}-\frac1{2\pi i}\int_{c-i\infty}^{c+i\infty}\frac{\zeta'(s)}{\zeta(s)}\frac{x^s}s\,ds=\sum_{p^k\le x}^{*}\log(p)=\sum_{n\le x}^{*}\Lambda(n)=:\psi^*(x)$$ -where $\psi$ is the second Chebyshev function that was proved asymptotic to $x$ (as $\,x\to\infty$) by the P.N.T. (and $\psi^*$ its slight variation where the values of $\psi(x)$ at the discontinuities are replaced by the mean value of its left and right limits). -From the poles of the integrand in $(2)$ (supposing $x\ge c\;$ i.e. $\;x>1$) we obtain the residues at $\displaystyle s=0\mapsto -\frac{\zeta'(0)}{\zeta(0)}=-\log(2\pi),\ 1\mapsto x^1,\ \rho\mapsto -\frac{x^\rho}{\rho}$ for $\rho$ any zero of $\zeta(s)$ -(I don't distinguish the trivial and nontrivial zeros at this point...). -This gives us directly the first explicit (von Mangoldt) formula: -$$\tag{3}\boxed{\displaystyle\psi^*(x)=-\log(2\pi)+x-\sum_{\rho} \frac {x^{\rho}}{\rho}}\quad(x>1)$$ -(for $x<2$ we must have $\,\psi^*(x)=0\,$ from its definition $(2)$) -(more details by M.
Watkins and an animation with an increasing number of nontrivial zeros) -The next step is to integrate by parts the derivative of $\,\psi^*(t)$ divided by $\log(t)$ (see here) to get the Riemann prime-counting function $\Pi^*$ (also denoted $\Pi_0$ or $J_0$ or $J$ or $f$ by Riemann...): -$$\tag{4}\int_0^x\frac{\psi^{*\,'}(t)\ dt}{\log\,t}=\int_2^x\frac{\psi^{*\,'}(t)\ dt}{\log\,t}=\sum_{n\le x}^{*}\frac {\Lambda(n)}{\log\,n}=\sum_{p^k\le x}^{*}\frac 1k=:\Pi^*(x)$$ -(more rigorous derivations are needed here. Edwards' exposition of von Mangoldt's proof and Landau's 1908 paper "Nouvelle démonstration pour la formule de Riemann..." may help) -Let's define the logarithmic integral by $\ \displaystyle\operatorname{li}(x):=P.V.\int_0^x \frac{dt}{\log\,t}\,$ then we may combine $(4)$ and the integrals of $\ \displaystyle\operatorname{li}(t^{\rho})'=\frac{t^{\rho-1}}{\log\,t}\;$ from $0$ to $x\,$ to get Riemann's explicit formula for $x > 2$: -$$\tag{5}\boxed{\displaystyle\Pi^*(x)=\operatorname{li}(x)-\sum_{\rho} \operatorname{li}(x^{\rho})}\quad(x>2)$$ -(for $x<2$ we must have $\;\Pi^*(x)=0\,$ from its definition $(4)\;$) -To deduce formally (I don't know a convergence proof) the last formula let's invert $\;\displaystyle\Pi^*(x)=\sum_{p^k\le x}^{*}\frac 1k=\sum_{k>0} \frac{\pi^{*}\bigl(x^{1/k}\bigr)}k\;$ using the Möbius inversion formula $\ \displaystyle\pi^{*}(x)=\sum_{k=1}^{\infty} \frac{\mu(k)}k \Pi^*\bigl(x^{1/k}\bigr)\;$ to get: -$$\tag{6}\boxed{\displaystyle\pi^*(x)=R(x)-\sum_{\rho} R(x^{\rho})},\quad(x>1)$$ -We started $(1)$ with a function 'encoding' the primes $\,\zeta\,$ and ended with the primes 'counted' using $\zeta$'s zeros in $(6)$! -(Matthew Watkins' "encoding" of the distribution of prime numbers by the nontrivial zeros with an animation) -$$-$$ -Your steps were thus formally right and the confusion came only from the typo in the $J(x)$ formula. In your final expression corresponding to $(6)$ you also correctly considered all the zeros and not only the nontrivial ones. -Should you wish to evaluate $\operatorname{R}(x)$ using the Gram series: $\;\displaystyle\operatorname{R}(x^{\rho})=1+\sum_{m=1}^\infty \frac{(\rho\log(x))^m}{m!m \zeta(m+1)}\;$ then take care with precision (the terms become rather large before decreasing again). -You may use the fact that $\Pi^*(x)=0$ for $x<2$ to reduce the evaluation of $\ \displaystyle\pi^{*}(x)=\sum_{k=1}^{\infty} \frac{\mu(k)}k \Pi^*\bigl(x^{1/k}\bigr)\;$ and the individual $\,\operatorname{R}(x^{\rho})$ terms in $(5)$ to $\left\lceil\dfrac{\log x}{\log 2}\right\rceil$ terms (I'll have to reverify this part). -Concerning the offset logarithmic integral $\operatorname{Li}$ versus the "standard definition $\operatorname{li}$" used I think that the integral defined by Riemann should start at $0$ for a proper transition from $(4)$ to $(5)$. -This matters of course in $(5)$ and your expression for $J$ (from the $\,\operatorname{li}(2)\approx 1.045$ difference for any of the infinite terms) but doesn't matter in the final $(6)$ since $\;\displaystyle\sum_{n=1}^{\infty} \frac{\mu(n)}n=0$ which was proved by Landau (this appears formally as the limit of $\,\displaystyle\frac 1{\zeta(s)}=\sum_{n=1}^{\infty} \frac{\mu(n)}{n^s}\;$ as $s\to 1$ but the convergence proof is, according to Hardy, as deep as the PNT). -Now that we have the correct bound at $0$ for $\,\operatorname{li}\,$ and the correct sign in your expression for $J(x)$, can we use it for numerical evaluation?
-In fact, no, because for $x>1$ the phase of $x^{\rho}$ is important (we are integrating the reciprocal of the logarithm of $t$ up to $x^{\rho}$, but once evaluated $x^{\rho}$ can't be distinguished from $x^{\rho+2k\pi i/\ln x}$!). -Fortunately we can replace $\operatorname{li}$ by the exponential integral $\operatorname{Ei}$ using $\operatorname{Li}(x)=\operatorname{Ei}(\log\;x)$ -and will obtain the correct results by replacing $\operatorname{li}(x^{\rho})$ with $\operatorname{Ei}(\rho\,\log\;x)$. -Further there is a neat continued fraction allowing easy evaluation of $\,\operatorname{Ei}(\sigma+it)\,$ for $\sigma$ small and $t$ large (use the c.f. of $\operatorname{E1}(-s)$ and $\,\operatorname{Ei}(s)=-\operatorname{E1}(-s)+\pi i$ for $\Im(s)>0$). -If you prefer to invoke directly the $\,\operatorname{Ei}$ function of some software you should be warned that some of them denote by Ei what is in reality the E1 function (see A&S $5.1.7$ for a conversion, A&S $5.1.22$ for the continued fraction of $\,\operatorname{E1}$ ($n=1$), approximations and graphics) -Result for $(6)$ using the Gram series and the first $100$ nontrivial zeros, for $x$ in $(2..100)$: (plot not reproduced here)<|endoftext|> -TITLE: How to recover the topology of a topological ring using Yoneda lemma -QUESTION [9 upvotes]: Consider the category of topological rings. By the Yoneda embedding, if $A$ is a topological ring and the functor $\mathrm{Hom}(-,A)$ is given, then we can recover the topological ring $A$ from this functor $\mathrm{Hom}(-,A)$. My question is, how to determine the topology of $A$? I know how to recover the set structure of $A$, and the addition and multiplication law on it. -[EDIT] More precisely, as a set, $\mathrm{Hom}(\mathbb Z[X],A)$ is isomorphic to $A$, where $\mathbb Z[X]$ carries the discrete topology. In this way we can recover the set structure of $A$. My question is how to determine the topology of $\mathrm{Hom}(\mathbb Z[X],A)$ only using the information of the functor $\mathrm{Hom}(-,A)$, such that the natural map $A\to\mathrm{Hom}(\mathbb Z[X],A)$, $a\mapsto(X\mapsto a)$ is a homeomorphism of topological spaces? -@crystalline said that we can consider the compact-open (?) topology on $\mathrm{Hom}(\mathbb Z[X],A)$, yes it recovers the topology on $A$, but I'm not satisfied with this answer, because if we give the compact-open topology to $\mathrm{Hom}$ sets, then for $A\to B$ a continuous ring homomorphism, I think in general the induced maps $\mathrm{Hom}(B,R)\to\mathrm{Hom}(A,R)$ and $\mathrm{Hom}(R,A)\to\mathrm{Hom}(R,B)$ are not continuous. -[EDIT2] All $\mathrm{Hom}$ here are continuous ring homomorphisms (i.e. the morphisms in the category of topological rings). Sorry for the confusion. - -REPLY [8 votes]: There is always a trivial way to recover the topology from $\operatorname{Hom}(-,A)$: it is the finest topology that makes every element of $\operatorname{Hom}(B,A)$ continuous for every topological ring $B$ (to prove this, take $B=A$ and consider the identity map). Of course, this is rather unsatisfying because it requires us to already know about the topology on all possible $B$s that we might plug into the functor (including $A$ itself!). Ideally, we would like to have a single $B$ (or some small number of $B$s) which we can understand easily and use to recover the topology of $A$. -Unfortunately, this is not possible.
More precisely, there is no small subcategory $C$ of the category of topological rings such that the topology on a topological ring $A$ is determined by the restriction of the functor $\operatorname{Hom}(-,A)$ to $C$. -Here is a sketch of a proof. Fix a regular cardinal $\kappa$ and consider two topologies on the ordinal $\kappa+1$. The first topology, which gives a space I will call $X$, is the usual order topology. The second topology, which gives a space I will call $Y$, is the refinement of the order topology obtained by declaring $\{\kappa\}$ to be open. Note that both $X$ and $Y$ are locally compact Hausdorff, and the identity map $i:Y\to X$ is continuous. Furthermore, $i$ is a homeomorphism when restricted to any subset of cardinality $<\kappa$ (this follows from the regularity of $\kappa$). -We can take the free topological rings $F(X)$ and $F(Y)$ on $X$ and $Y$; the topology on these spaces turns out to be not too hard to understand because $X$ and $Y$ are locally compact Hausdorff (namely, they are just colimits of finite powers of $X$ and $Y$ representing all possible formal sums and products of elements). Now we have an induced continuous bijective homomorphism $F(i):F(Y)\to F(X)$, whose inverse is not continuous. However, $F(i)$ is a homeomorphism onto its image when restricted to any subset of $F(Y)$ of cardinality $<\kappa$, since $i$ is, and any subset of $F(Y)$ of cardinality $<\kappa$ involves fewer than $\kappa$ elements of $Y$. It follows that for any topological ring $B$ of cardinality $<\kappa$, a homomorphism $f:B\to F(Y)$ is continuous iff $F(i)f$ is continuous. That is, $F(i)$ induces an isomorphism between the functors $\operatorname{Hom}(-,F(Y))$ and $\operatorname{Hom}(-,F(X))$ when restricted to the subcategory of topological rings of cardinality $<\kappa$. Since $F(i)$ is not actually a homeomorphism, it follows that the topologies of $F(X)$ and $F(Y)$ cannot be recovered from the functor $\operatorname{Hom}(-,F(X))\cong \operatorname{Hom}(-,F(Y))$ restricted to topological rings of cardinality $<\kappa$. - -That negative answer aside, here is the closest thing I can see to a positive answer. The topology on a space $A$ is determined by the convergence of ultrafilters on $A$ (or nets, or filters, if you prefer). Given an ultrafilter $U$ on $A$, consider the space $A_U=A\cup\{\infty\}$, topologized by saying every subset of $A$ is open and a set containing $\infty$ is open iff its intersection with $A$ is in $U$. Note that the space $A_U$ depends only on the underlying set of $A$, not the topology of $A$. The ultrafilter $U$ then converges to a point $x\in A$ iff the map $A_U\to A$ which is the identity on $A$ and sends $\infty$ to $x$ is continuous. -Now if $A$ is a topological ring, we can describe its convergent ultrafilters in terms of the functor $\operatorname{Hom}(-,A)$ as follows. An ultrafilter $U$ on $A$ converges to a point $x\in A$ iff the unique ring homomorphism $F(A_U)\to A$ from the free topological ring on $A_U$ to $A$ which is the identity on $A$ and sends $\infty$ to $x$ is in $\operatorname{Hom}(F(A_U),A)$. (This is a satisfying description because the topological ring $F(A_U)$ can be constructed from knowing only the underlying set of $A$, which can be described as the Hom-set $\operatorname{Hom}(\mathbb{Z}[X],A)$, and we can tell what an element of $\operatorname{Hom}(F(A_U),A)$ does on points by considering the induced map $\operatorname{Hom}(\mathbb{Z}[X],F(A_U))\to \operatorname{Hom}(\mathbb{Z}[X],A)$. 
The map $F(A_U)\to A$ which we're trying to realize depends only on the ring structure of $A$, which you say you already know how to recover.) -Of course, this description is probably not useful very often, since the topological rings $F(A_U)$ are not very easy to think about.<|endoftext|> -TITLE: Integrating sine with Monte Carlo / Metropolis algorithm -QUESTION [5 upvotes]: I'm learning the Monte Carlo / Metropolis algorithm, so I made up a simple question and wrote some code to see if I really understand it. The question is simple: integrating sine from $0$ to $\pi$. The integral can be calculated analytically. If my code is correct, the integral should be 2. -According to Monte Carlo, we can approximate an integral with N samples: -$$ -\int _{a}^{b} f(x)dx \approx \frac {1}{N} \sum _{i=1}^{N} \frac {f(X_i)}{p(X_i)} -$$ -where in my sine integral case: -$$ -f(x) = \sin(x)\\ -a = 0\\ -b = \pi\\ -$$ -I'm using a uniform distribution to sample $X \in [0, \pi]$, so -$$ -p(x) = \frac {1}{\pi} -$$ -The Metropolis algorithm also requires an "acceptance probability", which is the probability that we accept a transition from the current location $X$ to a new location $X'$. The acceptance probability is defined as: -$$ -a(X, X') = \min\left(1, \frac {f(X')}{f(X)}\right) -$$ -My code is at http://bl.ocks.org/eliangcs/6e8b45f88fd3767363e7. Every time you refresh your browser, it makes 100 samples and shows the integral solution. But it always seems to give me a way larger value ($\approx 2.5$) instead of the correct solution: 2. Why? I guess I made a mistake on the PDF $p(x) = 1/\pi$, which does not consider the acceptance probability $a(X, X')$. If that is the case, how should I adjust $p(x)$ then? - -REPLY [3 votes]: I see that your code is "fixed" now, but that it's using independent samples each time, which makes it regular Monte Carlo integration, not Markov chain Monte Carlo integration. -I hit the same problem you did, so I looked into it, and it turns out that using the Metropolis algorithm for integration of a single-term function like this is not straightforward. -Like me, you thought that of each sample, the $x$ component was $f(x)$ and that the $y$ component was $p(x)$, so that you could get an estimate with $\frac{x}{y}$ aka $\frac{f(x)}{p(x)}$. -It turns out that is not correct. One reason is that the $y$ component is not from a normalized PDF and the normalization constant is unknown. For more information, check this answer out: https://stats.stackexchange.com/a/248697 -I've come across three ways to use the Metropolis algorithm for integration: - -Using a "Harmonic Mean Estimator" which is also known as "The worst Monte Carlo method ever" and is not reliable as it has infinite variance. -Use clever math tricks so that you don't need to know the normalization constant because it cancels out being on both the top and bottom of a division. The harmonic mean estimator does this I believe. -Ultimately you need to know the normalization constant to be able to turn $f(x)$ into $p(x)$ by division of that constant. One way to calculate the normalization constant would be to count how many samples fell into a small interval $[a,b]$ and get that as a percentage of all samples obtained, calling that $C$. If you integrate the function $y=f(x)$ over that same $[a,b]$ interval, and call it $D$, the normalization constant can be estimated as $\frac{D}{C}$. A smaller interval helps for calculating $D$ but is worse for calculating $C$. A minimal sketch of this third approach is given below.
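-Here is a minimal sketch of that third approach in Python: a Metropolis random walk targets $p(x) \propto \sin x$ on $(0,\pi)$, and since the normalization constant of that density is exactly $\int_0^\pi \sin x\,dx$, the estimate $D/C$ approximates the desired integral $2$. The window $[a,b]=[1.4,1.7]$, the step size, and the sample count are arbitrary choices for illustration, not anything prescribed by the method:
-
-    import math, random
-
-    f = lambda t: math.sin(t) if 0.0 < t < math.pi else 0.0
-    x, n, hits, a, b = math.pi / 2, 200_000, 0, 1.4, 1.7
-    for _ in range(n):
-        x_new = x + random.gauss(0.0, 0.5)            # symmetric random-walk proposal
-        if random.random() < min(1.0, f(x_new) / f(x)):
-            x = x_new                                  # accept; otherwise stay put
-        if a <= x <= b:
-            hits += 1                                  # count samples landing in [a, b]
-    C = hits / n                                       # empirical fraction of samples in [a, b]
-    D = math.cos(a) - math.cos(b)                      # exact integral of sin over [a, b]
-    print(D / C)                                       # estimate of the normalization constant, ~2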
An alternate idea may be to keep a histogram of the samples and use the interval of the histogram bucket that has the highest count of samples in it, with the assumption that higher counts are more accurate.<|endoftext|> -TITLE: Is there an easy way to compute the determinant of a matrix with 1's on the diagonal and a's on the anti-diagonal? -QUESTION [5 upvotes]: \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & a \\ 0 & 1 & 0 & 0 & a & 0 \\0 & 0 & 1 & a & 0 & a \\0 & 0 & a & 1 & 0 & a \\0 & a & 0 & 0 & 1 & 0 \\a & 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix} -Thanks - -REPLY [15 votes]: We have the matrix -$$A= \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & a \\ 0 & 1 & 0 & 0 & a & 0 \\0 & 0 & 1 & a & 0 & a \\0 & 0 & a & 1 & 0 & a \\0 & a & 0 & 0 & 1 & 0 \\a & 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix}$$ -Eliminating the $a$s below the diagonal by adding multiples of the first, second and third line of $A$, we obtain the upper triangular matrix $A'$: -$$A'= \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & a \\ 0 & 1 & 0 & 0 & a & 0 \\0 & 0 & 1 & a & 0 & a \\0 & 0 & 0 & 1-a^2 & 0 & a-a^2 \\0 & 0 & 0 & 0 & 1-a^2 & 0 \\0 & 0 & 0 & 0 & 0 & 1-a^2 \\ \end{bmatrix}$$ -Since adding a multiple of a row to another row does not alter the determinant, we can say that $\det(A) = \det(A')$. Furthermore, the determinant of an upper triangular matrix is the product of its diagonal entries. Thus, we have: -$$\det(A) = \det(A') = (1-a^2)^3$$ -Alternatively, eliminate the $a$s above the diagonal by adding multiples of the fourth, fifth and sixth row. The determinant of a lower triangular matrix can be obtained in the same way.<|endoftext|> -TITLE: Upper bound for partial sum of binomial coefficients: $\sum\limits_{i=0}^k \binom{n}{i} \le (n+1)^k$ -QUESTION [5 upvotes]: I am familiar with the proof of the upper bound $\sum_{i=0}^k \binom{n}{i} \le (ne/k)^k$, but I was told that the worse bound $$\sum_{i=0}^k \binom{n}{i} \le (n+1)^k$$ -has a simple combinatorial proof, but I cannot see it. I know the left-hand side is the number of ways to select $\le k$ objects from $n$ objects, but I am having trouble with the right-hand side. Any hints or insights would be helpful! - -REPLY [2 votes]: Define $E:=\{1,...,n\}$ to be a set of cardinality $n$: -$$X:=\{A\subseteq E\mid |A|\leq k\}$$ -$$Y:=\{f:\{1,...,k\}\rightarrow E\cup\{0\}\}$$ -Now: -$$\psi : X\rightarrow Y $$ -$$A:=\{a_1<... -TITLE: The oriented Grassmannian $\widetilde{\text{Gr}}(k,\mathbb{R}^n)$ is simply connected for $n>2$
There is thus a fiber bundle $SO(n) \to X$, with fiber $SO(k) \times SO(n-k)$. Since $SO(n)$ is path connected, so is $X$. This fiber bundle then induces a homotopy long exact sequence: -$$\dots \to \pi_1(SO(k) \times SO(n-k)) \to \pi_1(SO(n)) \to \pi_1(X) \to 1.$$ -One must see that $\pi_1(SO(k) \times SO(n-k)) \to \pi_1(SO(n))$ is surjective to prove the claim about $X$. It is sufficient to prove that $\pi_1(SO(k)) \to \pi_1(SO(n))$ is surjective when $2 \le k \le n$ (because if $k = 1$, then $n-k = n-1 \ge 2$). -Let $S^{n-1} \subset \mathbb{R}^{n}$ be the standard $(n-1)$-sphere, and let $e_n = (0,\dots,0,1) \in S^{n-1}$ be the last standard basis vector. The application $SO(n) \to S^{n-1}$, $A \mapsto A \cdot e_n$, is a fiber bundle. Its fiber $F$ over $e_n$ is the subgroup of $SO(n)$ consisting of block matrices of the type $$\begin{pmatrix} A' & 0 \\ 0 & 1 \end{pmatrix}$$ -where $A'$ is an $(n-1) \times (n-1)$ matrix. This subgroup is isomorphic to $SO(n-1)$. You thus get a homotopy long exact sequence associated to $SO(n-1) \to SO(n) \to S^{n-1}$. There are two cases: - -when $n \ge 4$, $S^{n-1}$ is 2-connected, and the LES tells you that $\pi_1(SO(n-1)) \to \pi_1(SO(n))$ is an isomorphism. -when $n = 3$, there is a short exact sequence $0 \to \mathbb{Z} \to \pi_1(SO(2)) \to \pi_1(SO(3)) \to 0$. In particular $\pi_1(SO(2)) \to \pi_1(SO(3))$ is surjective. (We do not need this, but this short exact sequence is isomorphic to $0 \to \mathbb{Z} \to \mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} \to 0$). - -By induction, you find that $\pi_1(SO(k)) \to \pi_1(SO(n))$ is surjective for $k \ge 2$ (in fact it's an isomorphism when $k \ge 3$). Going back to the long exact sequence of the beginning, this implies that $\pi_1(SO(k) \times SO(n-k)) \to \pi_1(SO(n))$ is surjective for $1 \le k \le n$ and $n \ge 3$, and this $\pi_1(X) = \pi_1(\widetilde{\mathrm{Gr}}_k(\mathbb{R}^n)) = 0$<|endoftext|> -TITLE: Is this possible? AB- BA=I -QUESTION [12 upvotes]: I have just started linear functionals when I faced the following problem: -If $A$ and $B$ are $n \times n$ complex matrices, show $AB - BA=\Bbb{I}$ is impossible. -Can someone help me? - -REPLY [14 votes]: For a matrix $A=[a_{ij}]$ of size $n\times n$, its trace $Tr(A)$ is defined by -$$ Tr(A)=\sum_{i=1}^n a_{ii} $$ . -You can verify it yourself that $$ Tr(AB)=Tr(BA)$$ -and that $$ Tr(A+B)=Tr(A)+Tr(B) $$ -Therefore if $AB-BA = \Bbb I$, then we have -$$n=Tr(\Bbb I)= Tr(AB-BA)= Tr(AB)-Tr(BA) = 0 -$$ -which is impossible. - -REPLY [5 votes]: you can see for example that -$$\mathrm{Tr}(AB) - \mathrm{Tr}(BA) =0\neq \mathrm{Tr}(\mathrm{I}_n)=n$$<|endoftext|> -TITLE: Probability that $2^a+3^b+5^c$ is divisible by 4 -QUESTION [11 upvotes]: If $a,b,c\in{1,2,3,4,5}$, find the probability that $2^a+3^b+5^c$ is divisible by 4. - -For a number to be divisible by $4$, the last two digits have to be divisible by $4$ -$5^c= \_~\_25$ if $c>1$ -$3^1=3,~3^2=9,~3^3=27,~3^4=81,~ 3^5=243$ -$2^1=2,~2^2=4,~2^3=8,~2^4=16,~2^5=32$ -Should I add all possibilities? Is there a simpler method? 
- -REPLY [2 votes]: $2^{\color\red{a}}+3^\color\green{b}+5^\color\magenta{c}\equiv0\pmod4\iff$ - -$\Big(\big(\color\red{a}=1\big)\wedge\big((\color\green{b}=2)\vee(\color\green{b}=4)\big)\Big)\vee$ -$\Big(\big(\color\red{a}\neq1\big)\wedge\big((\color\green{b}\neq2)\wedge(\color\green{b}\neq4)\big)\Big)$ - - -Therefore, the probability is $\dfrac{\color\red1\cdot\color\green2\cdot\color\magenta5+(5-\color\red1)\cdot(5-\color\green2)\cdot\color\magenta5}{5\cdot5\cdot5}=\dfrac{14}{25}$<|endoftext|> -TITLE: why shifting left 1 bit is the same as multiply the number by 2 -QUESTION [5 upvotes]: I have recently faced a problem .The problem is here.We know that if we represent -a decimal number in binary and move left all the bits by one. The left most bit is lost! and at the rightmost, a zero is added. -The above bit operation actually produce a number that is result of multiplication of the given number and 2. -For example, -$0001001101110010 ⇒ a = 4978(16 bit)$ ----------------- << 1 (SHIFT LEFT the bits by one bit) -$0010011011100100 ⇒ 9956$ -My question is that why it happens? Can anyone explain what the reason behind it? - -REPLY [16 votes]: There is a direct analogous when you work with base $10$. -Take the number $3$ in base $10$. Shift it left: you get $30$, which is $3 \cdot 10$ (and the factor $10$ appears because you are working with base $10$). -The same applies to base $2$. Shifting left is the same as multiplying by $2$. -This comes from the use of positional notation to denote numbers (https://en.wikipedia.org/wiki/Positional_notation). -In base $b$ ($b>1$) the second digit from the right counts $b$ times more than the first digit from the right, the third from the right counts $b$ times more than the second from the right (or $b^2$ times more than the first from the right), and so on. -When you write a number like this -$$ a_n a_{n-1} \dots a_2 a_1 a_0 $$ -(in base $b$), what you actually mean is the following -$$ a_n \cdot b^n + a_{n-1} \cdot b^{n-1} + \dots + a_2 \cdot b^2 + a_1 \cdot b^1 + a_0 \cdot b^0.$$ -With this in mind one can show that the two operations of shifting left and multiplying by $b$ are actually the same: -$$ a_n a_{n-1} \dots a_2 a_1 a_0 0 = \\ = a_n \cdot b^{n+1} + a_{n-1} \cdot b^{n} + \dots + a_2 \cdot b^3 + a_1 \cdot b^2 + a_0 \cdot b^1 + 0 \cdot b^0 =\\= b \cdot \left(a_n \cdot b^n + a_{n-1} \cdot b^{n-1} + \dots + a_2 \cdot b^2 + a_1 \cdot b^1 + a_0 \cdot b^0 \right) =\\= b \cdot (a_n a_{n-1} \dots a_2 a_1 a_0). $$<|endoftext|> -TITLE: How to decompose a matrix into the outer product of two vectors? -QUESTION [8 upvotes]: I have a matrix $M$ and I would like to find two vectors $u$ and $v$, that minimize -$$ \sum_{i,j} (M_{i,j}-u_iv_j)^2 $$ -How can I do this (numerically)? -Actually this is very simplified compared to my actual problem. For instance, I dont really have a full matrix, but only values for some range of $(i,j)$. However, I think once I know how to solve it in the general case, it should be straightforward to skip certain entries. - -REPLY [9 votes]: First, note that $\Sigma_{ij} (M_{ij}-u_iv_j)^2 = \|M-uv^T\|_F^2$, where $\|\cdot\|_F$ denotes the Frobenius norm of a matrix. If you're doing a Google search that may be helpful. -In the unconstrained case, there is an analytic solution given by the singular value decomposition. That is: let $M=U\Sigma V^T$ be the SVD of $M$, with the singular values arranged in nonincreasing order along the diagonal of $\Sigma$. 
Then the triple $(\sigma_1,u_1,v_1)$ formed from the first singular value, the first column of $U$, the first column of $V$, respectively, can be used to compute your $u$ and $v$. For instance, $u=\sigma_1 u_1$, $v=v_1$ would work, but really any $u=\alpha\sigma_1 u_1$, $v=\alpha^{-1} v_1$ for $\alpha\neq 0$ will work as well. -In the constrained case, unfortunately, you have a nonconvex optimization problem on your hands. Presumably you have a problem like this: -\begin{array}{ll} -\text{minimize} & \|M-uv^T\|_F^2 \\ -\text{subject to} & L_{ij} \leq M_{ij} \leq U_{ij} \quad \forall i,j -\end{array} -where $L$ and $U$ are lower and upper bound matrices (possibly with infinite entries where you have missing information). This is an extreme (e.g., rank-1) version of a low-rank approximation problem. -There is a variety of literature about this, but again, it's nonconvex, which means that it's not easy to solve in practice. A common technique is to alternate between minimizing over $u$ (holding $v$ fixed) and minimizing over $v$ (holding $u$ fixed). There is no guarantee this gives you the best solution though. -EDIT: A simpler way to look at the problem where you know only a portion of the elements of $M$ is that you're minimizing only over the indices that you know: that is, -$$\text{minimize} ~ \sum_{(i,j)\in\mathcal{I}} (M_{ij}-u_iv_j)^2$$ -for some index set $\mathcal{I}$. After all, for the elements you don't know, you might as well just set $M_{ij}=u_iv_j$ after you've determined the $u$ and $v$ you prefer, so they will contribute 0 to the final Frobenius objective. Alas, that doesn't make the problem any easier; it's still nonconvex.<|endoftext|> -TITLE: Why does this sequence converge to $\pi$? -QUESTION [7 upvotes]: Over at our friends at codegolf.SE, I asked a question about programs that seemed to converge to $\pi$, but didn't actually do that. -One of the answers (by soktinpk) was a solution that, although I can see many opportunities for numerical errors leading to a not-quite-$\pi$ result, I can't figure out why it seems to converge to $\pi$ in the first place. The scheme is as follows: - -Pick a number $h\ll1$. Take $s_0=1, s_1=1$. -While $s_n\ge 0$, calculate the next term as follows: -$$s_n=2s_{n-1}-(1+h^2)s_{n-2}$$ -When the above procedure terminates, calculate $p=2nh$, and behold, $p\approx\pi$! - -What causes this? Is it an accidental Taylor series, an inadvertent approximation of a sinusoid (that's my guess, perhaps it's something like $s\approx \sin(n/h)$ solved for $s=0$), or does this series have nothing to do with $\pi$ and is it just a result of people shouting LOOK IT'S $\Pi$ whenever they see 3.14...? - -REPLY [9 votes]: The characteristic equation for the recurrence relation -$$s_{n} - 2s_{n-1} + (1+h^2)s_{n-2} = 0$$ -is $\displaystyle\;\lambda^2 - 2\lambda + (1+h^2) = (\lambda-1)^2 + h^2 = 0\;$ -which has roots $1 \pm i h$. The general solution of the recurrence relation has the form -$$s_n = A (1+ih)^n + B(1-ih)^n$$ -If one impose the initial condition $s_0 = s_1 = 1$, one find $A = B = \frac12$. This leads to -$$s_n = \frac12 ((1+ih)^n + (1-ih)^n)$$ -Let $\theta = \tan^{-1}(h)$, we have -$$1 \pm ih = 1 \pm i\tan(\theta) = \frac{1}{\cos\theta}e^{\pm i\theta} = (1+\tan(\theta)^2)^{1/2} e^{i\theta}$$ -We can simplify $s_n$ as -$$s_n = \frac12 (1+h^2)^{n/2} \left( e^{in\theta} + e^{-in\theta} \right) -= (1+h^2)^{n/2} \cos(n\theta)$$ -At least for small $h$, the first $n$ that make $s_n < 0$ is one which make $n \theta > \frac{\pi}{2}$. 
-It is clear this equals to $\left\lfloor \frac{\pi}{2\theta} \right\rfloor + 1$ and hence -$$2nh = 2h\left(\left\lfloor \frac{\pi}{2\tan^{-1}(h)} \right\rfloor + 1\right)$$ -For small $h$, $\tan^{-1}(h) \approx h$. This implies $n \approx \frac{\pi}{2h} \implies 2nh \approx \pi$. -In certain sense, the computed $p \approx \pi$ because for small $h$, $1+ih \approx e^{ih}$.<|endoftext|> -TITLE: How do you find the value of $\sum_{r=0}^{44} \tan^2(2r+1)$? -QUESTION [12 upvotes]: Problem: -Find the value of $$\sum_{r=0}^{44} \tan^2(2r+1)$$ -Note: The angles here are in degrees. - -I don't know how to solve this question because trigonometric simplifications didn't get me anywhere. I think there was a method to solve this question using complex numbers which I no longer remember. Any hint/help will be appreciated. - -REPLY [5 votes]: Using Sum of tangent functions where arguments are in specific arithmetic series, -$$\tan90x=\dfrac{\binom{90}1t-\binom{90}3t^3+\cdots+\binom{90}{89}t^{89}}{\binom{90}0-\binom{90}2t^2+\cdots+\binom{90}{90}t^{90}}$$ where $t=\tan x$ -If $\tan 90x=\tan90^\circ=\infty$ -$90x=180^\circ n+90^\circ=90^\circ(2n+1)$ where $n$ is any integer -$\implies x=(2n+1)^\circ$ where $n\equiv0,1,2,\cdots,88,89\pmod{90}$ -So, the roots of $$t^{90}-\binom{90}{88}t^{88}+\cdots=0$$ -are $\tan(2n+1)^\circ$ where $n\equiv0,1,2,\cdots,88,89\pmod{90}$ -Now as $-\tan(2n+1)^\circ=\tan\{180^\circ-(2n+1)^\circ\}=\tan\{2(89-n)+1)^\circ\}$ -$$\implies\sum_{r=0}^{89}\tan^2(2n+1)^\circ=2\sum_{r=0}^{44}\tan^2(2n+1)^\circ$$ -and the roots of $$s^{45}-\binom{90}{88}s^{44}+\cdots=0$$ are $\tan^2(2n+1)^\circ$ where $n\equiv0,1,2,\cdots,88,89\pmod{90}$ -Using Vieta's formula, -$$\sum_{r=0}^{89}\tan^2(2n+1)^\circ=\binom{90}{88}=\binom{90}{90-88}=?$$ -Can you take it from here?<|endoftext|> -TITLE: Generalizing complex numbers: Is there a mathematical system isomorphic to 3 dimensional space? -QUESTION [5 upvotes]: As I understand it, complex numbers: $ax+i$ are isomorphic to two-dimensional space. -Quaternions consist of $4$ dimensions. Is that right? Wikipedia says "quaternions form a four-dimensional associative normed division algebra over the real numbers." -Is there a mathematical system generalizing complex numbers that consists of $3$ dimensions and is isomorphic to 3-dimensional space? - -REPLY [10 votes]: Not completely an answer to your question, but this might be interesting to you: -The Frobenius theorem states that up to isomorphism there are three finite-dimensional (unital) associative division algebras over the reals: the reals themselves (dimension 1), the field of complex numbers (dimension 2), and the quaternions (dimension 4). -If you're willing to give up associativity you can also add the octonions (dimension 8) to that list. See Hurwitz's theorem for that. -So depending on how similar to the complex numbers you want it to be, the answer might be a definitive no.<|endoftext|> -TITLE: Significance of the notion of equivalent actions vs. permutation isomorphic action -QUESTION [5 upvotes]: Let $G$ be a group acting on $\Delta$, and $H$ be a group acting on $\Gamma$. If there exists an isomorphism $\varphi : G \to H$ and a bijection $\psi : \Delta \to \Gamma$ such that -$$ - \psi( \alpha^g ) = \psi(\alpha)^{\varphi(g)} -$$ -for all $\alpha \in \psi, g \in G$ then the two group actions are said to be permutation isomorphic. They are called equivalent iff $G = H$ and $\varphi = \mbox{id}_G$, i.e. 
if $G$ acts on two sets $\Delta$ and $\Gamma$ and we have a bijection $\psi : \Delta \to \Gamma$ such that -$$ - \psi(\alpha^g) = \psi(\alpha)^g. -$$ -So the conditions of equivalent actions is stronger. But why we bother about that stronger notion? Where is it essential? -This question is related to my other recent question in which I asked for examples differentiating both notions. What I see there in the example provided in the answer, that just under the notion of equivalent actions we can for example conclude more specific about individual elements, for example that if it fixes a point in one action, then also in the other. So this makes it easier to count fixed points of a single element, under permutation isomorphic actions we just know that it is isomorphic to some other element fixing a point. -But are there any other causes. Any theorems that rely specifically on the notion of equivalent action, and not just permutation isomorphic groups? -I know that two actions on the cosets of some subgroup by right multiplication are equivalent iff the two subgroups are conjugate. By using that each action of a transitive group is equivalent to an action on some subgroup, this help in classifying all equivalent actions of some group. This is a nice connection, but still does not answer why we bother about classifying them, i.e. what's their significance in the first place? - -So to be more specific, do you know any theorem's were it is essential that two group actions are equivalent (either in its statement or its proof). Or any other intuitive ideas on the significance of this notion as opposed to permutation isomorphic? - -REPLY [4 votes]: Equivalence is the better condition because it comes from isomorphism in a very natural category to write down: namely, for any group $G$, there is a category $\text{Set}^G$ of $G$-sets, and two $G$-sets are isomorphic iff they are equivalent in your sense. In order to get permutation isomorphism we need to also look at the action of $\text{Aut}(G)$ on this category. -Variations of this category occur naturally in, for example, Galois theory: one way of stating the Galois correspondence is that if $k$ is a field, then the category of finite products of finite separable extensions of $k$ is equivalent to the category of finite continuous $G$-sets, where $G = \text{Gal}(k_s/k)$ is the absolute Galois group of $k$. Automorphisms of $G$ don't enter into the picture.<|endoftext|> -TITLE: Calculate $\phi(36)$, where $\phi$ is the Euler Totient function. Use this to calculate $13788 \pmod {36}$. -QUESTION [6 upvotes]: I am wondering if anyone can help me. I am trying to figure out how to - -Calculate $\phi(36)$, where $\phi$ is the Euler Totient function. Use this to calculate $13788 \pmod {36}$. - -I have an exam coming up an this will be one style of question. Can anyone please walk me through how it is done? -Thanks to SchrodingersCat I now know first part is $12$. -The second part should be along the lines of below but I do not understand how this was arrived at. -\begin{align} -13788 \pmod {36} &= 13788 \pmod {\phi(36)} \pmod {36} \\ &= 13788 \pmod {12} \pmod {36} \\ &= 138 \pmod {36} \\ &= ((132)2)2 \pmod {36} \\ &= (252)2 \pmod {36} \\ &= 132 \pmod {36} \\ &= 25 -\end{align} -Can anyone show me why it is $25$ and how do I get it? 
-TITLE: Compute $\lim_{x\to 0}\frac{x}{[x]}$
-QUESTION [7 upvotes]: When I take left hand limit of the function $\lim\limits_{x\to 0}\frac{x}{[x]}$, then $\lim\limits_{h\to 0^{-}}\frac{-h}{[-h]}=\lim_{h\to 0^{-}}\frac{-h}{-1}=0$ where $0
-TITLE: Partition of plane into disjoint circumferences
-QUESTION [5 upvotes]: 1) The Euclidean plane $\mathbb{R}^2$ is not a union of disjoint circumferences (a point is not considered a circumference of radius $0$).
-2) If we exclude $1$ point from the plane, concentric circumferences with center at this point form a desired partition.
-3) Choose $n \geq 2$ distinct points $p_1, \dots, p_n$. Is $\mathbb{R}^2 \setminus \{p_1, \dots, p_n\}$ a union of disjoint circumferences?
-
-I believe the answer to 3) is no, but I don't know how to prove it.
-Here's my proof of (1).
-Suppose such a partition exists. Let's construct a nested sequence of closed disks $A_1 \supset A_2 \supset \dots$ such that $r_n \to 0$, where $r_n$ denotes the radius of $A_n$. Let $A_1$ be any circumference together with its interior. Having chosen $A_n$, take the circumference of the partition passing through its center, and let $A_{n+1}$ be the union of this circumference with its interior. Clearly $r_{n+1} \le r_n/2$. Thus, $r_n \to 0$.
-By Cantor's intersection theorem
-$$\bigcap_{n=1}^{\infty} A_n \neq \varnothing.$$
-Choose $p \in \bigcap_{n=1}^{\infty} A_n$. If $p$ is a boundary point of some $A_n$ then $p \notin A_{n+1}$. Hence $p$ is an interior point of every $A_n$, and a boundary point of some circumference of the partition. This circumference must be contained in all $A_n$ (otherwise two circumferences of the partition would intersect), which is impossible because $r_n \to 0$.
-Any ideas on (3)? Thanks in advance.
-(1) and (2) are included for completeness.
-
-REPLY [5 votes]: Ah, this was a really fun problem! Here goes, hope I didn't mess anything up. :)
-I'll do just the case $n=2$, and leave the generalization up to you. Color a circle magenta if it encloses $p_1$ but not $p_2$, and color a circle cyan if it encloses $p_2$ but not $p_1$. Color $p_1$ itself magenta and $p_2$ itself cyan as well. Finally, color a circle neon yellow if it encloses both $p_1$ and $p_2$. By repeating the argument in (1), there is no circle enclosing neither $p_1$ nor $p_2$. Hence every point is either magenta, cyan, or neon yellow.
-Now note that given any magenta circle, its interior is completely magenta. Actually, the magenta circles are totally ordered by inclusion (since they can't intersect). So we consider two cases:
-
-If there is a maximal magenta circle (i.e. a magenta circle not contained in any other magenta circle) then the set of all magenta points is just a closed disk.
-If there is no maximal magenta circle, then the set of magenta points can also be expressed as the union over all magenta circles of their interiors. This is a union of open sets, so it is itself open.
-
-We conclude the set of magenta points is either a closed disk or an open set. Similarly for the set of cyan points. Moreover, each set of points is convex, being an increasing union of disks.
-
-To finish the problem:
-
-Suppose there are no neon yellow points. If the magenta points form a closed disk, then the cyan points are $\mathbb R^2$ minus a disk, which is not convex. Contradiction. So the magenta points must form an open set. Similarly the cyan points must form an open set. But $\mathbb R^2$ is connected, so it can't be written as the union of two disjoint nonempty open sets.
-Now suppose there are neon yellow points. We claim there is a neon yellow circle minimal by inclusion. If not, then repeat the argument of (1) to get a contradiction, since any neon yellow circle must have diameter at least the distance from $p_1$ to $p_2$. So we can find a neon yellow circle $\mathscr C$ whose interior is all magenta and cyan. Now repeat the argument of the previous part, replacing $\mathbb R^2$ by the interior of $\mathscr C$.
-
-Remark: There is a "near miss" (for $n=2$) if you look at one of the families of Apollonian circles: it's possible to cover $\mathbb R^2$ minus two points and a line.<|endoftext|>
-TITLE: Elliptical Integral that diverges at one point
-QUESTION [5 upvotes]: I have to solve the following integral $$I=\int_{\lambda_1}^yd\lambda\frac{1}{1-\lambda}\sqrt{\frac{(\lambda-\lambda_1)(\lambda-\lambda_2)(\lambda-\lambda_4)}{\lambda-\lambda_3}}$$ where $y>1>\lambda_1>\lambda_2>\lambda_3>\lambda_4>0$.
-The problem, as you can see, is that the integrand of $I$ has a non-integrable singularity at $\lambda=1$. So what I basically do to avoid the problem is to rewrite $I$ as
-$$I=\lim_{r\to1}\int_{\lambda_1}^rd\lambda\frac{1}{1-\lambda}\sqrt{\frac{(\lambda-\lambda_1)(\lambda-\lambda_2)(\lambda-\lambda_4)}{\lambda-\lambda_3}}-\lim_{r\to1}\int_{r}^yd\lambda\frac{1}{\lambda-1}\sqrt{\frac{(\lambda-\lambda_1)(\lambda-\lambda_2)(\lambda-\lambda_4)}{\lambda-\lambda_3}}$$
-I was already able to solve $$I_1=\lim_{r\to1}\int_{\lambda_1}^rd\lambda\frac{1}{1-\lambda}\sqrt{\frac{(\lambda-\lambda_1)(\lambda-\lambda_2)(\lambda-\lambda_4)}{\lambda-\lambda_3}}$$ using "P. F. Byrd and M. D. Friedman, Handbook of elliptic integrals for engineers and scientists, Vol. 67,
-Berlin: Springer (1971)".
-I only need to solve $$I_2=\int_{r}^yd\lambda\frac{1}{\lambda-1}\sqrt{\frac{(\lambda-\lambda_1)(\lambda-\lambda_2)(\lambda-\lambda_4)}{\lambda-\lambda_3}}$$ for $y>r>1$. The problem is that in this case I cannot use the handbook directly, because in order to do so I would have to rewrite $I_2$ as a sum of two integrals with one of the limits of integration being $\lambda_i$ for $i\in\{1,\dots,4\}$, and if I do that I return to the case where the integration runs through the non-integrable singularity at $1$.
-Could you please help me to solve $I_2$?
-
-REPLY [4 votes]: Looonnng-winded hint: The work below is admittedly a lot of algebra away from actually reaching an explicit final value for the desired integral, but I do think it covers the trickiest part of the derivation, with the remainder being straightforward, though tedious.
-
-
-Given $z>y>p>a>b>c>d$, define the elliptic integral
-  $$\mathcal{E}:=\int_{y}^{z}\frac{1}{x-p}\sqrt{\frac{\left(x-a\right)\left(x-b\right)\left(x-d\right)}{x-c}}\,\mathrm{d}x.$$
-
-As you pointed out in your question, since the singularity at $p$ is in the interval $(a,\infty)$, we can't write $\mathcal{E}$ as the difference of integrals $\int_{y}^{z}=\int_{a}^{z}-\int_{a}^{y}$ when the limits $y$ and $z$ are greater than $p$. Another option is to write $\mathcal{E}$ as a difference of integrals with limits at $+\infty$ instead of $a$. 
This avoids the problem of needing to integrate over a singularity, but there's a new issue in that these integrals diverge. But we can fix that.... - -First note the following partial fraction decomposition: -$$\begin{align} -\small{\frac{\left(x-a\right)\left(x-b\right)\left(x-d\right)}{x-p}} -&=\small{\frac{x^{3}-\left(a+b+d\right)x^{2}+\left(ab+ad+bd\right)x-abd}{x-p}}\\ -&=\small{x^{2}+\frac{\left(p-a-b-d\right)x^{2}+\left(ab+ad+bd\right)x-abd}{x-p}}\\ -&=\small{x^{2}+\left(p-a-b-d\right)x}\\ -&~~~~~\small{+\frac{\left(p^{2}-pa-pb-pd+ab+ad+bd\right)x-abd}{x-p}}\\ -&=\small{x^{2}+\left(p-a-b-d\right)x+\left(p^{2}-pa-pb-pd+ab+ad+bd\right)}\\ -&~~~~~\small{+\frac{\left(p^{2}-pa-pb-pd+ab+ad+bd\right)p-abd}{x-p}}\\ -&=\small{x^{2}+\left(p-a-b-d\right)x+\left(p^{2}-pa-pb-pd+ab+ad+bd\right)}\\ -&~~~~~\small{+\frac{\left(p-a\right)\left(p-b\right)\left(p-d\right)}{x-p}}.\\ -\end{align}$$ -The first step to putting the elliptic integral in something closer to standard form is rewriting the integrand through rationalization so that there's a single square-root factor in the denominator with a quartic under the radical. -Letting $Q{\left(x\right)}$ stand for the quartic, -$$Q{\left(x\right)}:=\left(x-a\right)\left(x-b\right)\left(x-c\right)\left(x-d\right),$$ -we decompose the elliptic integral $\mathcal{E}$ into a sum of four simpler integrals using the partial fraction expansion given above: -$$\begin{align} -\mathcal{E} -&=\int_{y}^{z}\frac{1}{x-p}\sqrt{\frac{\left(x-a\right)\left(x-b\right)\left(x-d\right)}{x-c}}\,\mathrm{d}x\\ -&=\int_{y}^{z}\frac{\left(x-a\right)\left(x-b\right)\left(x-d\right)}{\left(x-p\right)\sqrt{\left(x-a\right)\left(x-b\right)\left(x-c\right)\left(x-d\right)}}\,\mathrm{d}x\\ -&=\left(p^{2}-pa-pb-pd+ab+ad+bd\right)\int_{y}^{z}\frac{\mathrm{d}x}{\sqrt{Q{\left(x\right)}}}\\ -&~~~~~+\left(p-a-b-d\right)\int_{y}^{z}\frac{x}{\sqrt{Q{\left(x\right)}}}\,\mathrm{d}x+\int_{y}^{z}\frac{x^{2}}{\sqrt{Q{\left(x\right)}}}\,\mathrm{d}x\\ -&~~~~~+\left(p-a\right)\left(p-b\right)\left(p-d\right)\int_{y}^{z}\frac{\mathrm{d}x}{\left(x-p\right)\sqrt{Q{\left(x\right)}}}\\ -&=:\left(p^{2}-pa-pb-pd+ab+ad+bd\right)I^{(0)}\\ -&~~~~~+\left(p-a-b-d\right)I^{(1)}+I^{(2)}\\ -&~~~~~+\left(p-a\right)\left(p-b\right)\left(p-d\right)J^{(1)}.\\ -\end{align}$$ -It remains to reduce each of the integrals $I^{(k)}$ and $J^{(1)}$ to standard form. The $I^{(k)}$ integrals are more or less straightforward since there are no singularity concerns in the integrands, so we focus first on the evaluation of $J^{(1)}$. -The convenient thing about the integral $J^{(1)}$ is that it does converge when the integration limits go to $+\infty$, unlike the case with $\mathcal{E}$. - -Main Result: -Define $J{\left(Q;p;z\right)}$ by the improper integral, -$$J{\left(Q;p;z\right)}:=\int_{z}^{\infty}\frac{\mathrm{d}x}{\left(x-p\right)\sqrt{Q{\left(x\right)}}}.$$ -Set $\kappa:=\sqrt{\frac{\left(a-d\right)\left(b-c\right)}{\left(a-c\right)\left(b-d\right)}}\land n:=\frac{\left(p-b\right)\left(a-d\right)}{\left(p-a\right)\left(b-d\right)}\land\varphi:=\arcsin{\left(\sqrt{\frac{\left(b-d\right)\left(z-a\right)}{\left(a-d\right)\left(z-b\right)}}\right)}\land\theta:=\arcsin{\left(\sqrt{\frac{b-d}{a-d}}\right)}$. Also, let $P{\left(y\right)}$ stand for the quartic expression $P{\left(y\right)}:=\left(1-y^{2}\right)\left(1-\kappa^{2}y^{2}\right)$. 
Using the substitution
-$$\small{\sqrt{\frac{\left(b-d\right)\left(x-a\right)}{\left(a-d\right)\left(x-b\right)}}=y\implies x=\frac{a\left(b-d\right)-b\left(a-d\right)y^{2}}{b-d-\left(a-d\right)y^{2}}},$$
-the integral $J{\left(Q;p;z\right)}$ will transform as:
-$$\begin{align}
-J{\left(Q;p;z\right)}
-&=\int_{z}^{\infty}\frac{\mathrm{d}x}{\left(x-p\right)\sqrt{Q{\left(x\right)}}}\\
-&=\small{\int_{\sqrt{\frac{\left(b-d\right)\left(z-a\right)}{\left(a-d\right)\left(z-b\right)}}}^{\sqrt{\frac{\left(b-d\right)}{\left(a-d\right)}}}\frac{\left(-1\right)2\left[b-d-\left(a-d\right)y^{2}\right]\,\mathrm{d}y}{\left[\left(p-a\right)\left(b-d\right)-\left(p-b\right)\left(a-d\right)y^{2}\right]\sqrt{\left(a-c\right)\left(b-d\right)}\sqrt{P{\left(y\right)}}}}\\
-&=-\frac{2}{\left(p-b\right)\sqrt{\left(a-c\right)\left(b-d\right)}}\int_{\sqrt{\frac{\left(b-d\right)\left(z-a\right)}{\left(a-d\right)\left(z-b\right)}}}^{\sqrt{\frac{b-d}{a-d}}}\frac{\mathrm{d}y}{\sqrt{P{\left(y\right)}}}\\
-&~~~~~\small{-\frac{2\left(a-b\right)}{\left(p-a\right)\left(p-b\right)\sqrt{\left(a-c\right)\left(b-d\right)}}\int_{\sqrt{\frac{\left(b-d\right)\left(z-a\right)}{\left(a-d\right)\left(z-b\right)}}}^{\sqrt{\frac{b-d}{a-d}}}\frac{\mathrm{d}y}{\left(1-ny^{2}\right)\sqrt{P{\left(y\right)}}}}\\
-&=-\frac{2}{\left(p-b\right)\sqrt{\left(a-c\right)\left(b-d\right)}}\int_{\sin{\left(\varphi\right)}}^{\sin{\left(\theta\right)}}\frac{\mathrm{d}y}{\sqrt{\left(1-y^{2}\right)\left(1-\kappa^{2}y^{2}\right)}}\\
-&~~~~~\small{-\frac{2\left(a-b\right)}{\left(p-a\right)\left(p-b\right)\sqrt{\left(a-c\right)\left(b-d\right)}}\int_{\sin{\left(\varphi\right)}}^{\sin{\left(\theta\right)}}\frac{\mathrm{d}y}{\left(1-ny^{2}\right)\sqrt{P{\left(y\right)}}}}\\
-&=-\frac{2\left[F{\left(\theta,\kappa\right)}-F{\left(\varphi,\kappa\right)}\right]}{\left(p-b\right)\sqrt{\left(a-c\right)\left(b-d\right)}}\\
-&~~~~~-\frac{2\left(a-b\right)\left[\Pi{\left(\theta,n,\kappa\right)}-\Pi{\left(\varphi,n,\kappa\right)}\right]}{\left(p-a\right)\left(p-b\right)\sqrt{\left(a-c\right)\left(b-d\right)}}.\\
-\end{align}$$
-Note that the parameter $n$ of the elliptic integrals of the third kind in the last line above is greater than $1$, and as such these integrals will be defined by Cauchy principal values.
-The necessity for worrying about Cauchy principal values can be circumvented, however, by using the following connection formula due to Legendre.
-
-Let $0<\varphi\le\frac{\pi}{2}\land0
-TITLE: Why does this double infinite sum $\sum_{n=1}^\infty \sum_{k=n}^\infty\frac{1}{k!}$ converge to $e$?
-QUESTION [9 upvotes]: I can't seem to come to grips with the result below:
-$$S=\sum_{n=1}^\infty \sum_{k=n}^\infty\frac{1}{k!}=e$$
-which is given by Mathematica (code below) and (numerically) verified by WolframAlpha.
-In[65]:= Sum[1/k!, {n, 1, Infinity}, {k, n, Infinity}]
-
-Out[65]= E
-
-I've attempted to work it out in the following way:
-$$\begin{align*}S&=\sum_{n=1}^\infty\sum_{k=n}^\infty \frac{1}{k!}\\[1ex]
-&=\sum_{n=1}^\infty\left(\frac{1}{n!}+\frac{1}{(n+1)!}+\frac{1}{(n+2)!}+\cdots\right)\\[1ex]
-&=\sum_{n=1}^\infty\frac{1}{n!}+\sum_{n=1}^\infty\frac{1}{(n+1)!}+\sum_{n=1}^\infty\frac{1}{(n+2)!}+\cdots\\[1ex]
-&=\sum_{n=1}^\infty\frac{1}{n!}+\sum_{n=2}^\infty\frac{1}{n!}+\sum_{n=3}^\infty\frac{1}{n!}+\cdots\\[1ex]
-&=(e-1)+\left(e-1-\frac{1}{2}\right)+\left(e-1-\frac{1}{2}-\frac{1}{6}\right)+\cdots\end{align*}$$
-which doesn't appear to me to follow a telescoping pattern, but I might be wrong about that. It's not obvious to me if this actually does telescope.
-
-Edit: Changing the order of summation does wonders, as shown in the accepted answer, but I'm currently wondering if there is any possibility that the last line admits any neat telescoping argument?
-
-REPLY [16 votes]: Reverse the order of summation and this becomes
-\begin{align*}
-\sum_{k = 1}^{\infty} \sum_{n = 1}^k \frac{1}{k!} &= \sum_{k = 1}^{\infty} \frac 1 {k!} \sum_{n = 1}^k 1\\
-&= \sum_{k = 1}^{\infty} \frac{1}{k!} \cdot k \\
-&= \sum_{k = 1}^{\infty} \frac{1}{(k - 1)!} = e
-\end{align*}
-
-To understand the change of order, note that all sums here are very convergent (and positive), so I'm not going to worry about technical issues. The original sum is about fixing $n$ and summing over $k \ge n$. If you imagine writing out all the pairs of natural numbers in a grid with $k$ running horizontally and $n$ vertically, this is fixing a row and adding up every entry from the main diagonal rightwards; taken over all rows, that covers the half of the grid on or above the main diagonal.
-On the other hand, we can also describe this as summing down every column, but stopping when we get to the main diagonal.
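-For a quick numerical confirmation of the value, a minimal Python sketch (the truncation bounds are arbitrary but generous, since $1/k!$ decays super-exponentially):
-
-    import math
-    S = sum(1/math.factorial(k) for n in range(1, 51) for k in range(n, 171))
-    print(S, math.e)   # both approximately 2.718281828459045<|endoftext|>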
-TITLE: Show that $f$ does not change sign on some interval $(\beta,+\infty)$.
-QUESTION [6 upvotes]: Let $a,b,f$ be continuous functions on some interval $(\alpha,+\infty)$ such that $a,b$ have constant sign on $(\alpha,+\infty)$ and $f$ is differentiable on $(\alpha,+\infty)$. Suppose $f'=af+b$. Show that $f$ does not change sign on some interval $(\beta,+\infty)$.
-So I consider this in cases:
-Case I: $a>0,b>0$
-So I can rewrite $f=(f'-b)/a$. To the contrary, assume that given $\beta \in \mathbb{R}$ there are $c_1,c_2>\beta$ such that $f(c_1)>0$ and $f(c_2)<0$. Then by the intermediate value theorem there is $c\in(c_1,c_2)$ such that $f(c)=0$. So $f'(c)=b(c)>0$. But after that I was stuck. Even a periodic function like the sine function has the property I derived. So how do I prove the result?
-
-REPLY [2 votes]: There exists a function $A$ with $A'=a$ because $a$ is continuous. Then $$(e^{-A} f)'=e^{-A}b.$$ So the function $g(x)= e^{-A(x)}f(x)$ is monotonic. Thus, either (1): $ \exists \beta\;( x>\beta\implies g(x)\ne 0)$, and hence $x>\beta \implies f(x)\ne 0$, which implies that $f$ cannot change sign on $(\beta,\infty)$ because $f$ is continuous; or (2): $\exists \beta \;(x>\beta \implies g(x)=0)$ and hence $x>\beta\implies f(x)=0.$<|endoftext|>
-TITLE: Divergent function of ratio must be logarithm
-QUESTION [13 upvotes]: Given. Consider two functions $F(t)$ and $r(t,x)$ such that $\lim_{t\to\infty} F(t) = \infty$ and $\lim_{t\to\infty} r(t,x)$ is finite for any $x$. ($x$ and $t$ are always positive in what follows.) Suppose they sum to a function only of the ratio $x/t$:
-\begin{align}
-F(t) + r(t,x) = f (x/t) \qquad (1)
-\end{align}
-Claim. $f(x/t)$ must diverge logarithmically at large $t$. That is,
-\begin{align}
-\lim_{t\to\infty} \frac{f(x/t)}{\ln t} = const\ (\text{independent of } x) \qquad (2)
-\end{align}
-Flawed proof.
-The paper cited below does the following. Define $r(x)= \lim_{t\to\infty} r(t,x)$. Then we can write:
-\begin{align}
-f(x/t) = F(t) + r(x) + \phi(t,x) \qquad (3)
-\end{align}
-where $\phi(t,x) = r(t,x) - r(x)$ goes to zero as $t\to\infty$.
-Now it seems reasonable (this is the incorrect step!) that $F(t)+r(x)$ and $\phi(t,x)$ should separately be functions only of the ratio $x/t$, since they have different behavior at large $t$. In particular,
-\begin{align}
-F(t) + r(x) = W(x/t) \qquad (4)
-\end{align}
-It is then straightforward to show that $W$ is a logarithm, which completes the proof. 
We differentiate both sides of $(4)$ with respect to $x$:
-\begin{align}
-r'(x) = \frac{1}{t}W'(x/t) \qquad (5)
-\end{align}
-Then set $x=1, t=\frac{1}{y}$:
-\begin{align}
-r'(1) = y\ W'(y) \qquad (6)
-\end{align}
-so $W(y) = r'(1) \ln y + $ const.
-Counterexample to (4). Consider $F(t) = \ln t + \frac{1}{t}, r(t,x) = -\ln x - \frac{1}{t}.$ Then $r(x) = -\ln x$ and $F(t) + r(x) = \ln \frac{t}{x} + \frac{1}{t}$, so $(4)$ fails.
-Can this proof be saved? If not, how else can it be done?
-Reference. "A hint of renormalization" (American Journal of Physics, hep-th/0212049). See equation (30) and Appendix C. My $t$ is his $\Lambda$.
-
-REPLY [3 votes]: Under very weak conditions, we can show that $f$ diverges logarithmically. Measurability of $f$ (or $F$) is enough to obtain the result (and, if measurability is not required, then there are counterexamples). We can show that
-$$
-\lim_{t\to\infty}\frac{F(t)}{\log t}=\lim_{t\to\infty}\frac{f(x/t)}{\log t}=\textrm{const}.
-$$
-The first equality is clear from the fact that $r(t,x)=f(x/t)-F(t)$ is bounded in the limit $t\to\infty$. It is possible for the constant to be zero, in which case I would say that $f$ diverges sub-logarithmically rather than logarithmically. This is the case, for example, if $F(t)=f(t^{-1})=\log\log\max(e,t)$.
-Defining $g\colon\mathbb{R}\to\mathbb{R}$ by $g(t)=f(e^{-t})$, we need to show that $g(t)/t$ tends to a limit as $t\to\infty$.
-For any $t^\prime > 0$, we have
-$$
-g(t+t^\prime)-g(t)=f(e^{-t^\prime-t})-F(e^t)-f(e^{-t})+F(e^t)=r(e^t,e^{-t^\prime})-r(e^t,1).
-$$
-So, this converges to a finite limit as $t\to\infty$. Setting $\lambda=\lim_{t\to\infty}(g(t+1)-g(t))$, then for any $\epsilon>0$ we have $-\epsilon\le g(t+1)-g(t)-\lambda\le\epsilon$ for all large enough $t$ -- say, for all $t \ge t_0$. Setting $t_n=t_0+n$, then for each $t\ge t_0$ we can find a positive integer $n$ with $t\le t_n\le t+1$:
-\begin{align}
-\lvert g(t)-\lambda t\rvert&\le\sum_{k=1}^n\lvert g(t_k)-g(t_{k-1})-\lambda\rvert+\lambda\lvert t-n\rvert+\lvert g(t_0)\rvert+\lvert g(t)-g(t_n)\rvert\\
-&\le n\epsilon+\lambda\lvert t-t_n+t_0\rvert+\lvert g(t_0)\rvert+\lvert g(t)-g(t_n)\rvert\\
-&\le (t_n-t_0)\epsilon+\lambda\lvert t_0-1\rvert+\lvert g(t_0)\rvert+\sup_{s\in[0,1]}\lvert g(t+s)-g(t)\rvert
-\end{align}
-As long as we can show that $\sup_{s\in[0,1]}\lvert g(t+s)-g(t)\rvert$ is uniformly bounded by some $K > 0$ for all large enough $t$, the right hand side of this inequality equals $t\epsilon$ plus a bounded term. So, $\limsup_{t\to\infty}\lvert g(t)/t-\lambda\rvert$ is bounded by $\epsilon$. Taking $\epsilon$ arbitrarily small shows that $g(t)/t\to\lambda$.
-It remains to be shown that $\sup_{s\in[0,1]}\lvert g(t+s)-g(t)\rvert$ is uniformly bounded over all large enough $t$. Here, I will use measurability. First, as $g(t+s)-g(t)$ converges to a finite limit for each fixed $s$, $\limsup_{t\to\infty}\lvert g(t+s)-g(t)\rvert$ is bounded. By monotone convergence, the Lebesgue measure of $\{s\in[0,1]\colon\limsup_{t\to\infty}\lvert g(t+s)-g(t)\rvert < K\}$ tends to $1$ as $K$ goes to infinity. In particular, it is positive for some $K$. Then, using monotone convergence again, there exists a $t^*$ such that $\{s\in[0,1]\colon\sup_{t\ge t^*}\lvert g(t+s)-g(t)\rvert < K\}$ has positive measure. That is, there is an $A\subseteq[0,1]$ of positive measure such that $\lvert g(t+s)-g(t)\rvert < K$ for all $t\ge t^*$ and $s\in A$.
-Now, I use the fact that, for a set $A$ of positive measure, the sum $A+A$ contains an open interval. 
The sum of intervals of lengths $r,s$ is an interval of length $r+s$. So, $A_n\equiv\{s_1+\cdots+s_n\colon s_1,\ldots,s_n\in A\}$ contains an interval of length greater than $1$ for large enough $n$. Say, $[a,a+1]\subseteq A_n$. Then, for all $s\in[a,a+1]$, we have $s=s_1+\cdots+s_n$ for $s_1,\ldots,s_n\in A$, so
-$$
-g(t+s)-g(t)=\sum_{k=1}^n\left(g(t+s_1+\cdots+s_k)-g(t+s_1+\cdots+s_{k-1})\right)
-$$
-which is bounded by $nK$. Therefore, for $t\ge t^*+a$,
-\begin{align}
-\sup_{s\in[0,1]}\left\lvert g(t+s)-g(t)\right\rvert
-&\le\sup_{s\in[a,a+1]}\left\lvert g(t-a+s)-g(t-a)\right\rvert+\left\lvert g(t-a+a)-g(t-a)\right\rvert\\
-&\le nK+nK
-\end{align}
-which is uniformly bounded as required.<|endoftext|>
-TITLE: How to find $\lim_{x \to a}\frac{ a^nf(x)-x^nf(a)}{x-a}$
-QUESTION [6 upvotes]: Let $f:\mathbb {R} \to \mathbb{R}$ be differentiable at $x=a$; we are to evaluate the following:
-$$\lim_{x\to a}\frac{a^nf(x)-x^nf(a)}{x-a}$$
-My approach:
-$$\frac{a^nf(x)-x^nf(a)}{x-a}=x^na^n\frac{\frac{f(x)}{x^n}-\frac{f(a)}{a^n}}{x-a}$$
-Let
-$g(x)=\frac{f(x)}{x^n}$
-then $$x^na^n\frac{\frac{f(x)}{x^n}-\frac{f(a)}{a^n}}{x-a}=x^na^n\frac{g(x)-g(a)}{x-a}$$
-so that $$\lim_{x \to a}x^na^n\frac{g(x)-g(a)}{x-a}=(\lim_{x \to a}x^na^n)g'(a)=a^{2n} \left.\frac{d}{dx}\left(\frac{f(x)}{x^n}\right)\right|_{x=a}$$
-$$=a^{2n}\left(\frac{x^nf'(x)-nx^{n-1}f(x)}{x^{2n}}\right)_{x=a}=a^nf'(a)-na^{n-1}f(a)$$
-Is my attempt correct?
-
-REPLY [2 votes]: Another possible way could be Taylor expansion around $x=a$:
-$$x^n=a^n+n a^{n-1} (x-a)+O\left((x-a)^2\right)$$
-$$f(x)=f(a)+(x-a) f'(a)+O\left((x-a)^2\right)$$ Then $$a^n f(x)-x^nf(a)=(x-a) \left(a^n f'(a)-n a^{n-1}f(a)\right)+O\left((x-a)^2\right)$$<|endoftext|>
-TITLE: Lack of rigour in Spivak's Calculus book?
-QUESTION [5 upvotes]: I logged on today with this exact question: Ellipse definition
-I found it disconcerting for him to say that it was clear that $a > c$ when $a$ could be equal to $c$ (a straight line) or maybe even less than $c$ (if complex numbers are allowed). So he is assuming that we don't want a straight line, and also that complex numbers aren't allowed. Neither of those assumptions was stated or explained. I don't even know whether complex numbers would work, whether any sum at all could be arrived at. It's also not stated that the formula wouldn't work for a straight line; it's just glossed over by saying it 'clearly' couldn't be a straight line.
-I picked up Spivak's book because I had heard it was extremely rigorous, but now I'm wondering a) whether the unstated assumptions and lack of addressing conceivable possibilities are common in his book, and b) whether there were any other book recommendations to learn calculus with the requirement of rigour in mind.
-I'm a bit hesitant to continue, as I may be unable to tell whether something 'clear' to him is not clear to me due to me not understanding it properly, or due to not being aware of his assumptions. As I'm trying to learn this on my own, that's not a favourable position for me to be in.
-
-REPLY [3 votes]: I agree with you 100%: Spivak's definition is sloppy. He says:
-
-A close relative of the circle is the ellipse. This is defined as
- the set of points, the sum of whose distances from two "focus"
- points is a constant.
-
-By this definition, the line segment between the two foci is an ellipse. When he says later, "we must clearly choose $a > c$", he is contradicting his own definition.<|endoftext|>
-TITLE: Example of non-noetherian ring whose spectrum is noetherian
-QUESTION [6 upvotes]: Since the spectrum of a Noetherian ring is a Noetherian topological space, I am looking for an example of a non-Noetherian ring whose spectrum is Noetherian.
-Since most nice rings are Noetherian, I actually do not have many examples to start from. Can anyone help? Thanks!
-
-REPLY [14 votes]: The standard example here is $A=k[x_1,x_2,\dots]/(x_1^2,x_2^2,\dots)$, for $k$ a field. Since each variable $x_n$ is nilpotent, every prime must contain $I=(x_1,x_2,\dots)$. But $A/I$ is just $k$, so $I$ is already a maximal ideal. So $I$ is the only prime, and so $\operatorname{Spec}(A)$ has only one point and is obviously Noetherian. But $I$ is not finitely generated, so $A$ is not Noetherian.<|endoftext|>
-TITLE: Does there exist a prime number $p$ such that $p^2 \mid 2^{p-1}-1$?
-QUESTION [5 upvotes]: Does there exist a prime number $p$ such that $p^2 \mid 2^{p-1}-1$ ?
-
-I tried some small numbers $p$ and I think that such a prime exists, but I don't know how to prove this.
-
-REPLY [2 votes]: Actually there is.
-
-$1093$ does the job.
-
-These primes are named Wieferich primes, but we don't know if there are infinitely many.
-For more see here
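-A one-line check with Python's built-in modular exponentiation (a minimal sketch):
-
-    print(pow(2, 1092, 1093**2))   # 1, so 1093^2 divides 2^1092 - 1<|endoftext|>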
-TITLE: A combinatorial expression is equal to a binomial coefficient squared
-QUESTION [6 upvotes]: Problem: Prove for all natural numbers the following identity:
-$$\sum_{r=0}^{n}\frac{(2n)!}{(r!)^2((n-r)!)^2}=\dbinom{2n}{n}^2$$
-I have just been successful in interpreting the LHS of the above as the sum of the coefficients of those terms in the expansion of $(a+b+c+d)^{2n}$ which are of the form $(ab)^r(cd)^{n-r}$.
-I also tried writing the LHS in terms of binomial coefficients as follows:
-$$\sum_{r=0}^{n} \dbinom{2n}{r}\dbinom{2n-r}{r}\dbinom{2n-2r}{n-r}$$
-But since $r+r+(n-r)$ is not constant, the above sum cannot be interpreted as the coefficient of $x^{t}$ in some expression, where $t$ is a constant.
-So, please help me with this problem. Even hints would be appreciated.
-Also, I failed to find any combinatorial interpretation for this problem, though I am used to using double counting in combinatorial identities.
-
-REPLY [4 votes]: Rewrite $\frac{(2n)!}{(r!)^2((n-r)!)^2} = \binom{2n}{n}\binom{n}{r}\binom{n}{n-r}$.
-$$\therefore \sum_{r=0}^{n}\frac{(2n)!}{(r!)^2((n-r)!)^2}= \sum_{r=0}^{n}\binom{2n}{n}\binom{n}{r}\binom{n}{n-r} = \binom{2n}{n}\sum_{r=0}^{n}\binom{n}{r}\binom{n}{n-r}$$
-Now, $\sum_{r=0}^{n}\binom{n}{r}\binom{n}{n-r}$ is just choosing $n$ objects out of a total of $2n$ objects. Thus, $\sum_{r=0}^{n}\binom{n}{r}\binom{n}{n-r} = \binom{2n}{n}$
-$$\therefore \sum_{r=0}^{n}\frac{(2n)!}{(r!)^2((n-r)!)^2} = \binom{2n}{n}\sum_{r=0}^{n}\binom{n}{r}\binom{n}{n-r} = \binom{2n}{n}^2$$<|endoftext|>
-TITLE: How many trees on N vertices have exactly k leaves?
-QUESTION [6 upvotes]: I need help on the topic of counting labeled trees (with their nodes numbered from $1$ to $N$) with exactly $k$ leaves.
-I have thought about surjective functions that return the parent of a node, but I'm not sure how to count all of them that give me correct trees.
-Here is the source of the question: http://www-math.mit.edu/~djk/18.310/Lecture-Notes/counting_trees.html and it's not explained in this paper.
-I would be very grateful if anyone could help me with a formula and, even more important, an explanation.
-Thank you!
-
-REPLY [2 votes]: In the proof of Cayley's $n^{n-2}$ formula, for each labeled tree on $n$ vertices a code word on $[n]$ of length $n-2$ (its Prüfer sequence) is generated. A vertex is a leaf iff it does not appear in this code word. You therefore have to count the number of code words in which exactly $n-k$ different numbers appear. The result can be expressed in terms of Stirling numbers: choosing which $n-k$ labels appear and counting the sequences that use each of them at least once gives $\binom{n}{k}\,(n-k)!\left\{{n-2 \atop n-k}\right\}$ such trees.
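-For small $N$ this can be checked by brute force over Prüfer sequences (a minimal Python sketch; feasible only for small $N$, since it enumerates all $N^{N-2}$ sequences):
-
-    from itertools import product
-
-    def trees_with_k_leaves(n, k):
-        # Prufer bijection: sequences in {0,...,n-1}^(n-2) correspond to labeled
-        # trees on n vertices, and a vertex is a leaf iff it does not appear in
-        # the sequence, so count sequences missing exactly k of the n labels.
-        return sum(1 for seq in product(range(n), repeat=n - 2)
-                   if n - len(set(seq)) == k)
-
-    # trees_with_k_leaves(5, 2) == 60, and summing over k recovers 5**3 == 125<|endoftext|>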
-TITLE: Examples of fallacies in arithmetic and/or algebra
-QUESTION [11 upvotes]: I'm currently preparing for a talk to be delivered to a general audience, consisting primarily of undergraduate students from diverse majors. My proposed topic would be Examples of fallacies in arithmetic and/or algebra.
-So my question would be:
-
-What are some examples of arithmetic/algebraic fallacies that you know of?
-
-One example per answer please.
-Let me give my own example, which is one of my personal favorites:
-
-Let $$a = b.$$
- Multiplying both sides by $a$, we get $$a^2 = ab.$$
- Subtracting $b^2$ from both sides, we obtain $$a^2 - b^2 = ab - b^2.$$
- Factoring both sides, we have $$(a + b)(a - b) = b(a - b).$$
- Dividing both sides by $(a - b)$, $$a + b = b.$$
- Substituting $a = b$ and simplifying, $$b + b = b,$$ and $$2b = b.$$
- Dividing both sides by $b$, $$2 = 1.$$
-
-Of course, this fallacious argument breaks down because we divided by $a - b = 0$, since $a = b$ by assumption, and division by zero is not allowed.
-
-REPLY [5 votes]: I am fond of fallacies where a property of the members of a set of things, and the corresponding property of the limit of that set, are assumed to be equal. But the limit need not be a member of the set, and therefore need have nothing in common with the members of the set.
-For example, imagine a collection of line segments that goes straight up one unit and straight right one unit. The total length is two.
-Now imagine it goes up a half, right a half, up a half, right a half. Again, the length is two. And now we have something that looks like a staircase.
-Now up a third, right a third, up a third, right a third, up a third, right a third. Another staircase. Length is still two.
-Obviously as we continue this sequence the line segments more and more closely approximate a line of length root-two going diagonally. The conclusion we fallaciously reach is that two and its square root are equal.<|endoftext|>
-TITLE: Vakil FOAG 11.3.B
-QUESTION [6 upvotes]: I am thinking about how to use Krull's PIT to prove this statement (11.3.B in Vakil's notes):
-If $(A,m,k)$ is a Noetherian local ring with maximal ideal $m$, and $f \in m$, then $\dim A/(f) \geq \dim A-1$.
-What puzzles me is that Krull's PIT only gives us information about the codimension of $V(f)$. How can I know its dimension from its codimension?
-
-REPLY [8 votes]: Since the asker mentions in a comment to the other answer that this exercise is done before developing much theory of dimension of local rings, here's a short, self-contained proof. In fact, this proof can be interpreted geometrically and used almost word for word to prove problem $11.3.C$ in Vakil, that in projective space, hyperplane intersections reduce dimension by at most one, amongst other things. I'll put the geometric interpretation at the end.
-Take any maximal chain of primes in $A$, say $\mathfrak{p}_0\lneq \dots \lneq \mathfrak{p}_n$ (so that $n = \dim(A)$ and $\mathfrak{p}_n = \mathfrak{m}$). Then since $f \in \mathfrak{m}$, there is some $k$ such that $f \in \mathfrak{p}_k \setminus \mathfrak{p}_{k-1}$ (or $k = 0$). If $k = 0$, then $\dim(A/(f)) \geq n$ so we're done. 
If not, then there is some prime $\mathfrak{p}_{k-1}+(f)\leq \mathfrak{q}_{k-1}\leq \mathfrak{p}_k$ minimal with respect to this property. Then, repeating this process, there is some prime $\mathfrak{p}_{k-2}+(f) \leq \mathfrak{q}_{k-2} \leq \mathfrak{q}_{k-1}$ minimal with this property, giving a chain of primes $\mathfrak{q}_0\leq \dots \leq \mathfrak{q}_{k-1} \leq \mathfrak{p}_k \lneq \dots \lneq \mathfrak{p}_n$ which all contain $f$. We may have that $\mathfrak{q}_{k-1} = \mathfrak{p}_k$, but I claim that all the $\mathfrak{q}_i$ are distinct, which proves the theorem.
-Indeed, suppose $\mathfrak{q}_i = \mathfrak{q}_{i+1}$. Then by construction, $\mathfrak{q}_{i+1}$ is minimal over $\mathfrak{p}_i + (f)$. Then by Krull's theorem (note that this is the one and only time we apply Krull in the proof) $\mathfrak{q}_{i+1}$ has height $0$ or $1$ over $\mathfrak{p}_i$. But $\mathfrak{p}_i \lneq \mathfrak{p}_{i+1} \lneq \mathfrak{q}_{i+1}$ (where we have the second inequality by the fact that $f \in \mathfrak{q}_{i+1} \setminus \mathfrak{p}_{i+1}$), so in fact $\mathfrak{q}_{i+1}$ has height at least $2$ over $\mathfrak{p}_i$, a contradiction.
-Geometric interpretation:
-This is a purely algebraic statement, but the proof is, at its heart, geometric (and this is the more natural way to come up with it). What this amounts to is showing that the hyperplane $V(f) \subset X=\rm{Spec}(A)$ has dimension at least $\dim(A)-1$ (although in fact it proves the slightly stronger statement that $V(f)$ meets any irreducible component in codimension at most one).
-In the proof, we take $X_0\lneq \dots \lneq X_n$ a maximal chain of irreducible closed subsets in $X$. We observe that since $X_0 \subset V(f)$, all of the intersections are non-empty, and take $r$ such that $X_r$ is contained in $V(f)$ and $X_{r+1}$ isn't, so that $X_0 \lneq \dots \lneq X_r$ remains a chain of irreducible closed subsets in $V(f)$. We then pick out irreducible components $Y_i$ of $X_i\cap V(f)$ iteratively, giving a chain $X_0 \lneq \dots \lneq X_r \leq Y_{r+1}\leq \dots \leq Y_n$. We then complete the proof by showing that all of the $Y_i$ are distinct. I think it's interesting that the proof makes a lot more sense geometrically: you're really just following your nose until it's time to show that the $Y_i$ are distinct, which seems very hard to do geometrically. Recall that in the algebraic proof, this was where we used Krull's principal ideal theorem and it was relatively easy; the hard part was coming up with the strategy, which turns out to be the "obvious" strategy geometrically.
-Remark: We only actually used that $A$ was a local ring in one place, to show that the hyperplane intersections were non-empty (algebraically, to show that some prime contained $f$). The local property is really not the key thing for this argument, it's the non-empty intersection. In both algebraic and geometric versions we used this non-emptiness to find somewhere to start (the $\mathfrak{p}_i$ and the $X_i$) and this is what allows us to generalise the proof.<|endoftext|>
-TITLE: Zariski topology on $\mathbb{C}[X, Y]$
-QUESTION [6 upvotes]: For a commutative ring $A$, let Spec$(A)$ be the set of prime ideals. A topology on Spec$(A)$ is defined by the closed sets
-$$
-\mathcal{V}(T) = \lbrace \mathfrak{p} \in \text{Spec}(A) \vert T \subseteq \mathfrak{p} \rbrace
-$$
-for some $T \subseteq A$, called the Zariski topology.
-We studied some basic properties of this topology in class (e.g. 
it is $\mathtt{T}_0$ and sober) and worked out the rather trivial examples where $A$ is a field or a principal ideal domain. More complicated examples, like $\mathbb{Z}[T]$ or $\mathbb{C}[X, Y]$, were left as an exercise.
-I have no idea how to begin to answer the question 'describe the Zariski topology on Spec$(A)$' for these more complicated rings. I understand the theory, but what can be said about this topology in these cases? As a more general question: how does one analyse the Zariski topology of a given ring?
-This is not a homework assignment; I'm just trying to get a better grip on what I have to study. I have gotten no further than a few remarks:
-
-Both rings are domains, hence $\lbrace 0 \rbrace$ is the minimum of Spec$(A)$ in both cases.
-By the Nullstellensatz, we know that the maximal ideals of $\mathbb{C}[X, Y]$ are of the form $(X - a, Y - b)$. Also the prime ideals of height one must be principal ideals, generated by an irreducible polynomial.
-In $\mathbb{Z}[T]$, the prime ideals of height one are principal ideals, generated by an irreducible polynomial or a prime in $\mathbb{Z}$.
-
-REPLY [2 votes]: If $A$ is a Noetherian ring, there is a fairly easy way to get a grip on the Zariski topology on $\operatorname{Spec}(A)$. Every closed set $C$ can be written as a union of finitely many irreducible closed sets $C=C_1\cup\dots\cup C_n$ (The proof is by Noetherian induction: if every proper closed subset of $C$ has this property, then either $C$ is irreducible so we can let $n=1$ and $C_1=C$, or else we can write $C=D\cup E$ where $D$ and $E$ are smaller closed sets, and by induction we have such a decomposition for $D$ and $E$ and we can combine them to get one for $C$.) Furthermore, this decomposition is unique if we assume it is irredundant, in the sense that no $C_i$ is contained in another $C_j$. Since $\operatorname{Spec}(A)$ is sober, each irreducible closed set is the closure of a unique point. So if we know all of the prime ideals of $A$ and how they are ordered by inclusion, we know the closed subsets of $\operatorname{Spec}(A)$: they are just finite unions of sets of the form $\overline{\{P\}}=\{Q:P\subseteq Q\}$ for prime ideals $P$.
-Let's see how this works in practice, say for $A=\mathbb{C}[X,Y]$. As you note, there are three kinds of primes in $A$. There are the maximal ideals, which are all of the form $(X-a,Y-b)$. We can identify such a maximal ideal with the point $(a,b)\in\mathbb{C}^2$, so we have a copy of $\mathbb{C}^2$ in $\operatorname{Spec}(A)$ (though not with the usual topology). There are the height one primes, which are of the form $(f(X,Y))$ for some irreducible polynomial $f(X,Y)$ (note that contrary to what you say, $f$ need not be linear--for instance, $f(X,Y)=X^2+Y^3$ is irreducible). The closure of such a point in $\operatorname{Spec}(A)$ contains all of the maximal ideals $(X-a,Y-b)$ such that $f(a,b)=0$: that is, it is the curve in $\mathbb{C}^2$ defined by the equation $f(a,b)=0$, together with the one additional "generic" point $(f)$. Finally, there is the ideal $0$, whose closure is all of $\operatorname{Spec}(A)$.
-So if a closed subset of $\operatorname{Spec}(A)$ is not the entire space, we can write it as a union of finitely many single points of $\mathbb{C}^2$ and finitely many curves in $\mathbb{C}^2$ defined by irreducible polynomials, where for each such curve we also throw in an additional "generic point" of the curve. 
This decomposition is unique if we additionally stipulate that none of our finitely many points should lie on any of the curves.<|endoftext|>
-TITLE: Gradient of the TV norm of an image
-QUESTION [6 upvotes]: Context:
-I am trying to implement an algorithm for X-ray image reconstruction called ASD-POCS that minimizes the TV norm as well as reconstructing the image. After separating the reconstruction into 2 steps, namely data reconstruction and TV norm minimization, the second part is solved by a steepest descent. The image is a 3D image (relevant for the TV-norm).
-Problem:
-The paper defines $\vec{f}$ as a $1\times N_i$ vector of voxels. Later, it defines the operator $\nabla_{\vec{f}}$ as
-$\nabla_{\vec{f}} Q(\vec f)=\sum_{i} \frac{\partial}{\partial f_i} Q(\vec f) \vec \delta_i$,
-where $\vec\delta_i$ is $1$ at the $i$-th voxel and $0$ elsewhere.
-Eventually, the algorithm to minimize the TV norm of the image is defined as a gradient descent loop where the update is defined as:
-$\vec f =\vec f -\alpha \cdot \nabla_{\vec{f}} ||\vec f ||_{TV}$
-My problem is that I don't know how to compute the $\nabla_{\vec{f}} ||\vec f ||_{TV}$ term. I know how I would compute $||\vec f ||_{TV}$, but I feel like the $\nabla_{\vec{f}}$ is actually differentiating the norm itself. If this were the 2-norm, for example, I'd know how to derive it (e.g. see here), but the TV-norm has the absolute value function, and it also depends on neighboring voxels, while the 2-norm doesn't.
-The TV norm can be written as (if I'm not wrong with my maths notation)
-$||\vec f||_{TV} =\sum_i ||\nabla \vec f_i||_{2}$
-My major being electrical engineering, I feel like there are some maths here that I'm missing in order to understand and code this "gradient of the TV-norm" operator.
-So, how can I compute that term? How can I get the gradient of the TV-norm?
-
-Disclaimer: due to my little math knowledge, I am unaware whether this is too specific a problem or whether it has a more general mathematical explanation. If the question is too specific to help anyone else, please inform me and I'll delete/edit my question.
-The paper: Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization, Emil Y. Sidky and Xiaochuan Pan
-
-REPLY [2 votes]: I know the question is old and closed, but I faced the same problem and this question helped me. In the end I came up with a different derivation, which I share below.
-Let $\mathbf{f}\in\mathbb{R}^{n_1 \cdots n_m}$ be a vector (e.g. when $m=2$, a linearized black and white image). We define $TV(\mathbf{f})$ by
-$$TV(\mathbf{f}) = \sum\limits_{i=1}^{n_1 \cdots n_m} [|\nabla \mathbf{f}|]_i.$$
-I am abusing notation somewhat, and what I mean by the weird gradient is the following:
-$$([|\nabla \mathbf{f}|]_i)^2 = ([D^1 \mathbf{f}]_i)^2 + ... + ([D^m \mathbf{f}]_i)^2$$
-$i=1,\ldots,n_1 \cdots n_m$, where $D^\ell, \ell=1,\ldots,m$ is the discrete linear operator (e.g. forward difference) in the $\ell$-th direction. In a black and white image, they would be $D^x$ and $D^y$, for example. Therefore, $|\nabla \mathbf{f}|\in\mathbb{R}^{n_1 \cdots n_m}$ represents a vector like $\mathbf{f},$ whose coordinates are the norm of the discrete local gradient of $\mathbf{f}$.
-With this understanding, let's compute the gradient of the $TV$ norm:
-$$\partial_j TV(\mathbf{f}) = \partial_j \sum\limits_{i=1}^{n_1 \cdots n_m} \sqrt{([D^1 \mathbf{f}]_i)^2 + ... + ([D^m \mathbf{f}]_i)^2}$$
-$$\partial_j TV(\mathbf{f}) = \partial_j \sum\limits_{i=1}^{n_1 \cdots n_m} \sqrt{\left(\sum\limits_{k=1}^{n_1 \cdots n_m}D^1_{ik} f_k\right)^2 + ... + \left(\sum\limits_{k=1}^{n_1 \cdots n_m}D^m_{ik} f_k\right)^2}$$
-$$\partial_j TV(\mathbf{f}) = \sum\limits_{i=1}^{n_1 \cdots n_m} \frac{\partial_j\left(\left(\sum\limits_{k=1}^{n_1 \cdots n_m}D^1_{ik} f_k\right)^2 + ... + \left(\sum\limits_{k=1}^{n_1 \cdots n_m}D^m_{ik} f_k\right)^2\right)}{2\sqrt{\left(\sum\limits_{k=1}^{n_1 \cdots n_m}D^1_{ik} f_k\right)^2 + ... + \left(\sum\limits_{k=1}^{n_1 \cdots n_m}D^m_{ik} f_k\right)^2}}$$
-$$\partial_j TV(\mathbf{f}) = \sum\limits_{i=1}^{n_1 \cdots n_m} \frac{\left(\sum\limits_{k=1}^{n_1 \cdots n_m}D^1_{ik} f_k\right) \partial_j \left(\sum\limits_{k=1}^{n_1 \cdots n_m}D^1_{ik} f_k\right) + ... + \left(\sum\limits_{k=1}^{n_1 \cdots n_m}D^m_{ik} f_k\right) \partial_j \left(\sum\limits_{k=1}^{n_1 \cdots n_m}D^m_{ik} f_k\right)}{[|\nabla \mathbf{f}|]_i}$$
-Since $\partial_{j}f_k = \delta_{jk}$, where $\delta_{jk}$ is the Kronecker delta, we have
-$$\partial_j TV(\mathbf{f}) = \sum\limits_{i=1}^{n_1 \cdots n_m} \frac{\left(\sum\limits_{k=1}^{n_1 \cdots n_m}D^1_{ik} f_k\right) \left(\sum\limits_{k=1}^{n_1 \cdots n_m}D^1_{ik} \delta_{jk}\right) + ... + \left(\sum\limits_{k=1}^{n_1 \cdots n_m}D^m_{ik} f_k\right) \left(\sum\limits_{k=1}^{n_1 \cdots n_m}D^m_{ik} \delta_{jk}\right)}{[|\nabla \mathbf{f}|]_i}$$
-$$\partial_j TV(\mathbf{f}) = \sum\limits_{i=1}^{n_1 \cdots n_m} \frac{[D^1 \mathbf{f}]_i}{[|\nabla \mathbf{f}|]_i} D^1_{ij} + ... + \frac{[D^m \mathbf{f}]_i}{[|\nabla \mathbf{f}|]_i} D^m_{ij}$$
-$$\partial_j TV(\mathbf{f}) = \left[(D^1)^T\frac{D^1 \mathbf{f}}{|\nabla \mathbf{f}|}\right]_j + ... + \left[(D^m)^T\frac{D^m \mathbf{f}}{|\nabla \mathbf{f}|}\right]_j$$
-With some abuse of notation, we obtain
-$$\partial_j TV(\mathbf{f}) = \left[ ((D^1)^T, \ldots, (D^m)^T) \cdot \left(\frac{D^1 \mathbf{f}}{|\nabla \mathbf{f}|}, \ldots, \frac{D^m \mathbf{f}}{|\nabla \mathbf{f}|}\right)\right]_j$$
-or even
-$$\nabla TV(\mathbf{f}) = ((D^1)^T, \ldots, (D^m)^T) \cdot \left(\frac{D^1 \mathbf{f}}{|\nabla \mathbf{f}|}, \ldots, \frac{D^m \mathbf{f}}{|\nabla \mathbf{f}|}\right)$$
-This is very similar to the Fréchet derivative of the continuous functional, apart from the sign:
-$$\nabla TV(f) = -\nabla \cdot\left(\frac{\nabla f}{|\nabla f|}\right)$$
-This sign is implicit in the transposition of the derivative operators. In the continuous case, the gradient and the negative divergence are adjoints of each other; the derivative and its negative are also adjoints. In the discrete setting, $D_{CD}^i = -(D_{CD}^i)^T$ using central differences and $D^i_\text{FD} = -(D^i)^T_\text{BD}$ for forward and backward differences (with symmetric boundary conditions).
-I have written some code in Julia, in which I have implemented this in a geophysical inversion setting, if you are interested!
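-A minimal NumPy sketch of the 2-D case (my own illustration, not from the paper: forward differences, replicated-edge boundaries, and a small epsilon under the square root to avoid dividing by zero where the local gradient vanishes):
-
-    import numpy as np
-
-    def tv_gradient(f, eps=1e-8):
-        # D^1 f and D^2 f: forward differences; replicating the last row/column
-        # makes the difference vanish on the boundary.
-        dx = np.diff(f, axis=0, append=f[-1:, :])
-        dy = np.diff(f, axis=1, append=f[:, -1:])
-        mag = np.sqrt(dx**2 + dy**2 + eps)       # smoothed |grad f|
-        # The adjoint of this forward difference is minus the backward
-        # difference (with a zero prepended), as noted above.
-        div = (np.diff(dx / mag, axis=0, prepend=np.zeros((1, f.shape[1])))
-             + np.diff(dy / mag, axis=1, prepend=np.zeros((f.shape[0], 1))))
-        return -div                              # grad TV(f) = -div(grad f/|grad f|)<|endoftext|>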
-TITLE: Is the category of finite sets small?
-QUESTION [7 upvotes]: In the book Category theory by Awodey, a category can have proper classes as objects and arrows. Then he has
-Definition 1.11. A category is small if both the collection of objects and the collection of arrows are sets.
-Right below the definition he claims
-
-For example, all finite categories are clearly small, as is the category $\text{Sets}_{\text{fin}}$ of finite sets and functions. (Actually, one should stipulate that the sets are only built from other finite sets, all the way down, i.e., that they are “hereditarily finite”.) 
-
-There is no set of finite sets, since there are the finite sets $\{M\}$ for any set $M$. So the second part of the first sentence is not true, right? Is that what he means by the remark in parentheses?
-
-REPLY [6 votes]: The category of finite sets is not small. However, the category of finite sets is essentially small, i.e., equivalent to a small category. This suffices, for most purposes, to apply results about small categories. Usually, category theoretic stuff is (or, should!) be invariant under equivalences of categories.
-The category of hereditarily finite sets is small; Asaf has already mentioned that they all belong to the set $V_{\omega}$ in the Von Neumann hierarchy.<|endoftext|>
-TITLE: Optimal Strategy for this schoolyard game - (Charge, block, shoot)
-QUESTION [10 upvotes]: I encountered this game when I was a kid (we called it Street Fighter back when it was all the rage) and recently saw it again with my nephews playing the same game with a different name and slightly different rules.
-The basic game is an RPS-style game where each participant selects one of the following actions per round.
-
-Charge
-Block
-Fireball (uses up 1 charge)
-Super Fireball (Uses up 5 charges)
-
-Anyone who gets hit by a fireball while charging is dead. Blocking cancels fireballs thrown at you and two fireballs fired at each other also cancel each other out. Super fireball goes through blocks and overpowers regular fireballs to automatically kill the opponent unless he super fireballs as well.
-I was wondering what the optimal strategy is, if any. During which rounds is it best to fire/block? Is it better to go for the super blast, or to catch your opponent unawares with a well-timed regular fireball?
-What would be the numbers for the 2-player case? How will this increase in complexity as the number of players increases?
-Edit: What if the number of required charges for the super fireball is increased/decreased?
-
-REPLY [2 votes]: Let $X_{a,b}$ be the expected payoff of this game for a player with $a$ charges when the other player has $b$ charges, where $+1$ is a win and $-1$ is a loss, and assuming optimal play. We will take the cost of a super fireball to be $M>1$. By symmetry, $X_{a,b}=-X_{b,a}$, and clearly $X_{a,a}=0$ for any $a$. From state $(0,0)$, both players will charge up to state $(1,1)$; similarly, from state $(M,M)$, both players will super-fireball back to $(0,0)$. Next, $X_{M,a}=+1$ for any $a
-TITLE: Draw the graph of $\sin(\pi/2-2x)$
-QUESTION [5 upvotes]: I tried to draw the graph of this function $\sin\left(\frac{\pi}{2} - 2x\right).$
-If I understand correctly, this means that we have to shrink the graph by $2$, shift the curve by $\frac{\pi}{2}$ to the left, and invert it, because we multiplied $x$ by a negative number ($-2$ in this case).
-The curve I got this way is the same as the curve of the function $\sin(2x)$. Where did I make a mistake? What transformations, and in what order, do we need to make to transform $\sin(x)$ into $\sin\left(\frac{\pi}{2} - 2x\right)$?
-
-REPLY [3 votes]: Thinking about it the way that you are saying, you can write $\sin(\pi/2-2x)=\sin(-(2x-\pi/2))=-\sin(2x-\pi/2)=-\sin(2(x-\pi/4))$, where in only the second to last step we have used a trig identity. So you shrink by a factor of $2$, shift to the right by $\pi/4$, and reflect through the $x$ axis. 
This winds up putting a maximum at $0$, which you might recognize as actually being a cosine, as you could derive using a different trig identity.<|endoftext|>
-TITLE: Product of degree of two field extensions of prime degree
-QUESTION [6 upvotes]: Let $L/K$ be a field extension. Let $\alpha, \beta \in \mathbb{C}$ be such that $[\mathbb{Q}(\alpha):\mathbb{Q}] = p$, and $[\mathbb{Q}(\beta):\mathbb{Q}] = q$, for some prime numbers $p$ and $q$. Assume $p \neq q$. Prove that:
-$$[\mathbb{Q}(\alpha,\beta):\mathbb{Q}] = [\mathbb{Q}(\alpha):\mathbb{Q}] \cdot [\mathbb{Q}(\beta):\mathbb{Q}]$$
-I have no idea how to proceed on this, besides proving that the left hand side is $pq$.
-
-REPLY [8 votes]: You have $\mathbb{Q}\subset \mathbb{Q}(\alpha) \subset \mathbb{Q}(\alpha,\beta)$, so $$[\mathbb{Q}(\alpha,\beta):\mathbb{Q}]=[\mathbb{Q}(\alpha,\beta):\mathbb{Q}(\alpha)]\cdot [\mathbb{Q}(\alpha):\mathbb{Q}],$$which means $p=[\mathbb{Q}(\alpha):\mathbb{Q}]$ divides $[\mathbb{Q}(\alpha,\beta):\mathbb{Q}]$. The same argument applies if you replace $\alpha$ with $\beta$, so $q=[\mathbb{Q}(\beta):\mathbb{Q}]$ divides $[\mathbb{Q}(\alpha,\beta):\mathbb{Q}]$. Since $p,q$ are distinct primes, $pq$ divides $[\mathbb{Q}(\alpha,\beta):\mathbb{Q}]$.
-Moreover, $$[\mathbb{Q}(\alpha)(\beta):\mathbb{Q}(\alpha)]\leq [\mathbb{Q}(\beta):\mathbb{Q}]=q$$because the minimal polynomial of $\beta$ over $\mathbb{Q}$ is also a polynomial over $\mathbb{Q}(\alpha)$. Thus $$[\mathbb{Q}(\alpha,\beta):\mathbb{Q}]=[\mathbb{Q}(\alpha,\beta):\mathbb{Q}(\alpha)]\cdot[\mathbb{Q}(\alpha):\mathbb{Q}]\leq pq.$$
-Thus we conclude that $$[\mathbb{Q}(\alpha,\beta):\mathbb{Q}]=pq=[\mathbb{Q}(\alpha):\mathbb{Q}]\cdot[\mathbb{Q}(\beta):\mathbb{Q}].$$<|endoftext|>
-TITLE: Notation of the second derivative - Where does the d go?
-QUESTION [38 upvotes]: In school I was taught that we use $\frac{du}{dx}$ as a notation for the first derivative of a function $u(x)$. I was also told that we could use the $d$ just like any variable.
-After some time we were given the notation for the second derivative and it was explained as follows:
-$$
-\frac{d(\frac{du}{dx})}{dx} = \frac{d^2 u}{dx^2}
-$$
-What I do not get here is: if we can use the $d$ just like any variable, I would get the following result:
-$$
-\frac{d(\frac{du}{dx})}{dx} =\frac{ddu}{dx\,dx} = \frac{d^2 u}{d^2 x^2}
-$$
-Apparently it is not the same as the notation we were given. A $d$ is missing.
-I have done some research on this and found some vague comments like "There are reasons for that, but you do not need to know..." or "That is mainly a notation issue, but you do not need to know further."
-So what I am asking for is: Is this really just a notation thing?
-If so, does this mean we can actually NOT use $d$ like a variable?
-If not, where does the $d$ go?
-I found this related question, but it does not really answer my specific question. So I would not see it as a duplicate, but correct me if my search has not been sufficient and there indeed is a similar question out there already.
-
-REPLY [5 votes]: Think of the meaning of $d/dx$. The $d$ in the numerator is an operator: it says, "take the infinitesimal difference of whatever follows $d/dx$". In contrast, the $dx$ in the denominator is just a number (yes, I know; mathematicians, please don't cringe): it is the infinitesimal difference in $x$.
-So $d/dx$ means "take the infinitesimal difference of whatever follows, and then divide by the number $dx$." 
-Similarly, $d^2/dx^2$ means "take the infinitesimal difference of the infinitesimal difference of whatever follows, and then divide by the square of the number $dx$."
-In short, the $d$ in the numerator is an operator, whereas in the denominator, it is part of a symbol. A slightly less ambiguous notation, as suggested by user1717828, would be to put the $(dx)$ in the denominator in parentheses, but it really isn't necessary in practice.<|endoftext|>
-TITLE: y'''+4y"+4y'=2 solution of non-homogeneous differential equation
-QUESTION [6 upvotes]: This is a non-homogeneous differential equation:
-$$y'''+4y''+4y'=2.$$
-Of course, I started with the characteristic polynomial of the homogeneous case:
-$$t^3+4t^2+4t=0$$ then $$t(t^2+4t+4)=0$$ we have:
-$$t_1=0; t_{2,3}=-2.$$ So, the solution of the homogeneous case is:
-$$y_s(x)=c_1 + c_2e^{-2x}+c_3xe^{-2x}$$
-Now, I want to continue from this point to the solution of the non-homogeneous differential equation. Please give any hint or general solution!! Thanks in advance.
-
-REPLY [3 votes]: We can make the D.E. homogeneous by taking one more derivative:
-$$y''''+4y'''+4y''=0$$
-The characteristic equation is
-$$r^2(r^2+4r+4)=0$$
-so
-$$y=c_1+c_2x+c_3e^{-2x}+c_4xe^{-2x}$$
-Substituting this back into the original equation kills the exponential and constant terms and leaves $4c_2=2$, so $c_2=\frac12$ and the general solution is
-$$y=c_1+\frac{x}{2}+c_3e^{-2x}+c_4xe^{-2x}$$<|endoftext|>
-TITLE: how to find null space basis directly by matrix calculation
-QUESTION [5 upvotes]: The problem of finding the basis for the null space of an $m \times n$ matrix $A$ is a well-known problem of linear algebra. We solve $Ax=0$ by Gaussian elimination. Either the solution is unique and $x=0$ is the only solution, or there are infinitely many solutions which can be parametrized by the non-pivotal variables. Traditionally, my advice has been to calculate $\text{rref}(A)$ then read from that the dependence of pivotal on non-pivotal variables. Next, I put those linear dependencies into $x = (x_1, \dots , x_n)$ and if $x_{i_1}, \dots , x_{i_k}$ are the non-pivotal variables we can write:
-$$ x = x_{i_1}v_1+ \cdots + x_{i_k}v_k \qquad \star$$
-where $v_1, \dots, v_k$ are linearly independent solutions of $Ax=0$. In fact, $\text{Null}(A) = \text{span}\{ v_1, \dots, v_k \}$ and $k = \text{nullity}(A) = \text{dim}(\text{Null}(A))$. In contrast, to read off the basis of the column space I need only calculate $\text{rref}(A)$ to identify the pivot columns (I suppose $\text{ref}(A)$ or less might suffice for this task). Then by the column correspondence property it follows that the pivot columns of $A$ serve as a basis for the column space of $A$. My question is this:
-
-What is the nice way to calculate the basis for the null space of $A$ without need for non-matrix calculation? In particular, I'd like an algorithm where the basis for $\text{Null}(A)$ appears explicitly.
-
-I'd like to avoid the step I outline at $\star$. When I took graduate linear algebra the professor gave a handout which explained how to do this, but I'd like a more standard reference. I'm primarily interested in the characteristic zero case, but I would be delighted by a more general answer. Thanks in advance for your insight. The ideal answer outlines the method and points to a standard reference on this calculation.
-
-REPLY [3 votes]: The procedure suggested by amd is what I was looking for. I will supplement his excellent examples with a brief explanation as to why it works. Some fundamental observations:
-$$ \text{Col}(A) = \text{Row}(A^T) \ \ \& \ \ [\text{Row}(A)]^T = \text{Col}(A)$$
-Also, for any Gaussian elimination there exists a product of elementary matrices for which the row reduction can be implemented as a matrix multiplication. That is, $\text{rref}(M) = EM$ for an invertible square matrix $E$ of the appropriate size. With these standard facts of matrix theory in mind we continue.
-Let $A$ be an $m \times n$ matrix. Construct $M = [A^T|I]$ where $I$ is the $n \times n$ identity matrix. Suppose $\text{rref}(M) = EM$. Let $B$ be a $k \times m$ matrix and $C$ be an $(n-k) \times n$ matrix for which
-$$ \text{rref}(M) = \left[ \begin{array}{c|c} B & W \\ \hline 0 & C \end{array} \right]$$
-where $W$ is a $k \times n$ matrix. Here we assume all rows in $B$ are nonzero. One special case deserves some comment: in the case that $\text{rref}(A^T)$ has no zero rows (i.e. $\text{rank}(A)=n$), there is no $0$ or $C$ block and $k=n$. Otherwise, there is at least one zero row in $\text{rref}(A^T)$, as the usual identities for row reduction reveal that $\text{rref}(A^T) = \left[ \begin{array}{c} B \\ \hline 0 \end{array}\right]$. But the nonzero rows in the rref of a matrix form a basis for the row space of that matrix. Thus the rows of $B$ form a basis for the row space of $A^T$. It follows that the transposes of the rows of $B$ form a basis for the column space of $A$. I derive this again more directly in what follows.
-We have $EM = E[A^T|I] = \left[ \begin{array}{c|c} B & W \\ \hline 0 & C \end{array} \right]$ thus $[EA^T|E] = \left[ \begin{array}{c|c} B & W \\ \hline 0 & C \end{array} \right]$. From this we read two lovely equations:
-$$ EA^T = \left[ \begin{array}{c} B \\ \hline 0 \end{array}\right] \ \ \& \ \ E = \left[ \begin{array}{c} W \\ \hline C \end{array}\right]$$
-Transposing these we obtain
-$$ AE^T = [B^T|0] \ \ \& \ \ E^T = [W^T|C^T]$$
-thus
-$$ AE^T = A[W^T|C^T] = [AW^T|AC^T] = [B^T|0] $$
-Once more we obtain two interesting equations:
-$$ (i.) \ AW^T = B^T \ \ \& \ \ (ii.) \ AC^T = 0 $$
-It follows immediately from $(i.)$ that the columns in $B^T$ are in the column space of $A$. Likewise, it follows immediately from $(ii.)$ that the columns in $C^T$ are in the null space of $A$. By construction, the columns of $B^T$ are the transposed rows of $B$, which are linearly independent due to the structure of Gaussian elimination. Furthermore, the rank of $M$ is clearly $n$ by its construction. It follows that there must be $(n-k)$ linearly independent rows in $C$. But I already argued that the rows of $B$ give a basis for $\text{Row}(A^T)$, hence $k$ is the rank of $A$ and $(n-k)$ is the nullity of $A$. This completes the proof that the columns of $C^T$ form a basis for $\text{Null}(A)$ and the columns of $B^T$ form a basis for $\text{Col}(A)$. In summary, to obtain both the basis for the column space and the null space at once we can calculate:
-$$ [\text{rref}[A^T|I]]^T = \left[ \begin{array}{c|c} B^T & 0 \\ \hline W^T & C^T \end{array} \right]$$
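-A minimal computational illustration of this procedure (a sketch assuming SymPy; the matrix is a made-up example):
-
-    import sympy as sp
-
-    A = sp.Matrix([[1, 2, 1],
-                   [2, 4, 2]])            # rank-1 example, m = 2, n = 3
-    M = A.T.row_join(sp.eye(A.cols))      # M = [A^T | I]
-    R, _ = M.rref()                       # rref(M) = [B W ; 0 C]
-    k = A.rank()
-    B = R[:k, :A.rows]                    # rows of B: basis of Row(A^T)
-    C = R[k:, A.rows:]
-    print(B.T)                            # columns form a basis of Col(A)
-    print(C.T)                            # columns form a basis of Null(A)
-    print(A * C.T)                        # the zero matrix, as claimed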
-Of course, pragmatically, it's faster for small examples to simply follow the usual calculation at $\star$ in my original question. Thanks to amd for the help.<|endoftext|>
-TITLE: Simplify $\sum\limits_{a_1=0}^{p_1}\sum\limits_{a_2=0}^{p_2}\sum\limits_{a_3=0}^{p_3}\cdots\sum\limits_{a_n=0}^{p_n}\frac{p!}{a_1!a_2!a_3!\cdots 
a_n!}$
-QUESTION [6 upvotes]: I am currently doodling around with some mathematics and stumbled across the following expression:
-$$\sum\limits_{a_1=0}^{p_1}\sum\limits_{a_2=0}^{p_2}\sum\limits_{a_3=0}^{p_3}\cdots\sum\limits_{a_n=0}^{p_n}\frac{p!}{a_1!a_2!a_3!\cdots a_n!}$$
-where $p=p_1+p_2+p_3+\cdots+p_n$ and $p_i \in \mathbb{N}$.
-This is a tedious expression to calculate and thus I was wondering whether it is possible to simplify it?
-
-REPLY [2 votes]: As displayed, the summand does still depend on the variables being summed over, through the $a_i!$; presumably the denominator is meant to be the constant $p_1!p_2!\cdots p_n!$. In that case the summand is independent of the summation variables, and since each $a_i$ ranges over $p_i+1$ values, the sum is simply the number of terms times that constant, namely
-$$(p_1+1)(p_2+1) \cdots (p_n+1) \cdot \dfrac{p!}{p_1!p_2! \cdots p_n!}$$
-If, instead, you intended to sum over all $(p_1, p_2, \dots, p_n)$ such that $p_1+p_2+\cdots+p_n=p$, then the answer is quite simply $n^p$.
-The reason for this is that $\frac{p!}{p_1!p_2!\cdots p_n!}$ is a multinomial coefficient, which counts the number of partitions of a set $A$ of size $p$ into $n$ sets $(A_1, A_2, \dots, A_n)$, where $|A_i|=p_i$ for each $1 \le i \le n$. Summing over all the possible sizes of such sets, i.e. all $(p_1, \dots, p_n)$ such that $p_1+p_2+\cdots+p_n=p$, the sum thus counts the number of partitions of a set of size $p$.
-A partition of $A$ into $n$ sets is equivalent to a function $f : A \to [n]$, where $A_i = f^{-1}(\{i\})$ is the set of elements of $A$ mapped to $i$ by $f$. Thus the sum is equal to the number of functions $A \to [n]$ when $|A|=p$, which is precisely $n^p$.<|endoftext|>
-TITLE: Why must $A_n$ be generated by the 3-cycles
-QUESTION [9 upvotes]: For my course in Group Theory, I have seen various proofs that show why the alternating group $A_n$, which consists of the elements of $S_n$ that can be expressed as a product of an even number of transpositions (i.e. 2-cycles), is generated by the 3-cycles.
-All of these proofs, and sometimes also the question, seem to guide you to showing that any element in $A_n$ can be expressed as a product of 3-cycles. Now I get the proofs up to this point.
-What I do not understand, and I hope you can help me with, is why the fact that any element in $A_n$ can be expressed as a product of 3-cycles means that $A_n$ is generated by the 3-cycles. Could it not be that, even though any element of $A_n$ can be expressed as a product of 3-cycles, if we let the 3-cycles generate a group there will be elements in that group that are not in $A_n$? I do not see why our proof (for instance given here) would exclude that possibility.
-If any of you could shed some light on this, your help is very much appreciated!
-
-REPLY [11 votes]: I don't think the resolutions already given are complete.
-Let $\sigma_1,\cdots, \sigma_s$ be the 3-cycles from $S_n$. From the given answers, it was shown that $\langle \sigma_1,\cdots, \sigma_s\rangle \subseteq A_n.$ It remains, then, to show that $A_n \subseteq \langle \sigma_1,\cdots, \sigma_s\rangle. $
-Let $\alpha \in A_n$. We know that $\alpha$ can be written as a product of transpositions, and by the parity of $\alpha$ it must be a product of an even number of transpositions.
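-Before continuing, the two cycle identities invoked in the next step can be machine-checked. Here is a small plain-Python sanity check (the helper names are mine; composition is right-to-left, so $\tau_1\tau_2$ applies $\tau_2$ first):
-
-    def perm(*cycle):                      # one cycle as a dict on {0,...,4}
-        p = {i: i for i in range(5)}
-        for a, b in zip(cycle, cycle[1:] + cycle[:1]):
-            p[a] = b
-        return p
-
-    def compose(f, g):                     # (f g)(x) = f(g(x)): apply g first
-        return {x: f[g[x]] for x in g}
-
-    # disjoint transpositions: (a1 a2)(b1 b2) = (a1 b1 a2)(b1 b2 a1)
-    assert compose(perm(0, 1), perm(2, 3)) == compose(perm(0, 2, 1), perm(2, 3, 0))
-    # one common element, a2 = b1: (a1 a2)(a2 b2) = (a2 b2 a1)
-    assert compose(perm(0, 1), perm(1, 2)) == perm(1, 2, 0)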
Now note that the product of two transpositions is always a product of 3-cycles: indeed, if $\tau_1 = (a_1,a_2), \tau_2 = (b_1,b_2)$ are disjoint, then $\tau_1\tau_2 = (a_1b_1a_2)(b_1b_2a_1);$ and if they have an element in common, say $a_2=b_1$, then $\tau_1\tau_2 = (b_1b_2a_1).$ Now, we've shown that $\alpha$ is a product of an even number of transpositions, and since the product of each consecutive pair of transpositions is a product of 3-cycles, $\alpha$ is then a product of 3-cycles. Hence, $A_n\subseteq \langle \sigma_1,\cdots, \sigma_s\rangle$<|endoftext|>
-TITLE: Triangularization of matrices over algebraically closed field
-QUESTION [6 upvotes]: A friend of mine is studying physics in his first semester and for his next assignment, he has to prove the following theorem:
-
-Let $V$ be a finite dimensional vector space over an algebraically closed field $K$. Further, let $f: V \to V$ be an endomorphism. Then there exists a basis $B$ of $V$, such that $\mbox{Mat}_{B,B}(f)$ is an upper triangular matrix.
-
-Now this theorem really stumps me, because I know two proofs of it but they are way beyond first semester. They have only introduced elementary matrix/basis manipulation, basis change theorems, and they know theorems about the existence of eigenvectors and eigenvalues ($K$ is algebraically closed so there has to exist an eigenvector). Is there a way to prove this theorem just with the tools mentioned?
-Thanks for your help!
-
-REPLY [4 votes]: Let $A$ be the matrix of $f$ with respect to a basis. You need to find an invertible matrix $S$ and an upper triangular matrix $T$ such that $A=STS^{-1}$.
-Since the base field is algebraically closed, we can find an eigenvalue $\lambda$ and an eigenvector $v$. Complete $v$ to a basis of $V$, say $\{v=v_1,v_2,\dots,v_n\}$, and let $S_0=[v_1\ v_2\ \dots\ v_n]$. Then
-$$
-S_0^{-1}AS_0=
-\begin{bmatrix}
-\lambda & \mathbf{x}^T \\
-\mathbf{0} & A_1
-\end{bmatrix}
-$$
-for some vector $\mathbf{x}\in K^{n-1}$ and some $(n-1)\times(n-1)$ matrix $A_1$. By the induction hypothesis, there is an invertible $(n-1)\times(n-1)$ matrix $S_1$ such that
-$$
-T_1=S_1^{-1}A_1S_1
-$$
-is upper triangular. Consider
-$$
-\hat{S}_1=
-\begin{bmatrix}
-1 & \mathbf{0}^T \\
-\mathbf{0} & S_1
-\end{bmatrix}
-$$
-Then
-$$
-\hat{S}_1^{-1}=
-\begin{bmatrix}
-1 & \mathbf{0}^T \\
-\mathbf{0} & S_1^{-1}
-\end{bmatrix}
-$$
-and
-\begin{align}
-\hat{S}_1^{-1}S_0^{-1}AS_0\hat{S}_1&=
-\begin{bmatrix}
-1 & \mathbf{0}^T \\
-\mathbf{0} & S_1^{-1}
-\end{bmatrix}
-\begin{bmatrix}
-\lambda & \mathbf{x}^T \\
-\mathbf{0} & A_1
-\end{bmatrix}
-\begin{bmatrix}
-1 & \mathbf{0}^T \\
-\mathbf{0} & S_1
-\end{bmatrix}\\
-&=
-\begin{bmatrix}
-1 & \mathbf{0}^T \\
-\mathbf{0} & S_1^{-1}
-\end{bmatrix}
-\begin{bmatrix}
-\lambda & \mathbf{x}^TS_1\\
-\mathbf{0} & A_1S_1
-\end{bmatrix}
-\\
-&=
-\begin{bmatrix}
-\lambda & \mathbf{x}^TS_1\\
-\mathbf{0} & S_1^{-1}A_1S_1
-\end{bmatrix}
-\\
-&=
-\begin{bmatrix}
-\lambda & \mathbf{x}^TS_1\\
-\mathbf{0} & T_1
-\end{bmatrix}
-\end{align}
-is upper triangular.<|endoftext|>
-TITLE: What's wrong with this equal probability solution for Monty Hall Problem?
-QUESTION [10 upvotes]: I'm confused about why we should switch doors in the Monty Hall Problem, when thinking from a different perspective gives me equal probability.
-Think about this first: if we have two doors, and one car behind one of them, then we have a 50/50 chance of choosing the right door.
-Back to Monty Hall: after we pick a door, one door is opened and shows a goat, and the other door remains closed.
Let's call the door we picked A and the other closed door B. Now since 1 door has already been opened, our knowledge has changed such that the car can only be behind A or B. Therefore, the problem is equivalent to: given two closed doors (A and B) and one car, which door should be chosen (we know it's a 50/50 thing)?
-Then, not switching doors = choosing A, and switching doors = choosing B. Therefore, it seems that switching should be equally likely to win, instead of more likely.
-Another way to think: no matter which door we choose from the three, we know BEFOREHAND that we can definitely open a door with a goat in the remaining two. Therefore, showing an open door with a goat reveals nothing new about which door has the car.
-What's wrong with this thinking process? (Note that I know the argument why switching gives an advantage, and I know experiments have been done to prove that. My question is why the above thinking, which seems legit, is actually wrong.) Thanks.
-
-REPLY [2 votes]: You can see it better by imagining you have a box with $100$ balls, of which $99$ are black and only $1$ is white, which is the one you want. You grab one randomly and keep it in your hand without seeing it. Do you agree that you are $99/100$ likely to have picked a black ball, so in $99$ out of $100$ attempts (on average) the white ball would still be in the box?
-In case you agree, let's keep that ball in your hand and now suppose that another person deliberately pulls out $98$ black balls from the box. With "deliberately" I mean that that person sees what he is pulling; there is no risk that he removes the white ball by accident. In this way, there are only two balls remaining, one in your hand and one in the box, and one of them is necessarily the white.
-What do you think is the probability that the white ball is the one in the box? If you say $50$%, what happened with the $99$ out of $100$ attempts in which it was still in the box? The revelation of the $98$ black ones didn't move it from the box to your hand.
-Before the revelation of the $98$ black balls, the cases are:
-      Hand || Box
-      =============================================
-1) In 99 out of 100 attempts -> 1 black || 98 black ones and 1 white
-2) In 1 out of 100 attempts  -> 1 white || 99 black ones
-
-So, when the other person removes the $98$ black balls from the box:
-      Hand || Box
-      =============================================
-1) In 99 out of 100 attempts -> 1 black || 1 white
-2) In 1 out of 100 attempts  -> 1 white || 1 black
-
-So, it is true that you always end with two balls, one white and one black, but the important thing is that they are in two different positions (hand or box), and those two positions depend on the first selection. Moreover, that first selection determines that the white ball will end up more frequently in the "box" position than in the "hand" position.
-The way you are thinking about the Monty Hall problem is as if, since you are always going to end with two balls, it would be the same if you started with both in the box and you had to grab one. But it is not the same. One thing is the probability of getting the correct one when you randomly pick from two, and a different thing is the probability that the correct one is already set in one position or in the other.
-Note that if you randomly decide if you will pick the ball in the box or the ball in your hand, like flipping a coin, then you will get the white $50$% of the time. But that does not mean that it is $50$% of the time in the hand and $50$% in the box.
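-Before unpacking that arithmetic, here is a minimal Monte Carlo sketch of the 100-ball experiment (plain Python; the trial count is arbitrary):
-
-    import random
-
-    TRIALS = 100_000
-    hand_wins = box_wins = 0
-    for _ in range(TRIALS):
-        white = random.randrange(100)   # where the white ball is
-        hand = random.randrange(100)    # the ball you grab
-        # 98 black balls are then removed; one ball stays in the box
-        if hand == white:
-            hand_wins += 1
-        else:
-            box_wins += 1               # the white ball must be the one left
-    print(hand_wins / TRIALS, box_wins / TRIALS)   # ~0.01 and ~0.99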
It is because the extra times that you guess right picking the one from the box are compensated by the extra times you guess wrong picking the one from the hand. The $50$% $= 1/2$ is the average of the two cases:
-$$1/2 * 99/100 + 1/2 * 1/100$$
-$$= 1/2 * (99/100 + 1/100)$$
-$$= 1/2$$
-But if you always pick the ball that is in the box, your chances are:
-$$1 * 99/100 + 0 * 1/100$$
-$$= 1 * 99/100$$
-$$= 99/100$$
-The same thing occurs in Monty Hall. Since there are two incorrect doors and a correct one, it is as if the $3$ doors were $3$ balls in the box, $2$ black and $1$ white. The initial selection is like grabbing one ball at random, and after the revelation the switching door is like the other ball that was left in the box. In $2$ out of $3$ attempts you pick a wrong door (just as you would pick a black ball $2$ out of $3$ times), so in $2$ out of $3$ attempts the correct one will be the other door the host leaves closed.<|endoftext|>
-TITLE: Is there a known mathematical foundation to the concept of emergence?
-QUESTION [8 upvotes]: I'm researching many topics including emergence and chaos theory, and I cannot for the life of me find strictly mathematical treatments of the idea of emergence. Is there any form or field of mathematics that can predict the emergence of one equation from another, or from a set of equations? A simple analogy would be the "emergence" of a velocity equation by differentiating the position equation, and an acceleration equation from a velocity equation. More aptly, for example, is there any known way in which the Navier-Stokes equation can "emerge" from the equations of Schrödinger, Pauli or Dirac (or even the equations of QCD)? Some relatively "simple" transformation based upon, perhaps, a single parameter (ideally, maybe scale, energy, etc.), that can change an equation from one integrative level to an equation from a higher/lower integrative level?
-I realize this seems to be a hotly debated topic in some ways, but I cannot seem to find what I am looking for. My intuition for some reason says this may involve, among other ideas, fractional differential equations, Galois theory, fractal geometry, nested matrices, Fourier/Laplace transformations, that kind of thing. Deep down (despite my lack of formal mathematical education), I truly feel there HAS to be a relatively simple way in which equations can be transformed from small-scale dynamics to larger, emergent phenomena. Imagine having a transformation that could transform the Schrödinger equation smoothly through the Pauli equation, then the Dirac equation, on up through the Navier-Stokes equation (finally?) arriving at the Einstein Field Equations, all based upon a few (maybe even a single) parameter(s).
-
-REPLY [3 votes]: There is no known way of deriving the Navier-Stokes equation from the Boltzmann equation.
-There are attempts at putting emergence on a firm mathematical foundation in very wide generality. While the following introduces it only in the context of cellular automata, it generalises well to other domains:
-Robert S. MacKay. Space-time phases, pages 387–426. London Mathematical Society Lecture Note Series. Cambridge University Press, 2013.
-See also this paper for another method for quantifying emergence, using Shannon entropy:
-Robin C. Ball, Marina Diakonova, and Robert S. MacKay. Quantifying emergence in terms of persistent mutual information. Advances in Complex Systems, 13(03):327–338, 2010.<|endoftext|>
-TITLE: Are all rationally parametrized plane curves algebraic?
How does one find their degree?
-QUESTION [5 upvotes]: Suppose a plane curve is given parametrically by $x=p(t),y=q(t)$, where $p,q$ are rational functions. I originally assumed that this means that the parametrized curve is algebraic, i.e. that it is the zero set of a polynomial in $x$ and $y$, but now I have doubts. Take for starters the case where $p,q$ are polynomials of degrees $m$ and $n$ respectively. In simple examples like $x=t^2+2,y=t^3-1$ the algebraic equation is easy to obtain, $(x-2)^3=(y+1)^2$, and the degree of the curve appears to be $\text{max}(m,n)$ generically (of course one can make up degenerate examples like $x=t^3,y=t^3$, where it drops). But what about $x=t^5+t^2+1, y=t^6-t+1$? It's not like we can solve for $t$ in radicals, plug in, and take powers to eliminate the radicals to get an algebraic equation in terms of $x$ and $y$. And even when that's possible it is not clear what the degree will come out to be after taking all the powers.
-So my questions are: when are polynomially (or more generally rationally) parametrized plane curves algebraic? Is there an algorithm for finding the algebraic equation when there is one? Is there a way to find the algebraic degree without finding the equation? Is the degree generically $\text{max}(m,n)$ when the parametrizing polynomials have degrees $m,n$? I looked in Gibson's Elementary Geometry of Algebraic Curves, and Reid's Undergraduate Algebraic Geometry, but they only deal with converse questions, from equation to parametrization.
-EDIT: After searching I found out that converting parametric equations into implicit ones is called implicitization (the opposite of parametrization), and apparently it is a big thing in computational geometry and applications because it provides an efficient way of determining if a given point lies on a given curve (or surface). Sederberg, Anderson and Goldman give an overview of elimination theory that in particular implicitizes rationally parametrized plane curves, and explain why the degree does not increase under implicitization (p. 78).
-
-REPLY [9 votes]: Yes, such curves are algebraic. There are several ways to see this via abstract algebra (field extensions, transcendence degrees, etc.); an explicit equation can be found by elimination theory, specifically by writing $x=p(t)$, $y=q(t)$ as polynomial relations between $t$ and $x,y$ respectively, and computing the resultant of these two polynomials with respect to $t$.
-For example, for $x = t^5 + t^2 + 1$, $y = t^6 - t + 1$ the polresultant command in gp soon gives
-
--x^6 + 11*x^5 - 50*x^4 + (6*y^2 - 10*y + 124)*x^3 + (-31*y^2 + 57*y - 186)*x^2 +
-(52*y^2 - 103*y + 164)*x + (y^5 - 8*y^4 + 25*y^3 - 66*y^2 + 86*y - 71)<|endoftext|>
-TITLE: When $n!=m(m+1)(m+2)$: A Diophantine Equation
-QUESTION [8 upvotes]: I believe that I saw this problem not long ago in a book:
-Solve the Diophantine Equation $k!=n(n+1)(n+2)$, where $k,n$ are positive integers.
-However, I am no longer able to find this question, and further examination has revealed the possibility that I may have been mistaken.
-The equation appears to be similar to Brocard's Problem, an unsolved problem in mathematics.
-The only solutions appear to be $(n,k)=(1,3), (8,6), (4,5), (2,4)$.
-Is there an easy way to solve this problem?
-
-REPLY [3 votes]: The following is slightly too long for a comment, apologies for that. Let $rad(x)$ be the largest squarefree divisor of $x$.
-Conjecture (weak form of the ABC conjecture).
-There exists an absolute $k \in \mathbb{R}$ (probably $k = 2$ works) such that for all coprime $a, b, c$ with $a + b = c$ we have $rad(abc)^k > c$.
-Theorem. Assuming the above conjecture, there are at most finitely many solutions to the equation $n! = m(m+1)(m+2)$. More precisely, for every solution we have $n < 3^{3k+1}$.
-Proof. Using known bounds on the Chebyshev functions (see here), we get on the one hand,
-\begin{align*}
-\log(rad(m(m+1)(m+2))) &= \log(rad(n!)) \\
-&< 1.000028n \\
-&< n\log(3)
-\end{align*}
-On the other hand, applying the conjecture to the coprime triple $m(m+2) + 1 = (m+1)^2$ gives $\log(rad(m(m+2)(m+1)^2)) > \frac{1}{k} \log((m+1)^2) > \frac{1}{k}\log(m+1)$. We claim that these bounds contradict each other when $n > 3^{3k+1}$.
-By an explicit form of Stirling's approximation, we have $\log(n!) > n (\log(n) - 1)$. Since $n! = m(m+1)(m+2) < (m+1)^3$ we get $\log(m+1) > \frac{1}{3}n (\log(n) - 1)$. We thus obtain our desired contradiction;
-\begin{align*}
-\log(rad(m(m+1)(m+2))) &= \log(rad(m(m+2)(m+1)^2)) \\
-&> \frac{ \log(m+1)}{k} \\
-&> \frac{n(\log(n) - 1)}{3k} \\
-&> n \log(3)
-\end{align*}
-where the final inequality uses the assumption that $n > 3^{3k+1}$.
-If $k=2$ indeed works, we only have to check $3^7 = 2187$ values for $n$ to find all solutions. By being slightly more careful with estimates and using $k = 1.63$ as a value (you can find some so-called abc triples here. No example with $k \ge 1.63$ is known), only $n < 400$ have to be checked.<|endoftext|>
-TITLE: Continuous and discontinuous function problem
-QUESTION [15 upvotes]: Can the following problem be true?
-Problem: For every positive integer $n>1$ there exists a function $f(x)$ on $\mathbb{R}$ which satisfies both of the following conditions:
-(i) $f(x),\ f(f(x)),\ \ldots,\ f(\ldots f(x) \ldots)$ ($(n-1)$ times $f$) are discontinuous at every $x \in \mathbb{R}$.
-(ii) $f(\ldots(f(x))\ldots)$ ($n$ times $f$) is continuous on $\mathbb{R}$.
-
-REPLY [2 votes]: Let $A_{n}$ denote the set of real numbers which solve an integer polynomial of degree $n$, but are not roots of any integer polynomial of degree $< n$. Then $A_{1} = \mathbb{Q}$, and $A_{2}$ is the set of all irrational (real) solutions to quadratic polynomials, etc. Let
-\begin{align*}
-f(x) & = \begin{cases}
-0 & \textrm{if } x \not \in A_{1}, \ldots, A_{n}, \\
-2 & \textrm{if } x \in A_{1} , A_{2} , \\
-\sqrt{2} & \textrm{if } x \in A_{3} , \\
-2^{1 / 3} & \textrm{if } x \in A_{4} , \\
-\vdots & \vdots \\
-2^{1 / (n - 1)} & \textrm{if } x \in A_{n} .
-\end{cases}
-\end{align*}
-That should do it, I'm pretty sure.<|endoftext|>
-TITLE: Cohomology ring of $n$-torus
-QUESTION [7 upvotes]: While developing the cup product, Hatcher gives the following example:
-
-I understand most of it, but I am having trouble understanding what he means at the end by the first two sentences in the last paragraph (which begin "An equivalent statement is that..." and "Via the long exact sequence...").
-I just don't know why that statement is equivalent or why the long exact sequence implies what he says it does.
-
-REPLY [4 votes]: One has the following quotient map $q : (Y \times I, Y \times \partial I) \to (Y \times S^1, Y \times s_0)$ obtained from the equivalence relation $\sim$ on $Y \times I$ defined on $\partial I \times Y$ by $(y, 0) \sim (y, 1)$. One can prove that $q_*$ on cohomology is an isomorphism as follows.
-$$\require{AMScd}
-\begin{CD}
-H^k(Y \times I, Y \times \partial I) @<{q_*}<< H^k(Y \times S^1, s_0 \times Y)\\
-@A{\cong}AA @A{\cong}AA \\
-H^k(Y \times I/Y \times \partial I, pt) @<{\cong}<< H^k(Y \times S^1/s_0 \times Y, pt)
-\end{CD}$$
-The vertical maps are isomorphisms because $(Y \times I, Y \times \partial I)$ and $(Y \times S^1, s_0 \times Y)$ are both good pairs, as are $(I, \partial I)$ and $(S^1, pt)$. The bottom horizontal map is obtained from the obvious homeomorphism between $Y \times I/\partial I \times Y$ and $S^1 \times Y/s_0 \times Y$, hence is also an isomorphism. The diagram commutes, hence $q_*$ is also an isomorphism. By naturality of the cross product, we have the following commutative diagram,
-$$\require{AMScd}
-\begin{CD}
-H^n(Y; R) @>{\times \alpha}>> H^{n+1}(Y \times I, Y \times \partial I; R)\\
-@A\text{id}AA @A{q_*}AA \\
-H^n(Y; R) @>{\times \alpha'}>> H^{n+1}(Y \times S^1, Y \times s_0; R);
-\end{CD}$$
-where $\alpha$ is the generator of $H^1(I, \partial I; R)$ and $\alpha'$ is the generator of $H^1(S^1, s_0)$. As the top map and the two vertical maps are both isomorphisms, the bottom map is too.
-For the second statement, note that we have the following split short exact sequence
-$$0 \to H^{n+1}(Y \times S^1, Y \times \{s_0\}; R) \stackrel{\pi^*}{\to} H^{n+1}(Y \times S^1; R) \to H^{n+1}(Y \times \{s_0\}; R) \to 0$$
-obtained from the long exact sequence for $(Y \times S^1, Y \times \{s_0\})$, via the section obtained from the induced map of the retraction $r : Y \times S^1 \to Y$ on cohomology.
-According to the splitting lemma, $H^{n+1}(Y \times S^1, Y \times s_0; R) \times H^{n+1}(Y \times s_0; R) \cong H^{n+1}(Y \times S^1; R)$ where the isomorphism is given by $(\beta, \beta') \mapsto \pi^*(\beta) +r^*(\beta')$. Using the previous isomorphism, this means we have an isomorphism $$H^n(Y; R) \times H^{n+1}(Y; R) \to H^{n+1}(Y \times S^1; R)$$ given by $(\beta_1, \beta_2) \mapsto \pi^*(\alpha \times \beta_1) + r^*(\beta_2) = \alpha \times \beta_1 + 1 \times \beta_2$ by removing the $\pi^*$ because it's not just an embedding but an actual inclusion, and $r^*(\beta_2) = 1 \times \beta_2$ as the retraction $r$ is nothing but projection onto the first coordinate.<|endoftext|>
-TITLE: Find all natural numbers $x,y$ such that $3^x=2y^2+1$.
-QUESTION [11 upvotes]: Find all natural numbers $x,y$ such that
-$$3^x=2y^2+1$$
-The solutions are $(1,1)$, $(2,2)$, $(5,11)$. I found that $x$ and $y$ have the same parity, and if $x$ is odd it is of the form $4k+1$.
-
-REPLY [2 votes]: Here's a somewhat simpler proof using Pell equations
-(as Eric Towers suggested might be possible).
-If $x$ is odd, say $x=2k+1$, then $3z^2 = 2y^2 + 1$ where $z = 3^k$.
-Hence $(2y)^2 - 6z^2 = -2$ and we have
-$$
-2y + 3^k \sqrt{6} = (2+\sqrt{6}) (5 + 2\sqrt{6})^m
-$$
-for some $m=0,1,2,\ldots$. The first two solutions $m=0$, $m=1$ give
-$z=1$, $z=9$, which recovers the known solutions with $x=1$ and $x=5$.
-Suppose $k>2$. Then $z = 3^k \equiv 0 \bmod 27$, and by computing powers of
-$5 + 2 \sqrt{6}$ mod $27$, we find that $27 \mid z$ if and only if
-$m \equiv 4 \bmod 9$. But then $z$ is always a multiple of the $m=4$ solution
-(in general if $2m+1 \mid 2m'+1$ then the $m$-th $z$ divides the $m'$-th one),
-and the $m=4$ solution has $z = 8721 = 3^3 \cdot 17 \cdot 19$. So $z$ can never be
-a power of $3$ once $k>2$.
-If $x$ is even, say $x=2k$, then $2y^2 = 3^{2k} - 1 = (3^k-1) (3^k+1)$.
-Hence $3^k-1$ is either a square or twice a square, and the former is
-impossible (no square is congruent to $-1 \bmod 3$).
So $3^k-1 = 2{y'}^2$
-for some integer $y'$, giving a smaller solution $(k,y')$ to $3^x = 2y^2 + 1$.
-Continuing in this fashion eventually yields a solution with $x$ odd.
-But that solution cannot be $(x,y) = (5,11)$, because
-$(3^{10} - 1) / 2 = 2^2 \cdot 11^2 \cdot 61$ is not a square. This leaves
-the $(x,y) = (1,1)$ solution, and indeed $(x,y) = (2,2)$ also works
-as we know $-$ but $x=4$ does not, because $(3^4 - 1)/2 = 40$
-is not a square either. This completes the proof that the three solutions
-with $x=1,2,5$ are the only ones.<|endoftext|>
-TITLE: Given complex $|z_{1}| = 2\;\;,|z_{2}| = 3\;\;, |z_{3}| = 4\;\;$ : when and what is $\max$ of $|z_{1}-z_{2}|^2+|z_{2}-z_{3}|^2+|z_{3}-z_{1}|^2$
-QUESTION [9 upvotes]: If $z_{1},z_{2},z_{3}$ are three complex numbers such that $|z_{1}| = 2\;\;,|z_{2}| = 3\;\;, |z_{3}| = 4\;\;$
-Then $\max$ of $|z_{1}-z_{2}|^2+|z_{2}-z_{3}|^2+|z_{3}-z_{1}|^2$
-
-$\bf{My\; Try::}$ Let $z_{1}=2\left(\cos \alpha+i\sin \alpha\right)$ and $z_{2}=3\left(\cos \beta+i\sin \beta\right)$ and $z_{3}=4\left(\cos \gamma+i\sin \gamma\right)$
-So $$f(\alpha,\beta,\gamma) = 58-\left[12\cos(\alpha-\beta)+24\cos(\beta-\gamma)+16\cos(\gamma-\alpha)\right]$$
-Now how can I calculate $\max$ of $f(\alpha,\beta,\gamma)$
-Help me
-Thanks
-
-REPLY [2 votes]: Writing $A,B,C$ for $\alpha,\beta,\gamma$ and $S$ for the bracketed sum, we want the minimum of
-$$\dfrac S4=3\cos(A-B)+6\cos(B-C)+4\cos(C-A)$$
-$$=\cos A(3\cos B+4\cos C)+\sin A(3\sin B+4\sin C)+6\cos(B-C)$$
-$$=\sqrt{25+24\cos(B-C)}\cos\left(A-\arccos\dfrac{3\cos B+4\cos C}{\sqrt{25+24\cos(B-C)}}\right)+6\cos(B-C)$$
-$$\ge-\sqrt{25+24\cos(B-C)}+6\cos(B-C)$$
-If $\sqrt{25+24\cos(B-C)}=y$, then $1\le y\le7$ and $\cos(B-C)=\dfrac{y^2-25}{24}$, so
-$$S\ge-4y+24\cdot\dfrac{y^2-25}{24}=y^2-4y-25=(y-2)^2-29\ge-29$$
-$$\implies f(\alpha,\beta,\gamma)=58-S\le58+29=87$$
-The equality occurs if $y=2$ and $A-\arccos\dfrac{3\cos B+4\cos C}{\sqrt{25+24\cos(B-C)}}=(2n+1)\pi$ where $n$ is any integer<|endoftext|>
-TITLE: A diophantine equation with only "titanic" solutions
-QUESTION [27 upvotes]: I made a note some time ago that I had read in some book that the equation
-$$313(x^3+y^3)=t^3$$
-has positive integer solutions, but that these are so large that it would be absolutely hopeless to search for them with computers. Unfortunately, I didn't write down where I read this and if you only have the equation, the results Google gives you aren't very helpful. I could only find this so far.
-Can someone point me to an article or book where I can read more about this equation? (Preferably something with a proof of the claim above which is accessible even if you're not an expert in number theory.)
-
-REPLY [13 votes]: The curve $a^3+b^3=N$ is birationally equivalent to the elliptic curve $y^2=x^3-432N^2$ with
-\begin{equation}
-a=\frac{36N+y}{6x} \hspace{2cm} b=\frac{36N-y}{6x}
-\end{equation}
-For $N=313^2$, the curve has rank $1$ and generator $P=(x_0,y_0)$ with
-$x_0$=426235512202934545020503360093256801131707221692968586587468/216170759226021502298882345008844433022529079715666681
-$y_0$=278275087731298331021683520315726613848790652329435004093249928083293904849586928211092140/100506794432879496007544646276171310440319758686599267034949687655666070652158579
-which give the solution Noam Elkies tabulated.
-The curve has no torsion points so all rational solutions come from points of the form $mP, \, \, m=1,2,\ldots$.
-Looking at the above transformations, a positive solution will only arise when $|y| < 36N$ since $x>0$ always.
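-Such experiments are easy to reproduce in exact rational arithmetic. Below is a minimal plain-Python sketch of the machinery (the helper names are mine); for readability it uses $N=7$ with the known point $(a,b)=(2,-1)$, but the $N=313^2$ computation is identical, starting from the generator $P$ above and stepping through $mP$ while testing whether $|y|<36N$:
-
-    from fractions import Fraction as F
-
-    N = 7                                 # a^3 + b^3 = N, with known point (2, -1)
-
-    def to_curve(a, b):                   # (a, b)  ->  (x, y) on y^2 = x^3 - 432 N^2
-        return 12 * N / F(a + b), 36 * N * F(a - b) / (a + b)
-
-    def from_curve(x, y):                 # the inverse transformation quoted above
-        return (36 * N + y) / (6 * x), (36 * N - y) / (6 * x)
-
-    def ec_double(pt):                    # tangent-line doubling on the curve
-        x, y = pt
-        lam = 3 * x * x / (2 * y)
-        x3 = lam * lam - 2 * x
-        return x3, lam * (x - x3) - y
-
-    P1 = to_curve(2, -1)                  # (84, 756)
-    a, b = from_curve(*ec_double(P1))     # (5/3, 4/3)
-    assert a ** 3 + b ** 3 == N           # a new rational point on a^3 + b^3 = 7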
-Experiments show that this first happens when $m=9$, giving a solution with roughly 6770 digits<|endoftext|>
-TITLE: Is it possible to obtain exactly 16 black cells?
-QUESTION [22 upvotes]: We are given an $18\times18$ table, all of whose cells may be black or white. Initially all the cells are colored white. We may perform the following operation:
-Choose one column or one row and change the colour of all cells in this column or row. Is it possible by repeating the operation to obtain a table with exactly $16$ black cells?
-I know that this question is based on an invariance principle, but I cannot find the invariant.
-
-REPLY [13 votes]: Well, mrprottolo gave a similar answer, but not identical, and I already typed this out, so I'll hit "Post" anyway!
-
-First of all, note that these operations commute with each other! Second, by shuffling the rows and columns as necessary, we can assume that only the first $m$ rows and the first $n$ columns are flipped.
-So how many black cells are there? Measure the two rectangles:
-$$n(18-m)+m(18-n)=18m+18n-2mn$$
-Setting that equal to $16$, we get:
-$$9m+9n-mn=8$$
-Conclude that both $m$ and $n$ are even, and neither is a multiple of $3$. So $m\geq 2$, and by symmetry, we can assume $m\leq 9$, which forces $m\leq 8$. But
-$$n=\frac{9m-8}{m-9}$$
-is negative on that range, a contradiction.<|endoftext|>
-TITLE: Polynomials with some roots whose product is 1
-QUESTION [5 upvotes]: Consider the complex coefficient polynomial equation
-\begin{eqnarray}
-x^n-\left(a_1+\binom{n}{1}\right)x^{n-1}+\cdots+(-1)^k\left(a_k+\binom{n}{k}\right)x^{n-k}+\cdots+(-1)^{n-1}\left(a_{n-1}+\binom{n}{n-1}\right)x+(-1)^n=0
-\end{eqnarray}
-By Vieta's Theorem, the product of its roots is 1. If we impose the condition that, among the $n$ roots, there exist $k$ roots (counted with multiplicity) whose product is 1, then $a_1, \cdots, a_{n-1}$ have to satisfy a polynomial equation $P(a_1, \cdots, a_{n-1})=0$, where $P\in\mathbb{C}[a_1, \cdots, a_{n-1}]$ has 0 as the constant term.
-Question: Under what condition does $P$ have a nonzero linear term?
-The following are some easy examples I have worked out.
-If $k=1$, then that means one of the roots is 1. Plugging $x=1$ into the original polynomial equation yields that $P$ can be
-\begin{eqnarray}
-\sum_{i=1}^{n-1} (-1)^ia_i
-\end{eqnarray}
-whose linear term is nonzero. For $k=2$, $n=3$ or $4$, $P$ also has a nonzero linear term.
-If $k=2$ and $n=5$, then the original polynomial equation can be factored as
-\begin{eqnarray}
-(x^3+px^2+qx-1)(x^2+rx+1)=0
-\end{eqnarray}
-Comparing coefficients, we have
-\begin{align*}
-a_1&=-p-r-5\\
-a_2&=pr+q-9\\
-a_3&=-p-qr-9\\
-a_4&=q-r-5
-\end{align*}
-According to Mathematica, $P$ is, up to a constant multiple,
-\begin{eqnarray}
--a_1^3+a_3 a_1^2+a_4 a_1^2+a_1^2+a_4^2 a_1-2 a_2 a_1+2 a_3 a_1-a_2 a_4 a_1-a_3 a_4 a_1-2 a_4 a_1-a_4^3+a_2^2+a_3^2+a_2 a_4^2+a_4^2-2 a_2 a_3+2 a_2 a_4-2 a_3 a_4\end{eqnarray}
-which has no nonzero linear term.
-For $k=3$, $n=6$, $P$ is
-\begin{eqnarray}
-a_1^3-a_4 a_1^2+3 a_1^2-4 a_2 a_1+6 a_3 a_1-12 a_4 a_1+a_3 a_5 a_1+22 a_5 a_1+a_5^3-a_3^2-a_2 a_5^2+3 a_5^2+4 a_2 a_4-12 a_2 a_5+6 a_3 a_5-4 a_4 a_5\end{eqnarray}
-which again has no nonzero linear term. My guess is that, if $k\geq 2$ and $n-k\geq 2$, then $P$ does not have a nonzero linear term, except in the case $k=2$, $n=4$.
-
-REPLY [2 votes]: Here are my thoughts on the case $k=2$:
-For a polynomial $p(x)$ let $\tilde p(x)=x^{\deg p}p(1/x)$ denote the polynomial with coefficients in reverse order.
-Your polynomial is $f(x)=(x-1)^n+xg(x)$ where $\deg g=n-2$.
-We have $\tilde f(x) = (-1)^n(x-1)^n+x\tilde g(x)$.
-Note that $f$ has two roots with product $1$ iff $f$ and $\tilde f$ have a root in common (well, we have to be careful if that common root is $1$).
-The general tool for this is to check if the resultant $\operatorname{res}(f(x),\tilde f(x))$ is zero. Expressed in terms of the coefficients of $g$ (i.e., the $(-1)^ia_i$), this resultant is a polynomial $Q\in\Bbb Z[a_1,\ldots,a_{n-1}]$.
-Trivially, $Q$ is zero if $g=(-1)^n\tilde g$. For $n$ odd this trivial case amounts to $a_1+a_{n-1}=a_2+a_{n-2}=\ldots=0$ and for $n$ even to $a_1-a_{n-1}=a_2-a_{n-2}=\ldots=0$ (where notably $a_{n/2}$ does not occur).
-This can be summarized by writing $n=2m+r$ with $r\in \{0,1\}$, and then the condition becomes that $a_k-(-1)^na_{n-k}=0$ simultaneously for $1\le k\le m$.
-Another trivial case - but for an undesired solution - is when $f(1)=0$, i.e., $g(1)=0$, i.e., $a_n-a_{n-1}\pm\ldots+(-1)^{n-1}a_1=0$.
-We conclude that
-$$ Q=(a_n-a_{n-1}\pm\ldots+(-1)^{n-1}a_1)\sum_{k=1}^m (a_k-(-1)^na_{n-k})P_k(a_1,\ldots,a_{n-1})$$
-and that the second factor, the sum, is our desired $P$.
-A linear term can occur only if one of the $P_k$ has a constant term.
-So if for such $k$ (with $2k
-TITLE: Comparison between Shannon's and Blackwell's measure of informativeness
-QUESTION [7 upvotes]: I want to compare the concept of ``precision of information'' between signals $x \in X$ and states $\omega \in \Omega$ defined by Blackwell and Shannon.
-Denote the conditional probability distribution over signals given states by $\mathcal{P}_{X|\Omega}$ and the unconditional probability distribution over states and signals by $\mathcal{P}_\Omega$ and $\mathcal{P}_X$ respectively. Let the number of states and signals be finite. Blackwell says that the conditional probability distribution $\mathcal{P}_{X|\Omega}$ is more informative than another distribution $\mathcal{\tilde{P}}_{X|\Omega}$, $\mathcal{P}_{X|\Omega} \supset \mathcal{\tilde{P}}_{X|\Omega} $, if and only if there exists a Markov matrix $M$:
-\begin{equation}
-\mathcal{P}_{X|\Omega} M = \mathcal{\tilde{P}}_{X|\Omega}
-\end{equation}
-For example, suppose the conditional probability distribution $\mathcal{P}_{X|\Omega} $ is given by the following Markov matrix:
-\begin{equation}
-\begin{bmatrix}
-1 & 0 \\
-0 & 1
-\end{bmatrix}
-\end{equation}
-Such a matrix is fully informative: the signal reveals the state with perfect accuracy. Suppose the matrix $M$ is written as:
-\begin{equation}
-M =
-\begin{bmatrix}
-1/2 & 1/2 \\
-1/2 & 1/2
-\end{bmatrix}
-\end{equation}
-The distribution $\mathcal{\tilde{P}}_{X|\Omega}$ is then very diffuse in the sense that the signals do not favour any state.
-Shannon uses a different notion of ``informativeness''. Let entropy be given by function $H$.
For the unconditional distribution over states, the entropy is:
-\begin{equation}
-H(\mathcal{P}_\Omega) = - \sum_\Omega \mathcal{P}_\Omega(\omega) \log_2 (\mathcal{P}_\Omega(\omega))
-\end{equation}
-Informativeness between signals and states is quantified by the mutual information of the unconditional probability distribution between signals and states:
-\begin{equation}
-I(\mathcal{P}_{X,\Omega}) = H(\mathcal{P}_\Omega) - \sum_X \mathcal{P}_X (x) H(\mathcal{P}_{\Omega|X} ( \ |x))
-\end{equation}
-In Shannon's notion of informativeness, $\mathcal{P}_{X,\Omega} $ is more informative than $\mathcal{\tilde{P}}_{X,\Omega} $ if $I(\mathcal{P}_{X,\Omega}) > I(\mathcal{\tilde{P}}_{X,\Omega}) $.
-What is the relationship between the two concepts of informativeness? Can one prove that an information structure which is more informative in the sense of one author is more informative in the sense of the other author?
-
-REPLY [4 votes]: Let $P_X$ and $\tilde{P}_X$ be column stochastic matrices (experiments) of dimension $n_i \times |\Omega|$, $i=1,2$. If $\exists$ a column stochastic matrix $M_{n_1\times n_2}$ s.t. $P_X=M\tilde{P}_X$ then $\tilde{P}_X$ is said to be Blackwell more informative than $P_X$. Denote by $\geq_B$ this partial ordering on left stochastic matrices (though it is more common to work with right stochastic matrices, I'm keeping the setting as in the question).
-It can be shown that $\tilde{P}_X \geq_B P_X \Rightarrow I(\tilde{P}_X) \geq I(P_X)$, i.e. Blackwell more informative implies higher Shannon mutual information, though the converse is not true.
-The implication is straightforward (but tedious) by working through the algebra: take a $\tilde{P}_X$ and an $M$, get the expression for each $P_{X,ij}$, substitute into the definitions, and work through the inequalities.
-A counterexample for the converse can be found here: Rauh et al, 2017, Coarse-graining and the Blackwell Order (references [5] and [6]).<|endoftext|>
-TITLE: Is every compact subset of $\mathbb{R}^n$ a deformation retract of some open neighborhood?
-QUESTION [5 upvotes]: Suppose $A \subset X=\mathbb{R}^n $ is compact. Is it necessary that $ \exists$ an open set $U \supset A$ such that $A$ is a deformation retract of $U$? If yes, is there a concrete construction of the retraction homotopy? I am unable to come up with a proof or a counterexample. The statement above holds in all examples that I can think of.
-
-REPLY [6 votes]: No. Consider the set $X = \{0\}\cup\{ \frac{1}{n} \mid n \in \mathbb{N}\} \subset \mathbb{R}$. As it is closed and bounded, $X$ is compact. If $U$ is an open set containing $X$, then $U_0$, the connected component of $U$ containing $0$, also contains $\frac{1}{n}$ for all $n \geq N$ for some $N \in \mathbb{N}$. But the connected open sets in $\mathbb{R}$ are open intervals, which are homotopy equivalent to a point, whereas $U_0$ contains infinitely many points of $X$ onto which it must deformation retract. This is a contradiction.<|endoftext|>
-TITLE: Is a general smooth rescaling of a complete vector field itself complete?
-QUESTION [13 upvotes]: $\newcommand{\Ga}{\Gamma}$
-$\newcommand{\R}{\mathbb{R}}$
-$\newcommand{\til}{\tilde}$
-$\newcommand{\M}{M}$
-$\newcommand{\ep}{\epsilon}$
-$\newcommand{\brk}[1]{\left(#1\right)} $
-$\newcommand{\R}{\mathbb{R}}$
-$\renewcommand{\pd}[2]{\frac{\partial#1}{\partial#2}}$
-Let $M$ be a smooth manifold, $X \in \Ga(TM) $.
-Assume $X$ is complete, i.e., the flow of $X$ is defined on all of $\mathbb{R} \times M$.
-I wonder what happens to the flow when $X$ is multiplied by some real positive function $f \in C^{\infty}(M)$. My guess is that the flow will still be defined for any time, and that it will be a reparametrization of the original flow (i.e., only the speed may change). In particular $fX$ will be complete.
-
-Question:
- Is this guess correct? Does it hold for non-compact manifolds? (Note that I assume throughout that $X$ is complete.)
-
-Update: As shown by Travis, when $M$ is non-compact, the scaled field need not be complete. Of course, when $M$ is compact, then any vector field is complete.
-For the compact case, I am still interested to know if there is
-a global smoothly changing reparametrization $h:\mathbb{R} \times M \to \mathbb{R} $ such that $\psi(t,p)=\phi(h(t,p),p) \forall t \in \mathbb{R} , p \in M$?
-My analysis (below) shows that if there is such a reparametrization then it's unique (since it's enough to prove uniqueness locally), but I do not know how to show there exists such a global $h$.
-(See my analysis for details about where my "procedure" gets stuck).
-
-My analysis so far:
-Let $\phi_p(t)=\phi(t,p)$ denote the $t$-time flow of $X$ from $p \in M$, i.e.
-$(1)\,\, \phi: \R \times M \to M \, , \, \dot \phi_p(t)=X(\phi_p(t))$
-Take $Y = fX$. Denote the flow of $Y$ by $\psi_p(t)$. Assume there exists a real function $h_p:\R \to \R$ such that $\psi_p(t)=\phi_p(h_p(t))$.
-Then $\dot \psi_p(t)=Y(\psi_p(t)) \Rightarrow \dot \phi_p(h_p(t))\cdot h_p'(t)=f(\psi_p(t)) \cdot X(\psi_p(t))$
-So by $(1)$ we get: $$X(\psi_p(t)) \cdot h_p'(t)=X\Big(\phi_p\big(h_p(t)\big)\Big)\cdot h_p'(t)=f(\psi_p(t)) \cdot X(\psi_p(t))$$
-So if $ X(\psi_p(t)) \neq 0$, this forces $h_p'(t)=f(\psi_p(t))=f\Big(\phi_p\big(h_p(t)\big)\Big)$
-This motivates us to analyze the following equation, $\forall p \in M$:
-$$(2) \,\, h_p(0)=0,h_p'(t)=f\Big(\phi_p\big(h_p(t)\big)\Big)$$
-We now change notations:
-Define $h:\R \times M \to \R$ via: $h(t,p)=h_p(t)$.
-Denote $\til M = \R \times M$, and consider the hypersurface $S = \{0\} \times M \subseteq \til M$.
- then $(2)$ becomes:
-$$(3) \, \, h|_S=0, \pd{}{t} h = (f \circ \phi) \big(h(t,p),p\big)$$
-The vector field $\pd{}{t} \in \Ga\brk{T \til \M}$ is nowhere tangent to $S$ (since $T_{\brk{0,p}}S=0 \oplus T_p\M$ and $\pd{}{t}(t,p)=(1,0)$).
-Denote $C^\infty\brk{\til M \times \R} \ni \til f: \til M \times \R \to \R$ via the formula:
-$$\til f((t,p),s) = (f \circ \phi)(s,p) $$, then $(3)$ becomes:
-$$ (4) \, \, h|_S=0, \pd{}{t} h = \til f \big((t,p),h(t,p)\big)$$
-The above equation is an instance of a Quasilinear Cauchy problem (on the manifold $\til M$), so we know $\forall \til p=(0,p) \in S$ there exists a unique solution in some neighbourhood $U$ of $\til p$.
-(See for instance Theorem 9.53, page 242 in John M. Lee's book "Introduction to Smooth Manifolds".)
-In the case $M$ is compact, we can proceed in the following way:
-$\forall p \in \M , (0,p) \in S \Rightarrow (0,p) \in U \Rightarrow$ there exists an open set $\til U_p \subseteq U$ which contains $(0,p)$. Hence, there exist $\ep_p \in \R \, , \, U_p \subseteq \M$ ($U_p$ open in $\M$) such that $(-\ep_p,\ep_p) \times U_p \subseteq \til U_p$. $\{U_p|p \in \M\}$ form an open cover of $\M$, hence (by compactness of $M$) there is a finite subcover $U_{p_1},\dots , U_{p_n}$.
-Define $\ep = \min\{\ep_{p_i}|i=1,\dots,n \}$. It follows immediately that $(-\ep,\ep) \times \M \subseteq U$.
-So, we have established existence of a unique solution on $(-\ep,\ep) \times \M$.
-The problem is how to continue from here.
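-(Equation $(2)$ is also easy to check numerically in a concrete case. A minimal sketch, assuming scipy and numpy: take $M=\R$, $X=\partial_x$ with flow $\phi(t,p)=p+t$, and $f(x)=2+\sin x>0$.)
-
-    import numpy as np
-    from scipy.integrate import solve_ivp
-
-    p = 0.7
-    f = lambda x: 2.0 + np.sin(x)
-    ts = np.linspace(0.0, 3.0, 31)
-
-    # flow of fX directly:  y' = f(y), y(0) = p
-    psi = solve_ivp(lambda t, y: f(y), (0, 3), [p], t_eval=ts, rtol=1e-10).y[0]
-    # the reparametrization ODE (2):  h' = f(phi(h, p)) = f(p + h), h(0) = 0
-    h = solve_ivp(lambda t, u: f(p + u), (0, 3), [0.0], t_eval=ts, rtol=1e-10).y[0]
-
-    assert np.allclose(psi, p + h, atol=1e-6)   # psi(t, p) = phi(h(t, p), p)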
-A naive approach is to define - $X = \{t \in I| \text{there exists a unique solution for $(4)$ in } (-t,t) \times \M \}$. -Look at $s= \text{sup} X$. We claim $\text{sup} X \in X$. Since $X$ is closed downward, i.e : $x \in X \Rightarrow \brk{0 \le x' < x \Rightarrow x' \in X}$ it follows that $[0,s) \subseteq X$. -It's easy to see there must be a unique solution for $(4)$ on $(-s,s) \times \M$. (If there were two different solutions, they would differ already at some $s' 0$ in terms of $\delta$ and an upper bound for $|f|$ on $(a - \delta, a + \delta)$ (and, to be clear, on $a$). -On the other hand, the conjecture in the question is true if we add one of two simple hypotheses: It follows from the dramatic-sounding Escape Lemma that if $X$ is complete and - -$|f|$ is bounded and/or -$M$ is compact, - -then $f X$ is complete. (In this latter case, this is for the simple reason that any smooth vector field on a compact manifold is complete.)<|endoftext|> -TITLE: What is a homogeneous Differential Equation? -QUESTION [13 upvotes]: In first-order ODEs, we say that a differential equation in the form $$\frac{\mathrm d y}{\mathrm d x}=f(x,y)$$ -is said to be homogeneous if the function $f(x,y)$ can be expressed in the form $f\left(\displaystyle\frac{y}{x}\right)$, and then solved by the substitution $z=\displaystyle\frac{y}{x}$. -In second-order ODEs, we say that a differential equation in the form -$$a\frac{\mathrm d^2 y}{\mathrm d x^2}+b\frac{\mathrm d y}{\mathrm d x}+cy=f(x)$$ -is said to be homogeneous if $f(x)=0$. -Is there a relation between these two? What does homogeneous mean? I thought it's when something $=0$, because in linear algebra, a system of $n$ equations is homogeneous if it is in the form $\boldsymbol{\mathrm{Ax}}=\boldsymbol 0_{n\times 1}$; but this doesn't seem to be the case for first-order ODEs. - -REPLY [2 votes]: General Homogeneity -Ibragimov A Practical Course in Differential Equations and Mathematical Modeling, §3.1.3 "Homogeneous Equations", p. 93: - -An ordinary differential equation of an arbitrary order $$F(x,y,y',…,y^{(n)})=0$$ is said to be homogeneous [in general] if it is invariant under a scaling transformation (dilation) of the independent and dependent variables […]: $$\bar{x}=a^kx,\qquad\bar{y}=a^ly,$$ where $a>0$ is a parameter not identical with 1, and $k$ and $l$ are any fixed real numbers. The invariance means that $$F(\bar{x},\bar{y},\bar{y}',…,\bar{y}^{(n)})=0,$$ where $\bar{y}'=d\bar{y}/d\bar{x}$, etc. - -Double Homogeneity -Ibid. p. 95: - -[A differential equation] is double homogeneous if […] it does not alter under the transformations $$\bar{x}=ax,\qquad\bar{y}=y,$$ and $$\bar{x}=x,\qquad\bar{y}=ay,$$ with independent positive parameters $a$ and $b$, respectively. - -Type 1: Uniform Homogeneity -Ibid. §3.1.4 "Different types of homogeneity", p. 96: - -The uniformly homogeneous equations are invariant under the uniform scaling: $$\bar{x}=ax,\qquad \bar{y}=ay$$ - -The general form of this type of homogeneous equations is (cf. MathWorld): $$\frac{dy}{dx}=F\left(\frac{y}{x}\right).$$ -Type 2: Homogeneity by Function -Ibid. p. 97: - -This type of homogeneity designates invariance […] with respect to the dilation of $y$ only: $$\bar{x}=x,\qquad \bar{y}=ay$$ - -This is the more common understanding of homogeneity: -Licker's Dictionary of Mathematics p. 108 defines a homogeneous differential equation as - -A differential equation where every scalar multiple of a solution is also a solution. - -Zwillinger's Handbook of Differential Equations p. 
6:
-
-An equation is said to be homogeneous if all terms depend linearly on the dependent variable or its derivatives. For example, the equation $y_{xx} + xy = 0$ is homogeneous while the equation $y_{xx} + y = 1$ is not.
-
-Olver's Introduction to Partial Differential Equations p. 9:
-
-A differential equation is called homogeneous linear if both sides are sums of terms, each of which involves the dependent variable $u$ or one of its derivatives to the first power; on the other hand, there is no restriction on how the terms involve the independent variables. Thus, $$\frac{d^2u}{dx^2}+\frac{u}{1+x^2}=0$$ is a homogeneous linear second-order ordinary differential equation.<|endoftext|>
-TITLE: Find the number of bicycles and tricycles
-QUESTION [8 upvotes]: Help for my son. My math is a bit rusty and I'm trying to remember how to go about answering this question: "There are 3 times as many bicycles in the playground as there are tricycles. There is a total of 81 wheels. What is the total number of bicycles and tricycles in the playground?"
-
-REPLY [2 votes]: Let $x$ be the number of tricycles, so that $3x$ is the number of bicycles. Counting wheels:
-\begin{align}
-&2(3x)+3x=81\\
-&9x=81\\
-&x=9\\
-&\text{Number of bicycles} = 3 \times 9 = 27\\
-&\text{Number of tricycles} = 9\\
-&\text{Total} = 27 + 9 = 36
-\end{align}<|endoftext|>
-TITLE: Calculus of variation with inequality constraints
-QUESTION [7 upvotes]: I want to find the function $y$ which maximizes the functional
-$J[y] = \int_0^1 g(x) y(x) dx$
-subject to $0 \leq y(x) \leq 1$ for all $x\in [0,1]$ and $\int_0^1 y(x) dx = k$ where $g$ is a strictly increasing function.
-I know that I can take care of the isoperimetric constraint quite easily using the Lagrangian
-$K[y] = \int_0^1 (g(x) y(x) + \lambda y(x)) dx$.
-I also know that I can take care of constraints of the form $y(x) \leq 1$ using a substitution such as $u^2(x) = 1 - y(x)\geq 0$ to get
-$K[u] = \int_0^1 (g(x) (1-u^2(x)) + \lambda (1-u^2(x)))dx$.
-However, I am quite at a loss with a constraint of the form $0 \leq y(x) \leq 1$, i.e., when two inequalities are involved at the same time. How can I take care of this?
-
-REPLY [3 votes]: I think it's better to write your constraint in this form:
-$0 \leq y(x) \leq 1 \implies -\frac{1}{2} \leq y(x)-\frac{1}{2} \leq \frac{1}{2} \implies \left( y(x)-\frac{1}{2} \right)^2 \leq \frac{1}{4} \implies y(x)^2-y(x)\leq 0$. Now you can use your knowledge to solve the problem.
-Good luck.<|endoftext|>
-TITLE: When $x$ is a real number and $x>1$, why is $x^x>(x+1)^{x-1}$?
-QUESTION [8 upvotes]: When $x$ is a real number and $x>1$, why is the following true?
-$x^x>(x+1)^{x-1}$
-I tried finding the minimum of $x^x-(x+1)^{x-1}$ with my limited calculus knowledge, but it soon appeared to be out of my range.
-It's good when I can understand a good answer, but I'd still be happy to come back years later when I'm better at math, so please don't hesitate to share your knowledge.
-
-REPLY [2 votes]: Let $f(x)=x\ln x-(x-1)\ln(x+1),\;$ so $\color{red}{f(1)=0}$
-and $\displaystyle f^{\prime}(x)=x\left(\frac{1}{x}\right)+\ln x-(x-1)\cdot\frac{1}{x+1}-\ln(x+1)=\frac{2}{x+1}-(\ln(x+1)-\ln x)$.
-Since $\displaystyle \ln(x+1)-\ln x<\frac{1}{x}\;\;$ (by considering the area under $y=\frac{1}{x}$ from $x$ to $x+1$),
-$\displaystyle \color{red}{f^{\prime}(x)>\frac{2}{x+1}-\frac{1}{x}=\frac{x-1}{x(x+1)}>0}\;$ for $x>1$;
-so for $x>1,\;\;$ $\color{red}{f(x)>0}\implies x\ln x>(x-1)\ln(x+1)\implies x^x>(x+1)^{x-1}$<|endoftext|>
-TITLE: Can you find the treasure??
-QUESTION [5 upvotes]: My big bro gave me this problem one week ago. I still could not solve it. Please HELP.
-STORY
-A man was just looking for items in his store room. Suddenly he found a map, which showed
-
-then it stated
-That if the man goes straight from the pole (P) to house A and turns 90° and moves to M such that PA=AM.
-Similarly, if he goes from P to B and turns 90° again to move from B to N such that BN=PB,
-THEN a straight line MN is produced. At the midpoint of MN the TREASURE IS PRESENT.
-
-THE PROBLEM
-Now the man went to find the treasure but when he reached there he was shocked,
-because the pole (P) was cut down (i.e. the pole was absent from the place).
-NOW CAN HE FIND THE TREASURE????
-thanks in advance!!
-
-REPLY [4 votes]: As the diagram suggests, any starting point and the endpoints ($A^\prime$ and $B^\prime$) of the routes through $A$ and $B$ determine pairs of congruent triangles with the (other) vertices $X$ and $Y$ of the square with diagonal $\overline{AB}$. The key midpoint property then becomes clear (and nicely related to the distance from starting point to square-corner).
-Now, starting at any point on the $Y$ side of $\overleftrightarrow{AB}$, the instructions (with appropriately-oriented turns) take you to $X$ (necessarily the midpoint of $\overline{A^\prime B^\prime}$); and vice-versa. (What about points on $\overleftrightarrow{AB}$?) So, you should check both points, just to be sure. $\square$<|endoftext|>
-TITLE: $\pi$ -Hall normal subgroup is characteristic
-QUESTION [6 upvotes]: I have this exercise in my textbook: "Show that if G is a group (not necessarily soluble), a normal $\pi$ -Hall subgroup is characteristic."
-
-I've tried to solve it in the following way.
-Let $\alpha$ be an automorphism of G; if H$\ne$H$^\alpha$ then since H is normal, HH$^\alpha$ is a subgroup of G properly containing H. But for order reasons, HH$^\alpha$ is another $\pi$ -Hall subgroup of G, which is absurd.
-Is my solution correct? Thanks in advance and sorry for my bad English!
-
-REPLY [4 votes]: Well, your sentence "for order reasons" is a bit vague. Look at the factor group $H^{\alpha}H/H$. This is a $\pi'$-group since $|G:H|$ is a $\pi'$-number. But this factor group is also isomorphic (2nd isomorphism theorem) to $H^{\alpha}/(H \cap H^{\alpha})$, and this is a $\pi$-group. Hence this factor group must be trivial, whence $H=H^{\alpha}$.<|endoftext|>
-TITLE: $2^z$ behavior when changing real and imaginary components of $z$
-QUESTION [6 upvotes]: I'm reading The Music of the Primes by du Sautoy and I've come across a section that I'm having difficulty understanding:
-
-Euler fed imaginary numbers into the function $2^x$. To his surprise, out came waves which corresponded to a particular musical note. Euler showed that the character of each note depended on the coordinates of the corresponding imaginary number. The farther north one is, the higher the pitch. The farther east, the louder the volume.
-
-My understanding here is that the results are dependent on the sine function and that the real part of the exponent affects the amplitude and the imaginary part of the exponent affects the frequency.
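-A quick numerical check of that reading (a sketch, assuming numpy):
-
-    import numpy as np
-
-    y = np.linspace(0.0, 20.0, 2001)
-    for x in (0.0, 1.0, 2.0):
-        w = np.exp((x + 1j * y) * np.log(2.0))        # 2^(x+iy)
-        assert np.allclose(abs(w), 2.0 ** x)          # amplitude 2^x, set by x
-        # the real part oscillates in y with fixed angular frequency log 2:
-        assert np.allclose(w.real, 2.0 ** x * np.cos(y * np.log(2.0)))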
-I'd like to understand this more intuitively, which I tend to get through visualization.
-So I went to Wolfram Alpha and started with graphing $2^{x+iy}$. That wasn't very helpful.
-So I tried graphing it with fixed $x$ values, and indeed, I could see the amplitude of the (now 2D) graph changing.
-I also see that $2^{x+iy}$ is also expressed as $2^x \cos(y \log(2))+i 2^x \sin(y \log(2))$ and I think I can see that changing the value of $x$ would affect the amplitude.
-I'm unable to demonstrate the frequency changing by setting $y$ to specific values.
-What am I missing? (...Other than a semester in a Complex Analysis class!)
-edit:
-So while reading more online, I came across this blog that makes a similar claim. I suspect the book of oversimplifying, but wonder if this explains what was simplified?
-
-[...] But $x^{z-1} + x^{\bar{z} - 1}$ is just a wave whose amplitude depends on the real part of $z$ and whose frequency depends on the imaginary part (i.e., if $z=a+bi$, then $x^{z-1} + x^{\bar{z}-1} = 2x^{a-1} \cos (b \log x)$) [...]
-
-(I copied this from the blog, but removed some odd \'s ...)
-Is it the inclusion of the conjugates that causes this amplitude/frequency?
-
-REPLY [2 votes]: The issue is that as far as I know, there is no canonical way of translating a "musical note" into a complex number - however you can translate it into a complex function. As SolUmbrae wrote, a note is simply a sine or cosine function defined by its amplitude, frequency and offset. To keep it simple, let's forget about the offset. A pure note (single frequency) can then be written as $f(t) = A\cdot\cos(\omega\cdot t)$, where $t$ represents time. The higher $A$ is, the louder the note will be; the higher $\omega$ is, the higher the pitch.
-You can rewrite $f(t)$ as $f(t) = \Re(A\cdot e^{i\omega t})$ (the real part of the complex function $A\cdot e^{i\omega t}$).
-This is where the conjugate comes in in your edit: $z + \overline{z} = 2\cdot\Re(z)$ for any complex number $z$, so adding the conjugate allows you to work with real numbers instead of complex ones (but this is beside the point of your original question).
-Let's forget about taking the real part and keep $\psi(t) = A\cdot e^{i\omega t}$ as a simple way to represent a sinusoidal wave function (in other words, a note).
-I'm going to make one more simplification and consider $e^z$ rather than $2^z$ as the formula to turn complex numbers into notes (this will get rid of the $\log(2)$ in the formula). We then have $e^z=e^{x+iy}=e^x\cdot e^{iy}$ as a formula for "musical notes".
-How can we reconcile our two formulas for $e^z$ and $\psi(t)$? We can see that $A$ and $\omega$ are enough to completely define $\psi(t)$ (if you know those two numbers, you can reconstruct the function) so, in that sense, $\psi(1) = A\cdot e^{i\omega}$ is enough to define a note. Putting it all together, $\psi(1) = A\cdot e^{i\omega} = e^{\log(A)}\cdot e^{i\omega} = e^{\log(A)+i\omega} = e^z$ where $z = \log(A) + i\omega$. In other words, the real part of $z$ corresponds to the (logarithm of the) amplitude of the note, and the imaginary part corresponds to the pitch. The higher $x$ is, the louder the note; the higher $y$ is, the higher the pitch.
-Note: this is by no means a rigorous demonstration; it makes several simplifying assumptions, and it conflates the exponential function and the representation of a complex number as $e^{iz}$.
But I hope it's enough to grasp the intuition of representing a sine function as a real and imaginary part.<|endoftext|>
-TITLE: Integral $\int_0^1(x(1-x))^n\frac{d^n}{dx^n}(\log x \cdot\log (1-x))dx$
-QUESTION [10 upvotes]: While playing around with the first values of the integral
-
-$$
-I_n:=-\int_0^1\left(x(1-x)\right)^n\frac{d^n}{dx^n}\left(\log x \cdot\log (1-x)\right){\rm d}x, \quad \quad n=1,2,3,\cdots,
-$$
-
-I got
-$$
-\small{\begin{align}
-I_1&=0,&I_2&=\frac19,&I_3&=0,&I_4&=\frac3{25},\\
-I_5&=0,&I_6&=\frac{40}{49},&I_7&=0,&I_8&=\frac{140}{9},\\
-I_9&=0,&I_{10}&=\frac{72576}{121},&I_{11}&=0,&I_{12}&=\frac{6652800}{169},\\
-I_{13}&=0,&I_{14}&=\color{#99004d}{3953664},&I_{15}&=0,&I_{16}&=\frac{163459296000}{289},\\
-I_{17}&=0,&I_{18}&=\frac{39520825344000}{361},&I_{19}&=0,&I_{20}&=\color{#99004d}{27583922995200},\\
-I_{21}&=0,&I_{22}&=\frac{4644631106519040000}{529},&I_{23}&=0,&I_{24}&=\color{#99004d}{3446935565184663552},\\
-I_{25}&=0,&I_{26}&=\color{#99004d}{1636721540923392000000},&I_{27}&=0,&I_{28}&=\frac{777776389315596582912000000}{841},\\
-I_{29}&=0,&I_{30}&=\cdots.
-\end{align}}
-$$
-By splitting up the initial integral into $\displaystyle \int_0^{1/2}$, $\displaystyle \int_{1/2}^1$ and by using the symmetry of the integrand, I've indeed proved that $I_{2n+1}=0, \, n=0,1,2,3,\cdots.$
-Now observing the first values above, my question is:
-
-Does the integral $I_{2n}$ take on infinitely many integer values?
-
-REPLY [8 votes]: 1) We have first with $\displaystyle D=\frac{d}{dx}$:
-$$J_n(x)=D^{n}(\log(x)\log(1-x))=\sum_{k=0}^{n}{n\choose k}D^{n-k}(\log(x))D^{k}(\log(1-x))$$ hence
-$$J_n(x)=D^{n}(\log(x))\log(1-x)+D^{n}(\log(1-x))\log (x)+\sum_{k=1}^{n-1}{n\choose k}D^{n-k}(\log(x))D^{k}(\log(1-x))$$
-Now for $m\geq 1$
-$$D^{m}(\log(x))=D^{m-1}(1/x)=(-1)^{m-1}\frac{(m-1)!}{x^m}$$
-and
-$$D^{m}(\log(1-x))=-D^{m-1}(1/(1-x))=-\frac{(m-1)!}{(1-x)^m}$$
-This gives
-$$J_n(x)=(-1)^{n-1}\frac{(n-1)!}{x^n}\log(1-x)-\frac{(n-1)!}{(1-x)^n}\log x+\sum_{k=1}^{n-1}{n\choose k}(-1)^{n-k}\frac{(k-1)!(n-k-1)!}{x^{n-k}(1-x)^{k}}$$
-2) We multiply by $x^n(1-x)^n$, and we note that
-$$\int_0^1 x^n \log (x)dx=\int_0^1(1-x)^n\log(1-x)dx=-\frac{1}{(n+1)^2}$$
-and
-$$\int_0^1 x^k(1-x)^{n-k}dx=B(k+1,n-k+1)=\frac{k!(n-k)!}{(n+1)!}$$
-we find that
-$$-I_n=(1-(-1)^{n-1})\frac{(n-1)!}{(n+1)^2}+\frac{1}{n+1}\sum_{k=1}^{n-1}(-1)^{n-k}(k-1)!(n-k-1)! $$
-3) If $n=2m+1$, we have
-$$(2m+2)I_{2m+1}=\sum_{k=1}^{2m}(-1)^k (k-1)!(2m-k)!=B_m$$
-The change of parameter $k'=2m+1-k$ gives $B_m=-B_m$, hence $B_m=0$.
-4) If $n=2m$, we have
-$$-I_{2m}=2\frac{(2m-1)!}{(2m+1)^2}+\frac{1}{2m+1}\sum_{k=1}^{2m-1}(-1)^k (k-1)!(2m-1-k)! $$
-Note that $(m-1)!$ divides $(k-1)!(2m-1-k)!$ for all $k$, $1\leq k\leq 2m-1$. So to have $I_{2m}\in \mathbb{Z}$, it suffices that $(2m+1)^2$ divides $(2m-1)!$ and $2m+1$ divides $(m-1)!$;
-This is true for example if for $m\geq 2$, we have $2m+1=p_1p_2$ with $p_1$, $p_2$ prime with $p_1
-TITLE: Can anyone explain this equation (about $\frac\pi2$ )
-QUESTION [9 upvotes]: $${\frac{\pi}{2} = \lim_{l \to \infty} \prod_{j = 1}^{l + 1} \frac{(2j)(2j)}{(2j - 1)(2j+1)}}$$
-
-Hi all.
-My first impression of this equation is naive curiosity why "limit" is required.
-Can I just drop the limit sign and replace $l+1$ by $\infty$?
-Or, if "limit" cannot be omitted, why would we multiply all the terms up to $l+1$?
-Does it change anything if I replace $l+1$ by $l$?
-
-REPLY [11 votes]: Can I just drop the limit sign and replace $l+1$ by $\infty$?
<|endoftext|>
-TITLE: Can anyone explain this equation (about $\frac\pi2$ )
-QUESTION [9 upvotes]: $${\frac{\pi}{2} = \lim_{l \to \infty} \prod_{j = 1}^{l + 1} \frac{(2j)(2j)}{(2j - 1)(2j+1)}}$$
-
-Hi all.
-My first impression of this equation is naive curiosity about why the "limit" is required.
-Can I just drop the limit sign and replace $l+1$ by $\infty$?
-Or, if the "limit" cannot be omitted, why would we multiply all the terms up to $l+1$?
-Does it change anything if I replace $l+1$ by $l$?
-
-REPLY [11 votes]: Can I just drop the limit sign and replace $l+1$ by $\infty$?
-
-These mean the same thing:
-$$
-\prod_{j=1}^\infty\frac{2j}{2j-1}\frac{2j}{2j+1}
-\stackrel{\text{def}}{\equiv}\lim_{\ell\to\infty}\prod_{j=1}^\ell\frac{2j}{2j-1}\frac{2j}{2j+1}
-$$
-
-Does it change anything if I replace $l+1$ by $l$ ?
-
-Since $\ell\to\infty\iff\ell-1\to\infty$, we have
-$$
-\begin{align}
-\lim_{\ell\to\infty}\prod_{j=1}^\ell\frac{2j}{2j-1}\frac{2j}{2j+1}
-&=\lim_{\ell-1\to\infty}\prod_{j=1}^\ell\frac{2j}{2j-1}\frac{2j}{2j+1}\\
-&\equiv\lim_{\ell\to\infty}\prod_{j=1}^{\ell+1}\frac{2j}{2j-1}\frac{2j}{2j+1}
-\end{align}
-$$
-
-One way to evaluate the infinite product
-$$
-\begin{align}
-\prod_{j=1}^{\ell}\frac{2j}{2j-1}\frac{2j}{2j+1}
-&=\frac{2^{2\ell}\ell!^2}{(2\ell)!}\frac{2^{2\ell+1}\ell!(\ell+1)!}{(2\ell+2)!}\\
-&=\frac{2^{4\ell}\ell!^4}{(2\ell)!^2(2\ell+1)}\\
-&=\left(\frac{4^\ell}{\binom{2\ell}{\ell}}\right)^2\frac1{2\ell+1}
-\end{align}
-$$
-Using inequality $(9)$ from this answer, we get
-$$
-\frac{\pi\left(\ell+\frac14\right)}{2\ell+1}
-\le\prod_{j=1}^{\ell}\frac{2j}{2j-1}\frac{2j}{2j+1}
-\le\frac{\pi\left(\ell+\frac13\right)}{2\ell+1}
-$$
-Using the Squeeze Theorem, we get
-$$
-\lim_{\ell\to\infty}\prod_{j=1}^{\ell}\frac{2j}{2j-1}\frac{2j}{2j+1}=\frac\pi2
-$$
-
-Another way to evaluate the infinite product
-$$
-\begin{align}
-\prod_{j=1}^\ell\frac{2j}{2j-1}\frac{2j}{2j+1}
-&=\prod_{j=1}^\ell\frac{j}{j-\frac12}\frac{j}{j+\frac12}\\
-&=\frac{\Gamma(\ell+1)/\Gamma(1)}{\Gamma\left(\ell+\frac12\right)/\Gamma\left(\frac12\right)}\frac{\Gamma(\ell+1)/\Gamma(1)}{\Gamma\left(\ell+\frac32\right)/\Gamma\left(\frac32\right)}\\
-&=\frac{\Gamma\left(\frac12\right)\Gamma\left(\frac32\right)}{\Gamma(1)^2}\frac{\Gamma(\ell+1)\Gamma(\ell+1)}{\Gamma\left(\ell+\frac12\right)\Gamma\left(\ell+\frac32\right)}\\
-&=\frac12\frac{\Gamma\left(\frac12\right)^2}{\Gamma(1)^2}\frac{\Gamma(\ell+1)^2}{\Gamma\left(\ell+\frac12\right)^2}\frac1{\ell+\frac12}\\
-\end{align}
-$$
-By Gautschi's Inequality,
-$$
-\frac\ell{\ell+\frac12}\le\frac{\Gamma(\ell+1)^2}{\Gamma\left(\ell+\frac12\right)^2}\frac1{\ell+\frac12}\le\frac{\ell+1}{\ell+\frac12}
-$$
-By the Squeeze Theorem,
-$$
-\begin{align}
-\lim_{\ell\to\infty}\prod_{j=1}^\ell\frac{2j}{2j-1}\frac{2j}{2j+1}
-&=\frac12\frac{\Gamma\left(\frac12\right)^2}{\Gamma(1)^2}\cdot1\\
-&=\frac\pi2
-\end{align}
-$$
-
-REPLY [4 votes]: These are all good questions. Some comments:
-(1) Products with $\infty$ as the upper limit are not really products. You cannot multiply an infinite number of factors. Multiplication is defined as a binary operation, so you can only multiply two factors at a time. Repeating this many times allows a finite product; you can then try to take a limit to pass to the infinite case, but don't make the mistake of thinking that you are performing an infinite number of operations. Same thing with infinite sums. In other words, an "infinite product" is defined to be the limit of a sequence of partial products (and similarly for "infinite sums").
-(2) $\ell +1$ is not significant; one could use anything that grows without bound as $\ell\to\infty$.
-(3) No.
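-A quick numerical look at the convergence of the partial products (Python; a small check added for illustration):
-
-import math
-
-for l in (10, 100, 1000, 10000):
-    p = 1.0
-    for j in range(1, l + 1):
-        p *= (2*j) * (2*j) / ((2*j - 1) * (2*j + 1))
-    print(l, p, math.pi/2 - p)
-# the partial products increase toward pi/2 = 1.5707963...;
-# the bounds above suggest the error decays roughly like pi/(8*l)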
<|endoftext|>
-TITLE: Splitting Field of the polynomial $x^4+x+1$ over $\mathbb{F}_2$.
-QUESTION [8 upvotes]: What is the splitting field $\mathbb{F}_q$ of the polynomial $x^4+x+1$ over $\mathbb{F}_2$?
-I already know that the polynomial $x^4+x+1$ is irreducible and that its roots are distinct in some extension field of $\mathbb{F}_2$. However, I am not sure whether the splitting field must be of the form $\mathbb{F}_{2^k}$ and whether the polynomial $x^4+x+1$ must be a divisor of $x^{2^k}-x$.
-Note: I am new to field extensions and I haven't learned about the degree of a field extension, so please provide an explanation without using it.
-
-REPLY [3 votes]: The splitting field of this polynomial is
-$$
-K=\Bbb{F}_2[x]/\langle x^4+x+1\rangle.
-$$
-This follows from your observation that $p(x)=x^4+x+1$ is irreducible, and from the fact that if $\gamma=x+\langle x^4+x+1\rangle$ is a zero of that polynomial, then
-
-The other zeros of $p(x)$ are $\gamma^2$, $\gamma^4$, and $\gamma^8=\sqrt\gamma$, so $p(x)$ splits into linear factors over $K$, and
-All the non-zero elements of $K$ are actually powers of $\gamma$, so no smaller field will do.
-
-These facts can be seen from first principles as follows:
-
-We know $p(\gamma)=\gamma^4+\gamma+1=0$. Squaring this equation using the binomial formula and the fact that $2=0$ in $K$ we get
-$$0=\gamma^8+\gamma^2+1+2\cdot\text{something}=\gamma^8+\gamma^2+1=p(\gamma^2).$$ Repeating this shows that $\gamma^4$ and $\gamma^8$ are also zeros of $p(x)$. This trick is often facetiously called the freshman's dream, because we have all met beginners who want to square binomials like $(a+b)^2=a^2+b^2$ - a formula that only works in a commutative ring of characteristic two.
-The other fact is proved in the middle section of this answer, which I prepared for referrals like this. You see that I denote the field $K$ by $\Bbb{F}_{16}$ there.
-
-
-Remarks (and/or extras)
-
-The second part could, indeed, be more easily deduced using basic facts about degrees of field extensions. Because $p(x)$ has degree four, we can deduce that $K$ is a four-dimensional space over $\Bbb{F}_2$.
-That freshman's dream trick works over all finite fields. It implies that adjoining a single root of an irreducible polynomial always automatically gives the other roots as well. This fact is special to finite fields. You have surely seen examples of irreducible polynomials over $\Bbb{Q}$ where this does not happen. $x^3-2$ is the standard example of this phenomenon. The same fact can be rephrased as All finite extensions of finite fields are Galois extensions. But it sounds like you may not have heard of the concept of a Galois extension yet.
-We have, indeed, that $p(x)\mid x^{16}+x$. See this question for the other factors and a few more details.
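-If it helps to see this concretely, here is a small sketch (Python; the encoding of elements of $K$ as 4-bit integers, with $\gamma$ the class of $x$, is my own choice for the illustration):
-
-def mul(a, b):                      # multiplication in F2[x]/(x^4+x+1)
-    r = 0
-    while b:
-        if b & 1:
-            r ^= a
-        b >>= 1
-        a <<= 1
-        if a & 0b10000:
-            a ^= 0b10011            # reduce by x^4 + x + 1
-    return r
-
-def p(z):                           # evaluate z^4 + z + 1 in K
-    z2 = mul(z, z)
-    return mul(z2, z2) ^ z ^ 0b0001
-
-g = 0b0010                          # gamma = class of x
-g2 = mul(g, g); g4 = mul(g2, g2); g8 = mul(g4, g4)
-print([p(z) for z in (g, g2, g4, g8)])   # [0, 0, 0, 0]: all four roots already lie in K
-powers, z = set(), 1
-for _ in range(15):
-    z = mul(z, g); powers.add(z)
-print(len(powers))                       # 15: gamma generates every nonzero element of K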
<|endoftext|>
-TITLE: Simpler proof for $\frac{a^3b}{c}+\frac{b^3c}{d}+\frac{c^3d}{a}+\frac{d^3a}{b}\geq a^3+b^3+c^3+d^3$
-QUESTION [5 upvotes]: Let $a\geq b\geq c\geq d>0$. Prove that:
-$$\frac{a^3b}{c}+\frac{b^3c}{d}+\frac{c^3d}{a}+\frac{d^3a}{b}\geq a^3+b^3+c^3+d^3$$
-I have a proof, but my proof is very ugly:
-Let $c=d+u$, $b=d+u+v$ and $a=d+u+v+w$, where $u$, $v$ and $w$ are nonnegative.
-After these substitutions we'll get something obvious.
-Maybe there is something nice?
-Thank you!
-
-REPLY [4 votes]: Make the change of variables (where $x_1 \ge x_2 \ge x_3 \ge x_4$)
-$$a=e^{x_1},b=e^{x_2},c=e^{x_3},d=e^{x_4}.$$
-Then the inequality can be rewritten as:
-$$e^{3x_1+x_2-x_3}+e^{3x_2+x_3-x_4}+e^{3x_3+x_4-x_1}+e^{3x_4+x_1-x_2} \ge e^{3x_1}+e^{3x_2}+e^{3x_3}+e^{3x_4}$$
-$$A=(3x_1+x_2-x_3, 3x_2+x_3-x_4,3x_4+x_1-x_2,3x_3+x_4-x_1)$$
-$$B=(3x_1, 3x_2,3x_4,3x_3)$$
-The sequence $A$ majorizes the sequence $B$ $(A \succ B)$:
-$$A_1 \ge B_1$$
-$$A_1+A_2 \ge B_1+B_2$$
-$$A_1+A_2+A_3 \ge B_1+B_2+B_3$$
-$$A_1+A_2+A_3+A_4=B_1+B_2+B_3+B_4$$
-$f(x)=e^x$ is a convex function, so by Karamata's inequality:
-$$f(A_1)+f(A_2)+f(A_3)+f(A_4)\ge f(B_1)+f(B_2)+f(B_3)+f(B_4)$$
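-A quick randomized check of the inequality (Python; added only as a numerical sanity test):
-
-import random
-
-for _ in range(5):
-    a, b, c, d = sorted((random.uniform(0.1, 10.0) for _ in range(4)), reverse=True)
-    lhs = a**3*b/c + b**3*c/d + c**3*d/a + d**3*a/b
-    rhs = a**3 + b**3 + c**3 + d**3
-    print(lhs >= rhs)    # True in every trial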
<|endoftext|>
-TITLE: Unique factorization theorem in algebraic number theory
-QUESTION [5 upvotes]: Consider the set $S$ of numbers $a + b \sqrt {-6}$, where $a$ and $b$ are integers. Now, to prove that the unique factorization theorem does not hold in the set $S$, we can take the following example:
-$$
-10 = 2 \cdot 5 = (2+\sqrt {-6}) (2-\sqrt {-6})
-$$
-"Thus we can conclude that there is not unique factorization of 10 in set $S$. Note that this conclusion does not depend on our knowing that $2+\sqrt {-6}$ and $2-\sqrt {-6}$ are primes; they actually are, but it is unimportant in our discussion."
-Can someone explain why the conclusion is independent of the nature of $2+\sqrt {-6}$ and $2-\sqrt {-6}$? Basically, the unique factorization theorem is based on the fact that the factors are primes. So why is it independent?
-Note: This is from the book An Introduction to the Theory of Numbers, 5th Edition by Ivan Niven, Herbert S. Zuckerman, and Hugh L. Montgomery.
-
-REPLY [3 votes]: $2$ is irreducible but not prime.
-In fact, if $2=cd$ with $c$, $d$ non-units, then $N(c)=2$; but there is no solution to $a^2 + 6b^2 = 2$, as one sees by reducing modulo $6$. Thus $2$ is irreducible.
-$2 \mid 10 = (2+\sqrt {-6}) (2-\sqrt {-6})$, but if $2 \mid(2+\sqrt {-6})$ then $2 \mid\sqrt{-6}$, which is impossible since $2(a+b\sqrt{-6})=\sqrt{-6}$ has no solutions. The same argument works with the minus sign. Thus $2$ is not prime.
<|endoftext|>
-TITLE: Borel set that is not countable union or intersection of open or closed sets
-QUESTION [6 upvotes]: In this previous question, one can read the following:
-
-It is important to keep in mind, by the way, that Borel sets are more than just countable unions and intersections of open and closed sets.
-
-I tried to find an explicit example of a Borel set $A$ that can't be written only using a finite number of the following operations:
-
-countable unions of open or closed sets;
-countable intersections of open or closed sets.
-
-I know that there are examples of Borel sets that are neither $G_{\delta}$ nor $F_{\sigma}$, but I could write them using the two operations above.
-According to this article on the Borel hierarchy (if I understood it well), a Borel set can be obtained by a countable number of the two operations above.
-Thank you for your comments!
-
-REPLY [8 votes]: Your question isn't entirely clear, because "countable unions of open or closed sets" and "countable intersections of open or closed sets" aren't "operations", they are sets. I suppose you mean "countable unions" and "countable intersections", and what you're asking for is essentially a set that is not at a finite level ($\mathbf{\Sigma}_n^0$ or $\mathbf{\Pi}_n^0$) of the (boldface) Borel hierarchy.
-In fact, there is an explicit construction of a set at every level ($\mathbf{\Sigma}_\xi^0$ or $\mathbf{\Pi}_\xi^0$) of the Borel hierarchy that does not belong to any earlier level: this is constructed by considering a "universal" such set, i.e., roughly one whose sections give every possible set of that level. This is done for example in Kechris, Classical Descriptive Set Theory (1995, Springer GTM 156), §22.A, specifically theorems 22.3 and 22.4. The construction is indeed explicit, though it is not very transparent.
-Here's how I think I can simplify it slightly. Instead of working in $\mathbb{R}$, it is much better to work in the Cantor space $\mathcal{C} = 2^{\mathbb{N}}$ of binary sequences (note that this is homeomorphic to the standard Cantor set). A crucial fact is that $\mathcal{C}^2$ is homeomorphic to $\mathcal{C}$ (by separating a binary sequence into its even and odd terms), and in fact even $\mathcal{C}^{\mathbb{N}}$ is homeomorphic to $\mathcal{C}$ (using a bijection between $\mathbb{N}^2$ and $\mathbb{N}$).
-Start with a universal open set in $\mathcal{C}^2$: to get one, consider an enumeration of finite binary sequences, and consider the set $U$ of those $(x,y) \in \mathcal{C}^2$ such that, for some $i$ for which $y(i)=1$, the sequence $x$ starts with the $i$-th finite binary sequence in the enumeration. Since the set $V_i$ of elements $x\in\mathcal{C}$ which start with the $i$-th finite binary sequence gives an open basis $(V_i)$ of $\mathcal{C}$, this is a "universal" open set in the sense that every open set of $\mathcal{C}$ is $\{x : (x,y) \in U\}$ for some $y \in \mathcal{C}$. The complement $F$ of $U$ gives a universal closed set.
-Now we want to construct a universal $F_\sigma$, i.e., $\mathbf{\Sigma}_2^0$ set. To do this, consider the set of $(x,(y_n)) \in \mathcal{C} \times \mathcal{C}^{\mathbb{N}}$ such that $(x,y_n)$ (an element of $\mathcal{C}^2$) belongs to $F$ for some $n$: since every closed set of $\mathcal{C}$ can be obtained as $\{x : (x,y) \in F\}$ for some $y \in \mathcal{C}$, every $F_\sigma$ (=countable union of closed sets) can be obtained as a union of $\{x : (x,y_n) \in F\}$ for some sequence $y_n$ of elements of $\mathcal{C}$, that is, as the set of $x$ for which $(x,(y_n))$ belongs to the set I just described. By using a homeomorphism between $\mathcal{C}^{\mathbb{N}}$ and $\mathcal{C}$ we get a universal $F_\sigma$ set in $\mathcal{C}^2$. Its complement is a universal $G_\delta$ (=$\mathbf{\Pi}_2^0$) set.
-Redo the same construction as above, using the universal $G_\delta$ set instead of $F$ to get a universal $\mathbf{\Sigma}_3^0$ set. Then do it again and again: this gives a sequence $U =: B_1, B_2, B_3, B_4,\ldots$ where $B_n$ is a universal $\mathbf{\Sigma}_n^0$ set in $\mathcal{C}^2$.
-Finally, do the construction one last time but with $n$ varying: that is, consider the set of $(x,(y_n)) \in \mathcal{C} \times \mathcal{C}^{\mathbb{N}}$ such that, for some $n$, the element $(x,y_n)$ (of $\mathcal{C}^2$) belongs to the complement of $B_n$. Bring it to $\mathcal{C}^2$ (or to $\mathcal{C}$) using homeomorphisms: call this $B_\omega$.
-Note that this construction is completely explicit: the axiom of choice was never used, for example, and given the choice of a few finitistic bijections (between $\mathbb{N}^2$ and $\mathbb{N}$ or finite binary sequences and $\mathbb{N}$), the set is completely specified.
-Now why can't the universal $\mathbf{\Sigma}_n^0$ set $B_n$ be $\mathbf{\Pi}_n^0$?
This is just a diagonal argument: if $B_n$ were $\mathbf{\Pi}_n^0$ then the complement of its diagonal $\{y : (y,y) \not\in B_n\}$ would be $\mathbf{\Sigma}_n^0$, and this clearly contradicts the universality of $B_n$. So $B_n$ is no earlier than stated in the Borel hierarchy, and $B_\omega$ is a Borel set which cannot be constructed by finitely using countable unions or intersections.<|endoftext|> -TITLE: Conjectured closed form for $\sum_{n=-\infty}^\infty\frac{1}{\cosh\pi n+\frac{1}{\sqrt{2}}}$ -QUESTION [28 upvotes]: I was trying to find closed form generalizations of the following well known hyperbolic secant sum -$$ -\sum_{n=-\infty}^\infty\frac{1}{\cosh\pi n}=\frac{\left\{\Gamma\left(\frac{1}{4}\right)\right\}^2}{2\pi^{3/2}},\tag{1} -$$ -as -$$ -S(a)=\sum_{n=-\infty}^\infty\frac{1}{\cosh\pi n+a}. -$$ -In particular I find by numerical experimentation -$$ -\displaystyle \frac{\displaystyle\sum_{n=-\infty}^\infty\frac{1}{\cosh\pi n+\frac{1}{\sqrt{2}}}}{\displaystyle\sum_{n=-\infty}^\infty\frac{1}{\cosh\pi n}}\overset{?}=-\frac{1}{2}\left(1+\sqrt{2}\right)+\sqrt{2+\sqrt{2}}\tag{2} -$$ -(Mathematica wasn't able to find a closed form directly, but then I decided to switch to calculation of ratios of the sums, calculated ratios numerically and then was able to recognize this particular ratio as a root approximant. This was subsequently verified to 1000 decimal places). -I simplified this expression from the previous edition of the question. -Unfortunately for other values of $a$ I couldn't find a closed form. Of course $(2)$ together with $(1)$ would imply a closed form for the sum $S(1/\sqrt{2})$ -How one can prove $(2)$? - -REPLY [14 votes]: Let -$$ -S_1(\alpha)=\sum_{n=-\infty}^\infty\frac{1}{\cosh\pi \alpha n+\frac{1}{\sqrt{2}}}, -$$ -$$ -S_2(\alpha)=\sum_{n=-\infty}^\infty\frac{1}{\cosh\pi \alpha n-\frac{1}{\sqrt{2}}}, -$$ -then due to $2\cosh^2x-1=\cosh 2x$ one obtains -$$ -S_2(\alpha)-S_1(\alpha)=2\sqrt{2}\sum_{n=-\infty}^\infty\frac{1}{\cosh 2\pi \alpha n}, -$$ -$$ -S_2(\alpha)+S_1(\alpha)=4\sum_{n=-\infty}^\infty\frac{\cosh\pi\alpha n}{\cosh 2\pi \alpha n}. -$$ -Now if one defines elliptic integrals of the first kind $K$ and $\Lambda$ according to equations $\frac{K'}{K}=\frac{K(k')}{K(k)}=\alpha$, $\frac{\Lambda'}{\Lambda}=\frac{K(k_1')}{K(k_1)}=2\alpha$, where $k'=\sqrt{1-k^2},~k_1'=\sqrt{1-k_1^2}$, then the well known formulas from the theory of elliptic functions (see Whittaker and Watson, A Course of Modern Analysis) state that -$$ -\sum_{n=-\infty}^\infty\frac{1}{\cosh \pi \alpha n}=\frac{2K}{\pi},~\sum_{n=-\infty}^\infty\frac{1}{\cosh 2\pi \alpha n}=\frac{2\Lambda}{\pi},~\sum_{n=-\infty}^\infty\frac{\cosh\pi\alpha n}{\cosh 2\pi \alpha n}=\frac{2\Lambda}{\pi}~\text{dn}(i\Lambda'/2,k_1), -$$ -$$ -k_1=\frac{1-k'}{1+k'},\quad \Lambda=\frac{1}{2}(1+k')K,\quad \text{dn}(i\Lambda'/2,k_1)=\sqrt{1+k_1}. -$$ -From this by trivial algebra one can deduce that - -$$ -S_1(K'/K)=\frac{K\sqrt{2}}{\pi}(1+k')\left(\frac{2}{\sqrt{1+k'}}-1\right). -$$ - -Now for $k=1/2$ one has $k'=1/2$, $K=K'=K_0$, therefore -$$ -\frac{\displaystyle\sum_{n=-\infty}^\infty\frac{1}{\cosh\pi n+\frac{1}{\sqrt{2}}}}{\displaystyle\sum_{n=-\infty}^\infty\frac{1}{\cosh\pi n}}=\frac{S_1(1)}{2K_0/\pi}=\frac{(1+k')}{\sqrt{2}}\left(\frac{2}{\sqrt{1+k'}}-1\right)=\sqrt{2+\sqrt{2}}-\frac{1+\sqrt{2}}{2}. -$$<|endoftext|> -TITLE: Central Limit Theorem for Lévy Process -QUESTION [6 upvotes]: I am reading a book, which uses the Central Limit Theorem of Lévy Processes $X_t$ without mentioning the exact theorem. 
-Due to the infinite divisibility property I can write $X_t$ as a sum of $N$ i.i.d. random variables $X^i_{t/N}$:
-$$
-X_t=\sum_{i=1}^N X^i_{t/N}
-$$
-The problem is that I want $t\rightarrow \infty$, but for the CLT I have to keep my sequence of equidistant i.i.d. random variables fixed (i.e. $t/N$ fixed). But they do change as $t\rightarrow \infty$.
-The book now just says that, by the central limit theorem for Lévy processes, it holds for $t\rightarrow \infty$ that
-\begin{align}
-\frac{X_t-\overbrace{tE[X_{1}]}^{=E[X_t]}}{\sqrt{t}}\rightarrow \mathcal{N}(0,\operatorname{Var}[X_1])\\
-\sqrt{t} \left(\frac{X_t}{t}-E[X_1]\right)\rightarrow \mathcal{N}(0,\operatorname{Var}[X_1])
-\end{align}
-I can't find any proofs, lectures or literature about it. Can you help me out?
-
-REPLY [5 votes]: Without any additional assumptions on the Lévy process $(X_t)_{t \geq 0}$, a central limit theorem need not hold.
-Let $(X_t)_{t \geq 0}$ be a (one-dimensional) Lévy process with Lévy triplet $(b,\sigma^2,\nu)$. Define
-$$T(x) := \nu((x,\infty)) + \nu((-\infty,-x))$$
-and
-$$U(x) := \sigma^2+2 \int_0^x y T(y) \, dy$$
-for $x>0$. There is the following statement by Doney and Maller:
-
-
-Suppose that $T(x)>0$ for all $x>0$. Then there exist deterministic functions $a(t),b(t)>0$ such that $$\frac{X_t-a(t)}{b(t)} \stackrel{t \to \infty}{\to} N(0,1) \tag{1}$$ if, and only if, $$\frac{U(x)}{x^2 T(x)} \stackrel{x \to \infty}{\to} \infty.$$
-Suppose that $T(x)=0$ for all $x>0$ (i.e. the Lévy measure $\nu$ vanishes). Then $(1)$ holds if, and only if, $\sigma^2>0$. In this case, $a(t) = t \mathbb{E}(X_1)$ and $b(t) = \sigma \sqrt{t}$ is admissible.
-
-
-In dimension $d>1$ there are CLT results for Lévy processes by Grabchak.
-References:
-
-R.A. Doney and R.A. Maller: Stability and Attraction to Normality for Lévy Processes at Zero and at Infinity. Journal of Theoretical Probability, Vol. 15, July 2002.
-Michael Grabchak: A note on the multivariate CLT and convergence of Lévy processes at long and short times. arXiv.
<|endoftext|>
-TITLE: Is there a relationship between isometry as defined on metric spaces and those on vector spaces?
-QUESTION [5 upvotes]: I am taking a course on linear algebra and another on real analysis.
-
-In linear algebra we defined that two vector spaces are isomorphic if there exists a
-
-bijective and linear map
-
-between the two vector spaces.
-In real analysis we defined that two metric spaces are isometric if there exists a
-
-bijective, distance preserving map $d(x,y) = d'(fx, fy)$
 between the two metric spaces
-
-I looked up the definition of isometry online and many sources tell me it is a bijective "structure preserving" map.
-Is there some commonality between these so-called structures?
-Or is it hopeless for me to guess what would be an isomorphism defined between topological spaces, Hilbert spaces or Banach spaces until I see the definitions?
-
-REPLY [2 votes]: Normed spaces (including Banach spaces) are in particular metric spaces: the metric is given by $d(x,y) = |x - y|$. (This should agree with your intuition for $\mathbb{R}^3$). The notion of isometry is then the same as what you would guess.
-But an isomorphism doesn't have to be an isometry: for example, scaling by 2 is an isomorphism (of a real vector space), but it is not an isometry.
-Generally one uses the expression "isometric isomorphism" to describe structure-preserving maps between normed vector spaces. For example: Every infinite dimensional separable Hilbert space is isometrically isomorphic to the space of square summable sequences. (The space of square summable sequences is the space of sequences of real numbers $(a_i)$ with $\Sigma a_i^2 < \infty$. The norm is $\left(\Sigma a_i^2\right)^{1/2}$. Actually it's a Hilbert space, so it has an inner product. Can you see it? Exercise: this space is complete.)
-Note that an isometric map is necessarily injective, so if the vector space is finite dimensional, then a linear isometry is also an isomorphism (of vector spaces). But in the infinite dimensional case this is not true: for example, consider the "shift" map on the (Hilbert) space of square summable sequences, sending the sequence $(a_0, \ldots)$ to $(0,a_0, \ldots)$.
-In general a topological space does not have a metric.
-The general notion of a structure preserving map varies depending on the objects involved, but the general idea is: if $f : X \to Y$ is BLAH-structure preserving, then any proof of a fact having to do with $X$'s BLAH-structure can be translated into a proof of that same fact about $Y$'s BLAH-structure via $f$.
-For example: If $V$ and $W$ are linear spaces, and $f$ is an isomorphism of vector spaces ...
-
-Any basis $v_i$ in $V$ becomes a basis in $W$ under $f$. The proof that $v_i$ is a basis has two parts, linear independence and spanning. In addition to the linearity of $f$, one uses the surjectivity of $f$ to show that the $f(v_i)$ still span, and the injectivity to show that the $f(v_i)$ are still independent. (Though there are other ways to do this.)
-Then the space of linear functions $W \to \mathbb{R}$ can be (isomorphically) identified with the space of linear functions $V \to \mathbb{R}$ via $f$. How? (Exercise.)
-
-Etc.
-If $V$ has more structure, maybe a norm, then a linear isomorphism $f$ may not preserve statements about it. One has to go to infinite dimensions for this to be interesting, since all norms on $\mathbb{R}^n$ are equivalent (good exercise. Possible hint: Think about the unit ball, and the relationship between the unit ball and norms via the so-called Minkowski functional.)
-As an example: Any two vector spaces with bases of the same cardinality are isomorphic. So take $V$ to be the Banach space (norm is sup norm) of continuous functions on $[0,1]$, called $C[0,1]$, and $W$ to be the normed space of $C^1$ functions (continuously differentiable) on $[0,1]$, again with the sup norm.
-These spaces are isomorphic, having bases of the same cardinality (you can use Taylor's theorem to write $W = \mathbb{R} \oplus C[0,1]$: $f = f(0) + \int_0^x f'(t) dt$, so the map sends $f$ to the pair $(f(0), f')$; that it is bijective and linear is a combination of some standard calculus theorems: every continuous function $g(x)$ is the derivative of the function $f(x) = \int_0^x g(t) dt$, the derivative of a constant is zero, the integral of zero is zero, integration and differentiation are linear, etc.)*, but not isometric, because the former is complete as a metric space and the latter is not.
-*Alternatively, you can use the functionals $e_a(f) = f(a)$, for $a \in \mathbb{Q}$, to embed each in the space $\mathbb{R}^{\mathbb{Q}}$. This gives an upper bound for the cardinality of their bases, which is $P(\mathbb{N})$. Then you need to show that neither has a countable basis: since $W \subset V$, you just need to show that $W$ cannot have a countable basis. So you need to build an uncountable linearly independent set: if you subdivide the interval into halves (and quarters, and eighths, etc.), and use bump functions, you can build this uncountable family. (I am assuming that there is no cardinality between $|\mathbb{N}|$ and $|P(\mathbb{N})|$... I think this is right but I don't really know set theory too well. Am I assuming the continuum hypothesis?)
-(Question: You can put a norm on the space of $C^k$ functions on the interval so that it becomes a Banach space - the $C^k$ norm, which is the sum of the sup norms of the first $k$ derivatives. Are these Banach spaces isometrically isomorphic, for various $k$, or maybe they are all distinct? $C^k([0,1]) \cong \mathbb{R}^k \oplus C^0([0,1])$ - as vector spaces, using Taylor's theorem, but this is not an isometry I think. However, I am reasonably sure that it is a homeomorphism.)
-It may be enlightening to look up the definition of an isomorphism in a category, and also the Yoneda lemma, but again maybe not. Possibly better to think about different objects that have many layers of structure, and what kind of things are preserved by different maps.
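-A tiny numerical illustration of the shift map mentioned above (Python; truncated sequences stand in for elements of $\ell^2$):
-
-import numpy as np
-
-x = np.random.randn(1000)            # a finite chunk of a square-summable sequence
-Sx = np.concatenate(([0.0], x))      # shift: (a0, a1, ...) -> (0, a0, a1, ...)
-print(np.linalg.norm(x), np.linalg.norm(Sx))   # equal: the shift preserves the l2 norm
-# yet the shift is not surjective: every image starts with 0, so nothing maps onto
-# e_1 = (1, 0, 0, ...); a linear isometry need not be an isomorphism in infinite dimensions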
<|endoftext|>
-TITLE: The equation $-1 = x^2 + y^2$ in finite fields
-QUESTION [6 upvotes]: In an ordered field we have $x^2 \ge 0$, hence the equation $-1 = x^2 + y^2$ has no solution. But what about finite fields in general? What is the solution set of the equation
-$$
- -1 = x^2 + y^2
-$$
-in a finite field?
-
-REPLY [8 votes]: Others have explained why there exists at least one solution $P_0=(x_0,y_0)\in \Bbb{F}_q^2$. The standard trick for finding all the solutions goes as follows (see also Lubin's answer).
-If $P=(x,y)\in \Bbb{F}_q^2$ is another point on the curve $x^2+y^2+1=0$, then the line $L$ connecting $P_0$ and $P$ is either vertical, when $x=x_0$ and thus $y=\pm y_0$, or it has a slope $t\in\Bbb{F}_q$. In the latter case the equation of the line $L$ is thus
-$$
-y-y_0=t(x-x_0).
-$$
-Plugging the solution $y=t(x-x_0)+y_0$ into the equation $y^2+x^2+1=0$ gives
-$$
-x^2+t^2(x-x_0)^2+2t(x-x_0)y_0+y_0^2+1=0.
-$$
-After expanding and combining equal degree terms we arrive at
-$$
-(t^2+1)x^2+[2ty_0-2t^2x_0]x+[t^2x_0^2-2tx_0y_0+y_0^2+1]=0.
-$$
-Because $P_0$ is on that quadratic curve, $x=x_0$ is one solution. From the Vieta relations we see that the other solution is thus
-$$
-x=x(t):=-\frac{2ty_0-2t^2x_0}{t^2+1}-x_0.
-$$
-Because the point $P$ was assumed to be on the line $L$, we get
-$$
-y=y(t):=t(x(t)-x_0)+y_0.
-$$
-So we get all the points $P$ of the curve $x^2+y^2+1=0$ as $P(t)=(x(t),y(t))$ with $t$ ranging over the field $\Bbb{F}_q$, as well as the point $P(\infty)=(x_0,-y_0)$ corresponding to the case of $L$ having an infinite slope.
-We also observe that if $t^2+1=0$, then the formulas involve division by zero, so we need to throw those values of $t$ away. As a summary:
-
-If $t^2+1\neq0$ for all $t\in \Bbb{F}_q$, there are exactly $q+1$ solutions $(x,y)\in\Bbb{F}_q^2$.
-If $t^2+1=0$ has two solutions in $\Bbb{F}_q$, then the number of points with coordinates in $\Bbb{F}_q$ on the curve $x^2+y^2+1=0$ is equal to $q-1$.
-
-It is worth remarking that the curve $x^2+y^2+1=0$ has genus zero, so its projective version $X^2+Y^2+Z^2=0$ always has exactly $q+1$ points in $\Bbb{P}^2(\Bbb{F}_q)$. Two of those points will lie on the line at infinity when $-1$ has a square root in $\Bbb{F}_q$.
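-A brute-force count over small prime fields matches the two cases; recall that $t^2+1=0$ is solvable in $\Bbb F_p$ exactly when $p\equiv 1\pmod 4$ (Python; added as a check):
-
-for p in (3, 5, 7, 11, 13, 17, 19):
-    count = sum((x*x + y*y + 1) % p == 0 for x in range(p) for y in range(p))
-    print(p, count, p - 1 if p % 4 == 1 else p + 1)
-# the two printed numbers always agree: q-1 solutions when -1 is a square mod p,
-# q+1 solutions otherwise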
<|endoftext|>
-TITLE: How can the Hadwiger–Nelson problem depend on the axioms of set theory?
-QUESTION [9 upvotes]: The Wikipedia page on the Hadwiger–Nelson problem says the following two things:
-
-The correct value may actually depend on the choice of axioms for set theory.
-
-and
-
-the problem is equivalent (under the assumption of the axiom of choice) to that of finding the largest possible chromatic number of a finite unit distance graph.
-
-Assuming we take the axiom of choice as a given, this latter remark makes the problem sound like a combinatorial problem - not one that you would expect to depend on foundational issues. Is it really possible that two different models of ZFC could contain distinct finite subgraphs of the unit distance graph?
-
-REPLY [8 votes]: I believe you are misinterpreting the statement.
-Any two well-founded models of ZF with the same ordinals will agree on the finite planar unit-distance graphs, and their chromatic numbers. This is due to the Shoenfield absoluteness theorem. (Actually, much more is true - if I'm not missing something, any two models of ZF with the correct $\omega$ should agree on the finite planar unit-distance graphs and their chromatic numbers.)
-However, the axiom of choice is required to show that the maximum of these numbers is indeed the chromatic number of the plane! In the absence of choice, these two numbers don't have to be the same.
-
-Here's a sketch of how to prove that the chromatic number of the plane is the maximum of the chromatic numbers of the finite unit-distance graphs, assuming choice. We'll actually prove a stronger result: that an arbitrary graph is $k$-colorable iff all of its finite subgraphs are $k$-colorable (this is the Erdős–de Bruijn theorem).
-We use ultrafilters (we don't have to, but they're fun). Suppose $G$ is a graph; let $\mathcal{F}$ be the set of finite subgraphs of $G$, and suppose every $F\in\mathcal{F}$ is $k$-colorable. Fix, for each $F\in\mathcal{F}$, a $k$-coloring $c_F$ of $F$. Now let $\mathcal{U}$ be an ultrafilter on $\mathcal{F}$ such that, for each $F\in\mathcal{F}$, the set $\{H\in\mathcal{F}: F\subseteq H\}$ is in $\mathcal{U}$ (such an ultrafilter exists since the family of such sets has the finite intersection property).
-Now let $\chi$ be the "ultralimit" of the $c_F$s along $\mathcal{U}$: for a vertex $v\in G$, set $\chi(v)=i$ iff $\mathcal{U}$-many colorings $c_F$ have $c_F(v)=i$. It's not hard to verify that this is in fact a $k$-coloring of $G$.
-(Strictly speaking, this only shows that the chromatic number of the plane is at most $k$, but the other inequality is immediate.)
<|endoftext|>
-TITLE: Meaning of "polynomially larger"
-QUESTION [11 upvotes]: For example
-Is $n$ polynomially larger than $\frac{n}{\log n}$? Than $n \log n$?
-Is $n^2$ polynomially larger than $\frac{n}{\log n}$? Than $n \log n$?
-I am trying to understand the difference, because apparently the first line isn't, but the second is (Master Theorem).
-
-REPLY [10 votes]: "Polynomially larger" means that the ratio of the functions falls between two polynomials, asymptotically. Specifically, $f(n)$ is polynomially greater than $g(n)$ if and only if there exist generalized polynomials (fractional exponents are allowed) $p(n),q(n)$ such that the following inequality holds asymptotically:
-$$p(n)\leq \frac{f(n)}{g(n)}\leq q(n)$$
-For the first problem, the ratio is equal to $\log(n)$. It is not the case that there exist such polynomials $p(n),q(n)$ with $p(n)\leq \log(n)\leq q(n)$ asymptotically, because no positive power of $n$ is a lower bound for $\log(n)$. Thus it is not polynomially bounded.
$n\log(n)$ is the same (even the same quotient if taken in the other order).
-For the second problem, the ratio is equal to $n\log(n)$. It is the case that $n\leq n\log(n)\leq n^2$ asymptotically, so it is polynomially bounded and therefore $n^2$ is polynomially larger. $\frac{n^2}{n\log(n)}=\frac{n}{\log(n)}$, and we have that (asymptotically) $$n^\frac{1}{3}\leq \frac{n}{\log(n)}\leq n$$
<|endoftext|>
-TITLE: what does ≼ or ≺ mean?
-QUESTION [6 upvotes]: I was reading a paper about well-orderings and this came up:
-Suppose (E, ≤) and (F, ≼) are isomorphic well-orderings. Then there exists a unique isomorphism from (E, ≤) to (F, ≼).
-I've been scouring the internet for what this symbol means. Someone said it means "precedes", which led me to wonder whether 1 ≼ 2 would be true; but then someone else said that X ≼ Y $\iff X = X\land Y$, which made no sense to me. Could someone explain the meaning of this symbol? Thanks.
-
-REPLY [6 votes]: The curly versions of the less than and greater than signs are commonly used to denote some other ordering than the one that we are usually talking about. For instance there is a partial ordering on the symmetric matrices, where $A \preccurlyeq B$ if and only if $B-A$ is a nonnegative definite matrix. We write $\preccurlyeq$ instead of $\leq$ to avoid confusion with the ordering that we use more commonly. This would be especially important if we had two orderings on the same set.
-But it's just a symbol. It would be perhaps confusing but certainly not wrong to use it some other way.
<|endoftext|>
-TITLE: Fourier transform of $1/|x|^{\alpha}$.
-QUESTION [9 upvotes]: My problem is to prove the following identity:
-$$C_{\alpha}\int_{\mathbb R^n} \frac{1}{|x|^\alpha} \phi(x) dx = C_{n-\alpha}\int_{\mathbb R^n} \frac{1}{|x|^{n-\alpha}} \widehat{\phi}(x) dx$$
-where $\phi:\mathbb R^n \to \mathbb C$ is in the Schwartz space and $0<\alpha<n$.
<|endoftext|>
-TITLE: How to show this fraction is equal to 1/2?
-QUESTION [12 upvotes]: I have the fraction:
-$$\frac{\left(2 \left(\frac {a}{\sqrt{2}}\right) + a \right) a} {2(1 + \sqrt{2})a^2}$$
-Using Mathematica, I've found that this simplifies to $\frac{1}{2}$, but how did it achieve the result? How can I simplify that fraction to $\frac12$?
-
-REPLY [8 votes]: I think you may have missed that, by definition,
-$\sqrt{2}\sqrt{2}=2$
-and thus
-$\frac{2}{\sqrt{2}}=\sqrt{2}.$
-This simplification issue is quite common. Using this and multiplying out/factoring terms gives your desired result:
-$$=\frac{(\sqrt{2}+1)a^2}{2(\sqrt{2}+1)a^2}$$
-in which the factor $\frac{(\sqrt{2}+1)a^2}{(\sqrt{2}+1)a^2}$ reduces to one in the case $a \neq 0$.
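-For what it's worth, a one-line check reproduces the simplification (Python/sympy; added as a sanity check):
-
-from sympy import symbols, sqrt, simplify
-
-a = symbols('a', positive=True)
-expr = ((2*(a/sqrt(2)) + a) * a) / (2*(1 + sqrt(2)) * a**2)
-print(simplify(expr))   # 1/2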
<|endoftext|>
-TITLE: Relative entropy for Wiener measure/Wiener measure with Girsanov change of drift
-QUESTION [5 upvotes]: I've read an article on relative entropy properties that gives a result for the relative entropy of two equivalent measures as they are found in applications of Girsanov's theorem.
-For two measures $P$, $Q$ we define the relative entropy the following way:
-$H(Q;P)=\int_\Omega\frac{dQ}{dP}\log\left(\frac{dQ}{dP}\right)dP$
-And Girsanov's theorem is a statement about measures related through the density
-$\frac{dQ}{dP}=\exp\left(\int_0^T\theta_tdB_t-\frac{1}{2}\int_0^T\theta_t^2dt\right)$
-where $B_t$ is a standard Brownian motion (under the measure $P$) and (according to Girsanov) $B_t-\int_0^t\theta_sds$ is a standard Brownian motion under $Q$.
-The claim is that
-$H(Q;P)=\frac{1}{2}\int_0^T\theta^2_tdt$
-which is claimed to be a straightforward calculation, but I can't quite get to the end result. Any help would be greatly appreciated.
-Here is how far I have gotten:
-$H(Q;P)=\int_\Omega\frac{dQ}{dP}\left(\int_0^T\theta_tdB_t-\frac{1}{2}\int_0^T\theta_t^2dt\right)dP$
-$=\int_\Omega\frac{dQ}{dP}\left(\int_0^T\theta_tdB_t\right)dP$
-$\quad-\left(\frac{1}{2}\int_0^T\theta_t^2dt\right)\int_\Omega \frac{dQ}{dP} dP$
-$=\int_\Omega\frac{dQ}{dP}\left(\int_0^T\theta_tdB_t\right)dP$
-$\quad-\frac{1}{2}\int_0^T\theta_t^2dt\cdot1$
-where I've used that the integral of a density must be one. I guess I'd have to use something like the Itô isometry to get from the integral with respect to the Brownian motion to something deterministic with the squared integrand, but I can't quite figure out what to do with the density to get to it. If I could somehow justify something like
-$\int_\Omega\frac{dQ}{dP}\left(\int_0^T\theta_tdB_t\right)dP=\int_\Omega\left(\int_0^T\theta_tdB_t\right)^2dP=\int_0^T\theta_t^2dt$
-that would conclude the calculation. Any ideas?
-
-REPLY [3 votes]: As you say,
-$$\theta_{t}\text{d}B_{t} = \theta_{t}\text{d}W_{t}+\theta_{t}^2\text{d}t\ ,$$ where $W$ is a $Q$-Brownian motion. Hence $$\log\frac{\text{d}Q}{\text{d}P} = \int \theta_{t}\text{d}W_{t} + \frac{1}{2} \int \theta_{t}^2\text{d}t$$ and, since $H(Q;P)=\mathbb{E}_{P}\left(\tfrac{dQ}{dP}\log\tfrac{dQ}{dP}\right)=\mathbb{E}_{Q}\left(\log\tfrac{dQ}{dP}\right)$, the relative entropy becomes $$H(Q;P)=\mathbb{E}_{Q}\left(\int \theta_{t}\text{d}W_{t}\right)+\frac{1}{2}\mathbb{E}_{Q}\left(\int \theta_{t}^2\text{d}t\right) = 0 + \frac{1}{2}\int \theta_{t}^2\text{d}t$$ since the stochastic integral is a $Q$-martingale starting at zero and the process $\theta$ is deterministic.
-Hence the relative entropy equals $$H(Q;P) = \frac{1}{2}\int \theta_{t}^2\text{d}t$$ as claimed.
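-For constant $\theta$ the identity is easy to confirm by simulation, since then $\int_0^T\theta\,dB_t=\theta B_T$ (Python; a Monte Carlo sketch with example parameters):
-
-import numpy as np
-
-rng = np.random.default_rng(1)
-theta, T, N = 0.8, 1.0, 10**6               # example values; theta constant in t
-BT = np.sqrt(T) * rng.standard_normal(N)    # samples of B_T under P
-logZ = theta * BT - 0.5 * theta**2 * T      # log dQ/dP
-print(np.mean(np.exp(logZ) * logZ))         # Monte Carlo estimate of E_P[Z log Z]
-print(0.5 * theta**2 * T)                   # = 0.32, the claimed entropy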
-$$ -This implies that the sequence $\left(\varphi\left(e_n\right)\right)_{n\geqslant 1}$ converges to $0$. Indeed, if not, there is a $\delta\gt 0$ and a sequence $(n_k)$ of integers growing to infinity such that $\left\lvert \varphi\left(e_{n_k}\right)\right\rvert\gt \delta$ for all $k$. Let $\left(b_k\right)\in \ell^p\setminus \ell^1$ and define $a_n$ by $a_{n_k}=b_k$ and $a_n=0$ if $n$ is not of the form $n_k$ for some $k$ to get a contradiction with $(*)$.<|endoftext|> -TITLE: Subspaces of $C^\alpha [0,1]$ are finite dimensional if closed in $C[0,1]$ -QUESTION [7 upvotes]: For $0 < \alpha < 1$, let $C^\alpha([0,1])$ be the subspace of $C[0,1]$ consisting of continuous functions with norm -$$ \| f\|_\alpha = \|f\| + \sup_{x\neq y} \frac{|f(x) - f(y)|}{|x-y|^\alpha},$$ -where $\|\cdot\|$ is the ordinary sup norm on $C[0,1]$. - -Problem: Let $X$ be a linear subspace of $C^\alpha[0,1]$. Suppose further $X$ is closed in $C[0,1]$. Then $X$ is finite dimensional. - -My strategy is to show that the unit ball $B \subseteq X$ is compact w.r.t. the $\| \cdot \|_\alpha$ norm. By the Arzela-Ascoli theorem, I can prove that $B$ is compact w.r.t. the $\| \cdot \|$ norm. It is clear to me that -$$\|\cdot\| \leq \|\cdot\|_{\alpha}.$$ -However, how can I show that there is some constant $C$ so that -$$\|\cdot\|_\alpha \leq C\|\cdot\|?$$ - -REPLY [2 votes]: I already know that the unit ball in $X$ (denoted $B$) is compact in the $\|.\|$ topology. So I just need to have the estimate -$$\| . \|_{\alpha} \leq C \|.\|$$ -for some $C$ to conclude that $B$ is compact in the $\|.\|_{\alpha}$ topology. Now $X$ is closed in $C[0,1]$, the inclusion $i : C^\alpha[0,1] \to C[0,1]$ is continuous so that $i^{-1}(X) = (X,\|.\|_{\alpha})$ is closed in $C^{\alpha}[0,1]$. Now we consider the "identity map" -$$\Phi : (X,\|.\|_{\alpha}) \to (X,\|.\|)$$ -which is bijective and continuous. The domain is a Banach space by the paragraph above (a closed subspace of a Banach space is a Banach space). It follows by the open mapping theorem that $\Phi^{-1}$ is continuous, so we have the estimate above as desired.<|endoftext|> -TITLE: Stuck proving that if $m$ and $n$ are perfect squares. Then $m+n+2\sqrt{mn}$ is also a perfect square. -QUESTION [5 upvotes]: I am relatively new to proofs and can't seem to figure out how to solve an exercise. -I am trying to prove: - -Suppose that $m$ and $n$ are perfect squares. Then $m+n+2\sqrt{mn}$ is also a perfect square. - -I know that per the definition of a perfect square, that $m=a^2$ and $n=b^2$, if a and b are some positive integer. -I can then use substitution to rewrite the statement as: -$$a^2+b^2+2\sqrt{a^2b^2}$$ -I also know that $2\sqrt{a^2b^2}$ can be simplified to: -S -$$a^2+b^2+2ab$$ -I am stuck after this point though. I don't know how to eliminate the $2ab$. - -REPLY [4 votes]: You don't need to eliminate the $2ab$ term. -Notice that $(a+b)^2 = (a+b)(a+b) = a^2+ab+ba+b^2 = a^2+b^2+2ab$. - -REPLY [2 votes]: Now use the fact that $a^2+b^2+2ab = (a+b)^2$.<|endoftext|> -TITLE: Show $f_n = f \circ f \circ \dots \circ f \longrightarrow 0$ uniformly on compact sets -QUESTION [6 upvotes]: I am seeking help on a complex analysis qualifying exam problem. - -Let $D$ be a bounded open connected subset of $\mathbf{C}$ containing $0$ and let $f \colon D \to D$ be an analytic function satisfying $f(0) = 0$ and $\left| f^\prime \right|(0) < 1$. Define $f_n = f \circ f \circ \dots \circ f$ ($n$ times). Prove that $f_n \longrightarrow 0$ uniformly on compact sets. 
<|endoftext|>
-TITLE: Show $f_n = f \circ f \circ \dots \circ f \longrightarrow 0$ uniformly on compact sets
-QUESTION [6 upvotes]: I am seeking help on a complex analysis qualifying exam problem.
-
-Let $D$ be a bounded open connected subset of $\mathbf{C}$ containing $0$ and let $f \colon D \to D$ be an analytic function satisfying $f(0) = 0$ and $\left| f^\prime(0) \right| < 1$. Define $f_n = f \circ f \circ \dots \circ f$ ($n$ times). Prove that $f_n \longrightarrow 0$ uniformly on compact sets.
-
-The hint is to start locally around zero. I was able to prove that there exists a neighborhood $U$, contained within the disk of convergence of the power series of $f$ about $0$, such that $f_n \longrightarrow 0$ uniformly on compact sets contained in $U$. I note that the proof did not use the boundedness of $D$.
-I am having trouble extending the result to the entirety of $D$. Perhaps I am supposed to exploit the connectedness of $D$, considering something like $E = \left\{ z \in D \colon \text{ the result is true locally around } z \right\}$ and showing this set is open and closed. If this is true, then for an arbitrary compact subset of $D$ we can take a finite cover applying the local result and be done with it. It is obvious that $E$ is open, but I am having trouble showing it is closed.
-I don't have an idea where the boundedness of $D$ comes into play...
-Many thanks in advance for your help.
-
-REPLY [4 votes]: Assume we have proved there exists $U \ni 0$ open on which $f_n \longrightarrow 0$ uniformly on compact sets. (This is not too hard. Start with $U$ small enough so that the power series for $f$ at $0$ converges on $U$. Then by assumption, for $z \in U$ we have $$ \left| f(z) \right| = \left|z\right| \left| f^\prime(0) + c_2 z + \dots \right|. $$ Since $\left|f^\prime(0)\right| < 1$, upon shrinking $U$ further, by continuity there exists $\alpha < 1$ such that $$ \left| f(z) \right| < \alpha |z|. $$ If $K \subset U$ is compact, then $\left| f_n(z) \right| \leq \alpha^n \sup_{w \in K} \left| w \right| \longrightarrow 0$.)
-Now, since $D$ is bounded and $f$ maps into $D$, $f_n(z)$ is uniformly bounded, hence by the Arzelà-Ascoli theorem for analytic functions (i.e. Montel's theorem) there is a subsequence $f_{n_k}$ that converges uniformly on compact sets to some analytic function $g$. In particular, since $f_n \longrightarrow 0$ on $U$, $g$ vanishes on $U$, hence $g = 0$ on all of $D$ by the identity theorem (this is where the connectedness of $D$ is used).
-Fix $\epsilon > 0$ and let $K$ be a compact subset of $D$. By what we just showed, there exists $N$ such that $\left| f_N(K) \right| \leq \epsilon$. Take $\epsilon$ sufficiently small so that $K^\prime \subset U$, where $K^\prime = f_N(K)$. Since $f_n \longrightarrow 0$ uniformly on $K^\prime$, take $M$ so that for all $n \geq M$, $\left| f_n(K^\prime) \right| \leq \epsilon$. But $f_n \circ f_N = f_{N + n}$, hence for all $n \geq N + M$ we conclude $$ \left| f_n(K) \right| \leq \epsilon. $$
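-The mechanism is easy to watch numerically; a crude sketch (Python; the map $f(z)=z/2+z^2/4$ is just an example of such a self-map of the unit disk, with $f(0)=0$ and $|f'(0)|=1/2$):
-
-import numpy as np
-
-f = lambda z: z/2 + z**2/4
-# boundary of a compact subset of D: the circle |z| = 0.9
-# (by the maximum principle, sup of |f_n| over the closed disk is attained there)
-zs = 0.9 * np.exp(2j * np.pi * np.linspace(0.0, 1.0, 512))
-for n in range(1, 7):
-    zs = f(zs)
-    print(n, np.abs(zs).max())   # sup norms shrink toward 0 roughly geometrically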
<|endoftext|>
-TITLE: What is a proof of this limit of this nested radical?
-QUESTION [8 upvotes]: It seems as if $$\lim_{x\to 0^+} \sqrt{x+\sqrt[3]{x+\sqrt[4]{\cdots}}}=1$$
-I am really at a loss for a proof here. This doesn't come from anywhere, but just out of curiosity. Graphing supports this result fairly well.
-
-REPLY [2 votes]: For any $2 \le n \le m$, let $\phi_{n,m}(x) = \sqrt[n]{x + \sqrt[n+1]{x + \sqrt[n+2]{x + \cdots \sqrt[m]{x}}}}$. I will interpret the expression we have as the following limit.
-$$\sqrt{x + \sqrt[3]{x + \sqrt[4]{x + \cdots }}}\;
-= \phi_{2,\infty}(x) \stackrel{def}{=}\;\lim_{m\to\infty} \phi_{2,m}(x)$$
-For any $x \in (0,1)$, we have $\lim\limits_{m\to\infty}(1-x)^m = 0$. This implies
-the existence of an $N$ so that for all $m > N$, we have
-$$(1-x)^m < x \implies 1 - x < \sqrt[m]{x} \implies \phi_{m-1,m}(x) = \sqrt[m-1]{x + \sqrt[m]{x}} > 1$$
-It is clear that for such $m$, we will have $\phi_{2,m}(x) \ge 1$.
-Recall for any $k > 1$ and $t > 0$, $\sqrt[k]{1 + t} < 1 + \frac{t}{k}$.
-Starting from $\phi_{m,m}(x) = \sqrt[m]{x} \le 1$, we have
-$$\begin{align}
-&
-\phi_{m-1,m}(x) = \sqrt[m-1]{x + \phi_{m,m}(x)}
-\le \sqrt[m-1]{x + 1} \le 1 + \frac{x}{m-1}\\
-\implies &
-\phi_{m-2,m}(x) = \sqrt[m-2]{x + \phi_{m-1,m}(x)}
-\le \sqrt[m-2]{x + 1 + \frac{x}{m-1}} \le 1 + \frac{1}{m-2}\left(1 + \frac{1}{m-1}\right)x\\
-\implies &
-\phi_{m-3,m}(x) = \sqrt[m-3]{x + \phi_{m-2,m}(x)}
-\le 1 + \frac{1}{m-3}\left(1 + \frac{1}{m-2}\left(1 + \frac{1}{m-1}\right)\right)x\\
-& \vdots\\
-\implies &
-\phi_{2,m}(x) \le 1 + \frac12\left( 1 + \frac13\left(1 + \cdots \left(1 + \frac{1}{m-1}\right)\right)\right)x \le 1 + (e-2)x
-\end{align}
-$$
-Notice that for fixed $x$, the sequence $\phi_{2,m}(x)$ is monotonically increasing in $m$. By the arguments above, this sequence is ultimately sandwiched between $1$ and $1 + (e-2)x$. As a result, $\phi_{2,\infty}(x)$ is defined for this $x$ and satisfies
-$$1 \le \phi_{2,\infty}(x) \le 1 + (e-2) x$$
-Taking $x \to 0^{+}$, we get
-$$1 \le \liminf_{x\to 0^+} \phi_{2,\infty}(x) \le \limsup_{x\to 0^+}\phi_{2,\infty}(x) \le \limsup_{x\to 0^+}(1 + (e-2)x) = 1$$
-This implies $\lim\limits_{x\to 0^+} \phi_{2,\infty}(x)$ exists and equals $1$.
<|endoftext|>
-TITLE: Proof of Hoeffding's Covariance Identity
-QUESTION [5 upvotes]: Let $X,Y$ be random variables such that $\operatorname{Cov}(X,Y)$ is well defined, let $F(x,y)$ be the joint CDF of $X,Y$ and let $F_X(x),F_Y(y)$ be the CDFs of $X,Y$ respectively. Hoeffding's covariance identity states $$\operatorname{Cov}(X,Y)=\int\limits_{-\infty}^\infty \int\limits_{-\infty}^\infty \left[F(x,y)-F_X(x)F_Y(y)\right] \, dx \, dy$$ It can easily be seen that $$[F(x,y)-F_X(x)F_Y(y)] = \mathbb{P}(X\leq x,Y\leq y)-\mathbb{P}(X\leq x) \mathbb{P}(Y\leq y)
-=\mathbb{E}[1_{\{ X\leq x\} }\cdot1_{\left\{ Y\leq y\right\} }] - \mathbb{E}[1_{\{ X\leq x\} }] \mathbb{E}\left[1_{\{ Y\leq y\} } \right] = \operatorname{Cov}\left(1_{\{ X\leq x\} }, 1_{\{ Y\leq y\} }\right)$$
-So it would suffice to prove that $$\text{Cov}\left(X,Y\right)=\int\limits _{-\infty}^{\infty}\int\limits _{-\infty}^{\infty}\text{Cov}\left(1_{\left\{ X\leq x\right\} },1_{\left\{ Y\leq y\right\} }\right) \, dx \, dy$$
-I haven't managed to prove this, but I did manage to prove that $$\operatorname{Cov}(X,Y) = \int\limits_{-\infty}^\infty \int\limits _{-\infty}^\infty \operatorname{Cov}\left(1_{\{ X\geq x\} },1_{\{ Y\geq y\} }\right) \, dx \, dy$$ I would really appreciate some help getting from the result I did manage to prove to either the original Hoeffding identity or to the equivalent identity in terms of $\operatorname{Cov}(1_{\{ X\leq x\} }, 1_{\{ Y\leq y\} })$.
-
-REPLY [5 votes]: It suffices to observe that the random variables $\mathbb 1_{\{X \le x\}}$ and $\mathbb 1_{\{X \ge x\}}$ are perfectly (negatively) correlated, except on a set of measure 0: their sum is almost surely 1 for all but countably many $x$. Since the same holds for the indicators for $Y$, it immediately follows that the covariance of $\mathbb 1_{\{X \le x\}}$ and $\mathbb 1_{\{Y \le y\}}$ will be equal to the covariance of $\mathbb 1_{\{X \ge x\}}$ and $\mathbb 1_{\{Y \ge y\}}$.
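-A Monte Carlo check of the identity for a correlated Gaussian pair (Python; empirical CDFs on a truncated grid, so expect agreement only to a couple of decimals):
-
-import numpy as np
-
-rng = np.random.default_rng(0)
-n, rho = 20000, 0.6
-X = rng.standard_normal(n)
-Y = rho * X + np.sqrt(1 - rho**2) * rng.standard_normal(n)
-grid = np.linspace(-4.0, 4.0, 61)     # truncation to [-4,4] loses a negligible tail
-h = grid[1] - grid[0]
-total = 0.0
-for x in grid:
-    mx = X <= x
-    for y in grid:
-        total += np.mean(mx & (Y <= y)) - mx.mean() * np.mean(Y <= y)
-print(total * h * h)            # Hoeffding double integral, ~ 0.6
-print(np.cov(X, Y)[0, 1])       # sample covariance, ~ 0.6 = rho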
<|endoftext|>
-TITLE: Showing that the polynomial $Y^2+X^2(X-1)^2$ is irreducible
-QUESTION [6 upvotes]: How to show that the polynomial $Y^2+X^2(X-1)^2$ is irreducible in $\mathbb R[X,Y]$?
-
-I tried to show that $\mathbb R[X,Y]$ modulo the ideal it generates is an integral domain, but I cannot find a suitable homomorphism.
-
-REPLY [9 votes]: It is helpful to think of this polynomial not as an element of $\mathbb{R}[X,Y]$, but as an element of $A[Y]$, where $A=\mathbb{R}[X]$. That is, we consider it as a polynomial only in $Y$, with polynomials in $X$ as coefficients. Now suppose we had a factorization $Y^2+X^2(X-1)^2=f(X,Y)g(X,Y)$. Then as polynomials in $Y$, the degrees of $f$ and $g$ must add to $2$, and their leading coefficients must multiply to $1$. The only units in $A$ are constants, so we may multiply $f$ and $g$ by constants to assume they are both monic. If either $f$ or $g$ has degree $0$, then it is just $1$, so we have the trivial factorization. The only other possibility is that they both have degree $1$. This means we have $f(X,Y)=Y+f_0(X)$ and $g(X,Y)=Y+g_0(X)$ for some $f_0(X),g_0(X)\in A$. So we must have $$Y^2+X^2(X-1)^2=(Y+f_0(X))(Y+g_0(X))=Y^2+(f_0(X)+g_0(X))Y+f_0(X)g_0(X).$$
-Thus $g_0(X)=-f_0(X)$ and $-f_0(X)^2=X^2(X-1)^2$. But no such $f_0(X)$ exists (for instance, the leading coefficient of the left-hand side must be negative, but the leading coefficient of the right-hand side is $1$).
<|endoftext|>
-TITLE: Diophantine equations for polynomials
-QUESTION [9 upvotes]: I know that there has been work on Diophantine equations with solutions in polynomials (rather than integers) of the Fermat and Catalan type $x(t)^n+y(t)^n=z(t)^n$; $x(t)^m-y(t)^n=1$, and these have been completely solved (for Fermat's equation with $n>2$ and in $\mathbb C[t]$ by Greenleaf, and for Catalan's equation in $\mathbb C[t]$ by Nathanson). I would like to know whether there has been similar work on solutions in $\mathbb C[x]$ or $\mathbb Z[x]$ for Pell-type equations $f(x)^2-ng(x)^2=1$, where $n$ is a given positive integer; or similarly for, say, the Erdős–Straus conjecture $4f(x)g(x)h(x)=n(f(x)g(x)+g(x)h(x)+h(x)f(x))$, where $n>1$ is a given integer; or, say, concerning Ramanujan–Nagell–Lebesgue type equations $f^2+D=Ag^n$, where $D$, $A$ are given integers and we have to find polynomials $f,g$ and a positive integer $n$. Any reference or link concerning these and other types of Diophantine equations with solutions in polynomials will be highly appreciated. Thanks in advance
-
-REPLY [2 votes]: If $n$ is constant (integer or not) then
-$f^2 - ng^2 = 1$ has no solution in nonconstant polynomials $f,g$:
-over $\bf C$ we can extract a square root $m$ of $n$,
-factor $f^2 - ng^2$ as $(f-mg)(f+mg)$, and observe that
-both factors must have degree zero because their product does.
-If $D,A$ are constants with $D \neq 0$ then there is no solution to
-$f^2 + D = Ag^n$ in nonconstant polynomials except in the trivial case $n=1$.
-This is a consequence of the
-Mason(-Stothers)
-theorem (polynomial analogue of the
-abc conjecture),
-because $f^2$, $D$, and $Ag^n$ would be relatively prime as polynomials, and would have too many repeated factors.
-(The exponent 2 case also gives
-an alternative proof of the result on $f^2-ng^2=1$.)
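-A quick computational cross-check (Python/sympy; note that sympy factors over $\mathbb Q$, which is consistent with, though formally weaker than, irreducibility over $\mathbb R$):
-
-from sympy import symbols, factor, expand
-
-X, Y = symbols('X Y')
-p = Y**2 + X**2*(X - 1)**2
-print(factor(expand(p)))   # no nontrivial factorization is found over the rationals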
-Thus $G/(H \cap K)\cong Z_4$ or $G/(H \cap K)\cong Z_2 \times Z_2$. In the first case $G$ has a unique subgroup of index $2$ containing $H\cap K$, and in the second case $G$ has exactly $3$ subgroups of index $2$ containing $H\cap K$. In either case this contradicts the assumption that $G$ has exactly two subgroups of index $2$ (both of which contain $H\cap K$).
<|endoftext|>
-TITLE: Components of the space of immersions 2-manifold into $\mathbb R^3$
-QUESTION [5 upvotes]: Let $M$ be a $2$-sphere with $g$ handles. Consider the space of maps $M\to \mathbb R^3$ which are immersions [i.e. smooth maps with nondegenerate differential at each point $x\in M$], with the compact-open topology. It is well-known that for $g=0$ this space is path-connected; what about the same question when $g>0$? Is it an open question, or is it possible to find an article with the proof?
-
-REPLY [4 votes]: The space of immersions has $2^{2g}$ components.
-Let's go back to the proof of the fact for $g=0$: Smale-Hirsch immersion theory. The form of the result I want is from here.
-Theorem: The space of immersions $\Sigma_g \to \Bbb R^3$ is homotopy equivalent to the space of bundle injections $T\Sigma_g \to \bf{3}$, the trivial bundle of rank 3 over $\Sigma_g$.
-Instead of reinventing the wheel, I'll take the answer there starting at the cited theorem and say what must be modified.
-What the author there obtains is that the space of immersions is homotopy equivalent to $\text{Maps}_*(\Sigma_g,O(3)) \times SO(3)$. Nothing about that was special to $g=0$. (Note that because we're basepointed we may as well call the mapping space $\text{Maps}_*(\Sigma_g,SO(3))$.) The issue is in his identification of that mapping space: he realizes that for $g=0$ it's $\Omega^2 SO(3)$, which is easily identified. We can't quite do this.
-1) There are many homotopy classes, depending on the choice of homomorphism $\pi_1(\Sigma_g) \to \pi_1(SO(3))$ we pick. There are $2^{2g}$ many of them. If we fix the induced map to be zero, passing to the double cover of $SO(3)$ shows that this map is null, so there are precisely that many homotopy classes.
-Phrasing this in terms of geometry: by picking a single immersion into $\Bbb R^3$, we've chosen a framing of the stable tangent bundle $T\Sigma_g \oplus \bf 1$. Any other immersion's framing differs by a map $\Sigma_g \to SO(3)$, and this mapping space has $2^{2g}$ components.
-Note that Smale-Hirsch immersion theory is for $C^\infty$ things, and I bet you can get it down to $C^2$. I don't know about $C^0$, $C^1$. I bet it's disconnected then too.
<|endoftext|>
-TITLE: Calculate limit of integral sequence
-QUESTION [5 upvotes]: Hi, I need to calculate the limit of an integral sequence:
-$$\lim_{n\rightarrow\infty}\frac{1}{n}\int_{1}^{\infty}\frac{dx}{x^2\cdot\ln{(1+\frac{x}{n})}}=\lim_{n\rightarrow\infty}\int_{1}^{\infty}\frac{dx}{x^2\cdot n\cdot\ln{(1+\frac{x}{n})}}=\int_{1}^{\infty}\frac{dx}{x^2\cdot \ln(e^x)}$$
-and it's true if I can use the monotone convergence theorem. Everything seems fine, but I am not sure about the monotonicity of $f_n$. I think it's decreasing, and according to the statement it should be otherwise. And taking negatives doesn't seem right at all.
-And another one:
-$$\lim_{n\rightarrow\infty}\int_{0}^{n}(1+\frac{x}{n})^{n+1}\cdot e^{-2x}\,dx=\int_{0}^{\infty}e^{-x}\,dx$$
-and here I am not sure about the $\infty$ in the integral.
-I am a hopeless newbie, so I will be glad for a thorough explanation.
-
-REPLY [6 votes]: If you are not familiar with dominated convergence and such things, you could use the inequality
-$$
-\frac{y}{1+y}<\ln(1+y)<y,\qquad y>0.
-$$
-With $y=x/n$ this gives $\frac{1}{x}<\frac{1}{n\ln(1+\frac{x}{n})}<\frac{1}{x}+\frac{1}{n}$, hence
-$$
-\int_1^\infty\frac{dx}{x^3}<\int_{1}^{\infty}\frac{dx}{x^2\cdot n\ln(1+\frac{x}{n})}<\int_1^\infty\frac{dx}{x^3}+\frac{1}{n}\int_1^\infty\frac{dx}{x^2},
-$$
-and the last term tends to $0$, so the limit equals $\int_1^\infty x^{-3}\,dx=\frac12$.
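-Numerically the convergence to $\int_1^\infty x^{-3}\,dx=\tfrac12$ is easy to watch (Python/scipy; a small check added for illustration):
-
-import numpy as np
-from scipy.integrate import quad
-
-for n in (1, 10, 100, 1000):
-    val, err = quad(lambda x, n=n: 1.0 / (x**2 * n * np.log1p(x / n)), 1, np.inf)
-    print(n, val)
-# the values decrease toward 0.5, the integral of 1/x^3 over [1, oo)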
<|endoftext|>
-TITLE: Is $x_{n+1}=\frac{x_n}{2}-\frac{2}{x_n}$ bounded?
-QUESTION [7 upvotes]: Consider the sequence: $x_1=3$, $x_{n+1}=\frac{x_n}{2}-\frac{2}{x_n}$. Is this sequence bounded or unbounded?
-
-Attempt
-Checking a few terms we get $x_1 = 3, x_2 = \dfrac{5}{6}, x_3 = -\dfrac{119}{60}, x_4 = \dfrac{239}{14280},\cdots$. I will try a proof by contradiction. Suppose that the sequence is bounded. Then there exists some $x_k$ such that $x_k \geq \dfrac{x_n}{2}-\dfrac{2}{x_n}$ or $x_k \leq \dfrac{x_n}{2}-\dfrac{2}{x_n}$ for all $n$.
-I seem to get stuck here.
-
-REPLY [8 votes]: The map $x \mapsto x/2 - 2/x$ is a degree $2$ map on $\Bbb P^1(\Bbb C)$ with two superattractive fixed points ($2i$ and $-2i$) where the derivative is $0$, and a repulsive fixed point at $\infty$ where the derivative is $2$.
-Dynamically, this really looks just like the map $y \mapsto y^2$, which also has two superattractive fixed points $0$ and $\infty$, and one repulsive fixed point at $1$, with the same behaviour near those points.
-Hence, you should look at $$y_n = \frac {x_n-2i}{x_n+2i}.$$ Using this change of variable, you get $$y_{n+1} = y_n^2.$$ Also, the real line is transformed into the unit circle in $\Bbb C$ ($x$ is real iff $|y|=1$).
-Now the behaviour of a point on the unit circle under the squaring map is well understood: write $y_0 = \exp(\lambda i\pi)$. If $\lambda$ is rational then the sequence is ultimately periodic, and in general the behaviour of the sequence is obtained by looking at the binary digits of $\lambda$.
-Since $y_0 = \frac{3-2i}{3+2i} = \frac{5-12i}{13}$, you need to look at $\frac 1 \pi \arctan \frac{12}{5}$. Since it is not rational, the sequence $(y_n)$ is not ultimately periodic.
-(you can also deduce this from the fact that $\Bbb Z[i]$ is a unique factorisation domain and $3\pm 2i$ are prime, so the prime factorisations of $y_n = y_0^{2^n}$ are obviously all different)
-To show it is not bounded you need to prove that the binary development of that constant has strings of $0$s or $1$s of arbitrary length.
-This also gives a geometrical interpretation of the recursion.
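-The conjugacy is easy to check numerically (Python; the comparison stays near machine precision for the first few steps):
-
-x = 3 + 0j
-y = (3 - 2j) / (3 + 2j)          # y_0
-for n in range(1, 7):
-    x = x/2 - 2/x                # the original recursion
-    y = y*y                      # the conjugated dynamics: squaring
-    print(n, abs((x - 2j)/(x + 2j) - y))   # ~1e-16: (x_n-2i)/(x_n+2i) tracks y_0^(2^n)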
-QUESTION [12 upvotes]: A topological space is a set $X$ and a collection $\Omega$ of subsets of $X$ such that:
-
-$\emptyset \in \Omega$ and $X \in \Omega$
-The union of any collection of sets in $\Omega$ is in $\Omega$
-The intersection of any finite collection of sets in $\Omega$ is again in $\Omega$
-
-My question is, is the set $\Omega$ essentially the powerset of $X$? Or is the powerset of $X$ just a special case of a suitable collection of subsets of $X$?
-
-REPLY [2 votes]: Let X = {a,b} be a set with 2 elements. There are four distinct topologies on X:
-{∅, {a,b}} (the trivial topology)
-{∅, {a}, {a,b}}
-{∅, {b}, {a,b}}
-{∅, {a}, {b}, {a,b}} (the discrete topology)
-
-(wikipedia: Finite topological space)
-The second and third of these are topologies, but are not power sets of X.
-As far as I can tell, only when every subset of X is an element of the topology is the topology identical to the powerset of X: that is the discrete topology.<|endoftext|>
-TITLE: Volume of 1/2 using hull of finite point set with diameter 1
-QUESTION [14 upvotes]: It's easy to bound a volume of a half. For example, the points $(0,0,0),(0,0,1),(0,1,0),(3,0,0)$ can do it. The problem is harder if no two points can be further than 1 apart. Bound a volume of 1/2 with a diameter $\le 1$ point set.
-With infinitely many points at distance 1/2 from the origin, a volume of $\pi/6 = 0.523599...$ can be bounded. But we want a finite point set. What is the minimal number of points?
-(A 99 point set used to be here. See Answers for a much better 82 point set)
-Here's a picture of the hull. Each vertex is numbered. Green vertices have one or more corresponding blue faces with vertices at distance 1. Each blue face has a brown number giving the opposing green vertex. Red vertices and yellow faces lack a face/vertex pairing.
-
-Some may think that Thomson problem solutions might give a better answer. The first diameter 1 Thomson solution with a volume of 1/2 is 121 points with volume .500069.
-These points will not fit in a diameter 1 sphere, but the maximal distance between points is less than 1. Similarly, a unit equilateral triangle will not fit in a diameter 1 circle.
-Is 99 points minimal for bounding a volume of 1/2 using a point set with diameter 1? Or, to phrase it as a hypothesis:
-99 Point Hypothesis
-99 points of diameter 1 in Euclidean space.
-99 points with a volume of 1/2.
-Take one off, move them around (without increasing diameter).
-You can't get a volume of 1/2 any more.
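-For anyone experimenting with candidate sets, here is a minimal Python sketch (my own, assuming numpy and scipy are available) that reports the two quantities at stake, the diameter and the hull volume:
-import numpy as np
-from itertools import combinations
-from scipy.spatial import ConvexHull
-
-def diameter_and_volume(pts):
-    # pts: m points in R^3; returns (largest pairwise distance, convex hull volume)
-    pts = np.asarray(pts, dtype=float)
-    diam = max(np.linalg.norm(p - q) for p, q in combinations(pts, 2))
-    return diam, ConvexHull(pts).volume
-
-rng = np.random.default_rng(0)
-v = rng.normal(size=(99, 3))
-v = 0.5 * v / np.linalg.norm(v, axis=1, keepdims=True)  # 99 points on the radius-1/2 sphere
-print(diameter_and_volume(v))  # diameter <= 1 automatically; the volume falls short of 1/2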
-REPLY [6 votes]: (Update)
-My current result is $82$ points:
-Consider this point set:
-pts = {
-{39331, -1787, 125739},
-{-42020, -78476, 96709},
-{97017, -83209, 30835},
-{-17033, 70737, 109597},
-{-54599, 29504, 115688},
-{-69547, 63866, 91701},
-{-84862, -62280, -80052},
-{111630, -49662, -51118},
-{110858, 44843, -58218},
-{7570, -94324, 91248},
-{115828, -36578, 50910},
-{-103422, 33617, 73525},
-{13903, 130088, -24865},
-{-48488, -30540, -119577},
-{13546, 105208, 78574},
-{92754, -90941, -22055},
-{-87842, -12726, -97961},
-{17890, -95311, -90222},
-{-32617, 127358, -17688},
-{-83770, -100939, 6478},
-{-67513, -103415, -46172},
-{-15435, 70574, -111233},
-{42948, 122369, 28253},
-{82827, -31757, -98975},
-{-8841, 14824, 130515},
-{-31918, -116156, 52485},
-{-124638, 33189, 26548},
-{46151, -58101, 108697},
-{-107711, 76927, -3256},
-{8590, -131155, -3832},
-{-2349, -45047, 123671},
-{-67052, 113066, 17470},
-{-49845, -26471, 118738},
-{45038, -56580, -110986},
-{124167, -45279, 903},
-{60780, -115738, 12319},
-{-109374, -68092, -27125},
-{-40207, -124921, 2722},
-{74952, 40665, 100449},
-{88162, -58830, 78010},
-{60461, 114907, -29946},
-{110136, -3355, -73936},
-{70896, 79060, -79787},
-{56554, -97875, 67358},
-{72446, -84584, -71147},
-{30586, 57713, 114256},
-{-15936, -120088, -52161},
-{-480, -46761, -124154},
-{-72908, 103917, -38653},
-{-101424, 28721, -80454},
-{-45115, 103290, 68859},
-{41881, -117921, -41667},
-{-74575, -93889, 53049},
-{108114, 53390, 54482},
-{15266, -123265, 42434},
-{40723, -3854, -126221},
-{90334, 94409, 22158},
-{96396, 85431, -32579},
-{-63349, 75478, -88497},
-{122169, 52183, -1811},
-{108487, 5280, 74810},
-{-88785, -956, 96779},
-{-7851, 14221, -131625},
-{64857, 88850, 73124},
-{23713, 102177, -81511},
-{129972, 1413, -27143},
-{-119337, -14421, 52312},
-{-88103, -51438, 82718},
-{-10887, 127563, 33645},
-{33805, 54367, -116181},
-{-102814, 64657, -52366},
-{-126644, 25744, -26822},
-{-25275, 110536, -68979},
-{-112785, -59627, 30034},
-{-129858, -19908, 289},
-{-36740, -84005, -95750},
-{78058, 29755, -103069},
-{-118373, -22382, -53597},
-{-55526, 28946, -116699},
-{-94065, 79056, 48080},
-{80742, -15619, 102763},
-{129505, 8123, 26059}
-}
-
-Then (Mathematica code)
-Volume[ConvexHullMesh[pts]]
-
-is $\approx 9.00744\times10^{15}$.
-And Mathematica sketch:
-ConvexHullMesh[pts]
-
-
-Another picture. If all vertices of a face are at distance one from another vertex, the face is colored blue.
-
-Since all point coordinates are integers, one can write it directly (with arbitrarily small computational errors):
-$$Diameter = \sqrt{68\;719\;348\;253} \approx 262\;143.\;754\;938;$$
-$$Volume = \dfrac{54\;044\;635\;971\;533\;362}{6} \approx 9\;007\;439\;328\;588\;893.\;666\;667.$$
-If we multiply all coordinates by $\dfrac{1}{2^{18}}$, then we'll get:
-$$Diameter = \frac{\sqrt{68\;719\;348\;253}}{262\;144} \approx 0.999\;999\;065;$$
-$$Volume = \dfrac{54\;044\;635\;971\;533\;362}{2^{54}\times 6} \approx 0.\;500\;013\;326.$$
-
-Note: by adding any point (with real coordinates) sufficiently close to (the center of) any face, one gets sets of $83, 84, ...$ points with the described property.<|endoftext|>
-TITLE: Name for a ring that also has composition - aka function application?
-QUESTION [22 upvotes]: What is the following type of ring? Does it have a name?
-Suppose $(R,\cdot,+,0,1)$ is a ring with another binary operation, $\circ$ with the properties:
-$$(a\circ b)\circ c=a\circ(b\circ c)\\
-(a+b)\circ c=(a\circ c)+(b\circ c)\\
-(a\cdot b)\circ c = (a\circ c)\cdot (b\circ c)\\
-0\circ c = 0; 1\circ c=1$$
-Many of these rings have a left-right $\circ$-identity.
-Foundational Example 1: Given a ring $S$, the set $S^S$ of all functions $S\to S$ with point-wise addition and multiplication and composition for $\circ$:
-$$(f+g)(s)=f(s)+g(s)\\(fg)(s)=f(s)g(s)\\(f\circ g)(s)=f(g(s))$$
-This has a left-right $\circ$ identity, $I(s)=s$.
-Every algebra $R$ of this sort with a right $\circ$-identity is isomorphic to a sub-algebra of some $S^S$, specifically, with $S=R$. This is because if $f_a=c\mapsto a\circ c$ then if $f_a=f_b$ then $a=f_a(X)=f_b(X)=b$ where $X$ is the right identity.
-Foundational example 2: Given a commutative ring $S$, $S[x]$ with polynomial composition.
-This case has a left-right $\circ$ identity, $x$.
-Continuous functions:
-Given a topological ring, the set of continuous functions $S\to S$ satisfies this condition, with the identity function a left-right $\circ$-identity.
-Entire functions: The ring of entire functions $\mathbb C\to\mathbb C$ is an integral domain, containing (an isomorphic image of) $\mathbb C[x]$, with $\circ$ being standard function composition.
-Less well-known example: The set of all functions $f:\mathbb Z\to\mathbb Z$ with the condition: $$f(m)\equiv f(n)\pmod{m-n}$$ for all $m,n\in\mathbb Z$.
-This ring is an odd integral domain that contains an isomorphic image of $\mathbb Z[x]$, since every integer polynomial function is of this sort, but there are more elements that are non-polynomials (and some non-integer polynomial functions, too.) It has $I(n)=n$ as a left-right $\circ$ identity.
-The condition can be seen as requiring some "smoothness" of the functions in all the $p$-adic metrics. They can be seen as matching a certain strong uniform continuity/Lipschitz continuity in all the $p$-adics. In particular, for any $p$, we have that $f$ can be extended to the $p$-adic numbers uniquely to make it continuous, and $\|f(a)-f(b)\|_p\leq \|a-b\|_p$ for all $p$-adic integers $a,b$. Indeed, it is essentially a "bounded Lipschitz" criterion - this doesn't work in general, but does work in non-Archimedean valuation rings.
-Related: If $(S,\|\|_S)$ is a non-Archimedean valuation ring with bounded valuation $\|s\|_S\leq 1$ (think of the $p$-adic integers, or the rational integers under the $p$-adic norm). Then we can take the ring $R$ of functions $f:S\to S$ with the
-property:
-$$\|f(r)-f(s)\|_S\leq \|r-s\|_S$$
-This is a ring of the above sort.
-If we take $S=(\mathbb Z,\|\|_p)$, the integers with the $p$-adic norm, this set $R_p$ is just all functions $f:\mathbb Z\to\mathbb Z$ such that $p^k\mid f(n+p^k)-f(n)$ for all $n\in\mathbb Z$ and $k>0$.
-Then the original ring at the beginning of this section can be seen as $\bigcap_p R_p$. The intersection of sub-algebras is always a sub-algebra.
-Near misses - without multiplicative unit: The ring of meromorphic functions $\mathbb C\to\mathbb C$ is almost of this form, but when $c(x)=C$ is constant and $h(x)$ is not defined at $C$, we don't get a meromorphic function. However, if we relax the condition, and not require $R$ to have a multiplicative unit, we can take the ring of meromorphic functions $f$ with $f(0)=0$. Then the set of such functions is such a ring.
-A related near miss: The sub-algebra of $S[[x]]$ of formal power series over a ring $S$, with no constant term. Again, no multiplicative unit, but everything else satisfied.
-Properties:
-For any $f\in R$, $R_{f}=\{a\circ f\mid a\in R\}$ is a sub-algebra. The condition that $0\circ f=0,1\circ f=1$ shows that $0,1\in R_f$. Indeed, $R_f$ is a sub-algebra, because $(a\circ f)\circ (b\circ f)=(a\circ f\circ b)\circ f$. More generally, given $g\in R$ and $a\in R_f$ then $g\circ a\in R_f$. So $R_f$ is sort of like a $\circ$-right-ideal.
-For example, the sub-algebra of even entire functions is $R_{x^2}$.
-The subring $R_0=\{a\circ 0\mid a\in R\}$ has the property that if $r\in R_0$ then $r\circ f = (a\circ 0)\circ f = a\circ(0\circ f)=a\circ 0=r$ for any $f$.
-Note that $R_0\subseteq R_f$ for any $f$. Indeed, if $r\in R_0$ then $r=a\circ 0 = a\circ(0\circ f)=(a\circ 0)\circ f$. Now, if $f\in R_0$, we get $R_0=R_f$ because $a=a\circ f =a\circ(f\circ 0) = (a\circ f)\circ 0$.
-On the other hand, if $a=a\circ f$ for all $f$, then $a=a\circ 0$ so $a\in R_0$.
-Basically, $R_0$ is really "all" of the constant functions. Also, for $f\in R$ and $r\in R_0$, $f\circ r\in R_0$.
-
-REPLY [5 votes]: This appears to be very similar to a composition ring, except that the axioms on Wikipedia don't include the axioms $0\circ c=0$ and $1\circ c=1$ (and indeed, don't require a unital ring, so there may not be a $1$), but require a commutative ring, which you don't require.
-However, $0\circ c=0$ follows from $0\circ c = (0+0)\circ c = (0\circ c) + (0\circ c)$, so at least that axiom is superfluous.
-For $1\circ c$ this doesn't work, since in a ring you don't need to have multiplicative cancellation. So this is a true extra condition on a composition ring; indeed, the linked Wikipedia page contains an example that explicitly contradicts this axiom (namely, $f\circ g=0$ for all $f,g\in R$).<|endoftext|>
-TITLE: If $M_*$ and $N_*$ are graded modules over the *graded* ring $R_*$, what is the definition of $M_* \otimes_{R_*} N_*$?
-QUESTION [9 upvotes]: Quick question (hopefully): What is the correct definition of a tensor product of two graded $R_*$-modules and/or graded $R_*$-algebras $M_*$ and $N_*$ over the graded ring $R_*$?
-$M_* \otimes_{R_*} N_* = ?$
-If R is not graded I know how to do this, but when $R_*$ is graded the usual construction doesn't work ($N_i$ isn't an $R_*$-module).
-The motivation is that I want to understand what happens when the quasicoherent sheaf associated to a graded module is pulled back along $\pi: $ Proj($R_*$) $\to$ Proj($S_*$), provided this map is defined by some map of graded rings $S_* \to R_*$. I am guessing that a correct algebraic definition for this construction will show that $\pi^{*}(\widetilde{M_*}) = \widetilde{M_* \otimes_{S_*} R_*}$.
-
-REPLY [6 votes]: Just a comment -- an indirect but intuitive way to remember this is the following:
-First of all, as a module $M_* \otimes_{R_*} N_*$ is just the ordinary tensor product of $R_*$-modules. This is the only reasonable construction (we can always apply the operation of "forgetting the grading", which reduces the construction to this ordinary tensor product).
-So the module structure is unique. That leaves the grading: observe that $M_* \otimes_{R_*} N_*$ is generated by homogeneous elements of the form $m \otimes n$. Clearly, the degree of such an element is $\deg(m) + \deg(n)$.
-Moreover, the defining relations of $R_*$-bilinearity respect this rule -- multiplying by scalars raises the degree in the `intuitive' way -- which means it's well-defined to grade elements of $M_* \otimes_{R_*} N_*$ by (a) finding any way to represent them as sums of simple tensors, then (b) using the above rule.
-Following this reasoning a few more steps leads to the nice computation done by Hanno.<|endoftext|>
-TITLE: Is co-cohomology the same as homology?
-QUESTION [8 upvotes]: Suppose I have a chain complex with chain groups $C_n$. Then one can obtain the homology groups of this complex. Now if I choose any abelian group $G$ and I consider the cochain group $C_n^*=Hom(C_n,G)$ then I can obtain the cohomology groups. Now the question is: If I form the cocohomology group by considering $C_n^{**}=Hom(Hom(C_n,G),G)$ and defining the cocoboundary maps in the obvious ways, will I obtain the homology groups again?
-
-REPLY [5 votes]: Not in general.
-Suppose that all of the $C_n = \oplus_{i = 0}^{\infty} \mathbb{k}$, with differentials $0$, and let $G = \mathbb{k}$. Then $Hom(C_n, \mathbb{k}) = \Pi_{i = 0}^{\infty} \mathbb{k}$. Taking the $\mathbb{k}$ dual again gives something containing as a subspace $\Pi_{i = 0}^{\infty} \mathbb{k}$, with zero as the differentials. The cohomology groups are different, as $(\Pi_{i = 0}^{\infty} \mathbb{k})^* \not = \oplus_{i = 0}^{\infty} \mathbb{k}$
-If you have torsion in your chain groups: for example $\mathbb{Z}/2$ as some component of some $C_n$, this data will disappear when you take the double dual. (You can concoct an example again with zero differentials.)
-I guess one situation in which you can say that they will be the same is when you are working in the category of finite dimensional vector spaces.
-In the more common algebraic topology situation (for example cellular homology), $C_*$ is a bounded complex of finitely generated free $R$ modules, where $R$ is a PID. In this case one can say something by repeated applications of the universal coefficients theorem and computational facts about exts - the formula you'll get for ${}_iH$ (the ith cocohomology) will be some messy combination of direct sums and compositions of Exts and Homs, involving the groups $H^{i}, H^{i-1}, H_i, H_{i-2}$: https://en.wikipedia.org/wiki/Universal_coefficient_theorem
-(Maybe somebody with more sophistication than me can interpret this in terms of composition of the left derived functors of $Hom(\_,G)$?)
-Maybe you can play around with that to get some conditions under which the cocohomology agrees.<|endoftext|>
-TITLE: How to draw a Mandelbrot Set with the connecting filaments visible?
-QUESTION [6 upvotes]: The M-Set is connected. But the M-Set viewers I’ve found create cool pictures that don’t really show the connecting filaments.
-This mini-Mandel beetle should be connected to a larger mini-Mandel by a black filament going into its “butt crack”, but you can’t see it here:
-
-Monochrome pictures that show just the M-Set itself (not the colorful divergence contours) end up showing seemingly disconnected pieces where the filaments are still so thin they disappear between samples:
-
-Denser sampling helps, but it’s very expensive, and pictures end up showing a lot of grey “fuzz” where the filaments become sub-pixel thin, so we still can't see how the mini-Mandels are connected.
-
-I thought of using Mathematica to draw contour plots of $|z_n|==2$.
For any given zoom level, there should be some high value of n for which the contour would be visually indistinguishable from the $z_\infty$ contour, right? I posted questions on the Mathematica SE site to get help on this approach, but using ContourPlot[] as a “zoom” function is tricky, though I haven't given up on this approach. My MMa-SE post on this approach is HERE
-
-I also thought of calculating huge numbers of Misiurewicz Points which lie on the boundary of the Mandelbrot Set. Surely enough of them will make a picture, right? But this plot of 17,723 points was as far as this clever approach went before numerically solving $2^{16}$-order irreducible polynomials proved slightly impractical:
-
-(If you like that picture, the MMa code to make it is on my question HERE)
-So, does anyone have any other ideas for showing the filaments? Or fixes to my various failed ideas?
-
-REPLY [5 votes]: You want to colour the complement of the Mandelbrot set using the exterior distance estimate. For each pixel you calculate the running derivative (w.r.t. $c$) as well as the $z$ iterate, then at the end you combine them to get a distance estimate. Comparing this to the pixel spacing allows you to colour pixels close to the set black and pixels far from the set white. In practice I use $\tanh(d)$ (where $d$ is relative to pixel spacing) to give a smoother transition between white and black.
-
-The pseudo-code, as runnable Python (use a larger escape radius for finer appearance):
-import math
-
-def shade(c, max_iter=1000, escape_radius=1000.0, pixel_spacing=1e-3):
-    z, dz = 0j, 0j
-    for _ in range(max_iter):
-        dz = 2 * z * dz + 1          # running derivative w.r.t. c
-        z = z * z + c
-        if abs(z) > escape_radius:   # escaped
-            de = 2 * abs(z) * math.log(abs(z)) / abs(dz)  # exterior distance estimate
-            return math.tanh(de / pixel_spacing)          # grey level: 0 near the set, 1 far away
-    return 0.0                       # iteration limit reached: plot black
-
-Your second image will look something more like this:<|endoftext|>
-TITLE: What does this double sided arrow $\longleftrightarrow$ mean?
-QUESTION [19 upvotes]: What is $\longleftrightarrow$ used for in mathematics? I know about $\iff$ being used for "If and only if". Are they the same thing? I was watching a YouTube video that said:
-$$\sum^{\infty}_{n=1} {1\over n^x} \longleftrightarrow \int^{\infty}_{1} {1\over t^x} dt$$
-The teacher mentions convergence/divergence, but I was confused when the notation came up.
-
-REPLY [8 votes]: I understand it, in this context, as a sign corresponding to a "loose equivalence" between the convergence of both terms, under specific conditions that are not fully mentioned. The presenter writes that a sum diverges/converges if the corresponding integral diverges/converges. I would translate it as: "LHS property is (somehow) strongly related to the RHS one".
-It is not an "if and only if"; indeed, that is not true in general in this case.
-The $\leftrightarrow$ symbol appears after the Maclaurin–Cauchy integral test for convergence (the so-called Cauchy integral theorem is quite different). The standard test works under the following conditions:
-
-$f$ is continuous, defined on $[n_0, +\infty [$ for some integer $n_0$,
-$f$ is monotone and decreasing.
-
-Then the infinite series $\sum_{n=n_0}^\infty f(n)$
-converges to a finite limit if and only if ($\Leftrightarrow$) the improper integral
-$\int_{n_0}^\infty f(x)\,dx$ is finite. And if the integral diverges, then the series diverges as well. Here, the test works for the $p$-series, as $t \mapsto \frac{1}{t^x}$ is continuous and decreasing for $x >0$, and the convergence of the series depends on whether $x> 1$ or not.
-As mentioned in comments, many mathematical symbols have several interpretations (e.g.
bijection or logical biconditional).<|endoftext|>
-TITLE: If $H$ and $\frac GH$ are connected so is $G$
-QUESTION [8 upvotes]: In this proposition:
-
-Where in the proof is the closedness of the normal subgroup $H$ used?
-
-REPLY [8 votes]: It's not used or required. But it's common to only talk about $G/H$ when $H$ is a closed subgroup, because otherwise $G/H$ will not be Hausdorff. In particular, the book in which that proof appears defines topological groups to be Hausdorff (see page 84).<|endoftext|>
-TITLE: What is the irrational number with the least/lowest irrationality?
-QUESTION [5 upvotes]: The golden ratio has been called "the most irrational number", based on a particular method involving continued fractions. My question is: if there is a number with the greatest irrationality, then which irrational number has the lowest irrationality among all irrational numbers?
-
-REPLY [4 votes]: Technically speaking, the measure of irrationality of a real number $\alpha$ is the following specialized notion: it is the infimum of all real $\mu$ for which there is a positive constant $A$ such that for all rational $\frac pq\ne \alpha$ with $q>0$ one has
-$$|\alpha - \frac pq|>\frac {A}{q^{\mu}}$$ This inequality indicates how "far" from the real $\alpha$ every rational "close" to $\alpha$ must stay; in other words, the rationals "near" $\alpha$ determine its "distance" from $\alpha$, or rather, how well rationals cannot approach $\alpha$.
-See as an example the "striking inequality" (Baker) discovered by Mahler in 1953 and since improved,
-$$|\pi-\frac pq|>\frac{1}{q^{42}}$$ valid for every rational $\frac pq;\space q> 1$
-There is a whole epic about this topic of measure of irrationality, beginning with Dirichlet and his approximation theorem (1842), Liouville and his discovery of the first known transcendental number (1844), passing through Thue (1909), Siegel (1929), Dyson (1947), and crowned by Klaus Friedrich Roth (1955) and his deep result, which earned him the Fields Medal.
-Theorem (Roth).- For every algebraic irrational $\alpha$ and all $\epsilon > 0$ the inequality
-$$ |\alpha- \frac pq |< \frac{1}{q^{2+\epsilon}}$$ has only a finite number of solutions in irreducible rationals $\frac pq$, i.e., for all $\epsilon>0$ there is a positive constant $C(\alpha,\epsilon)$ such that for all rational $\frac pq$; $q>0$ one has
-$$|\alpha- \frac pq|>\frac {C(\alpha, \epsilon)}{q^{2+\epsilon}}$$
-"The achievement is one that speaks for itself: it closes a chapter, and a new chapter is now opened. Roth's theorem settles a question which is both of a fundamental nature and of extreme difficulty. It will stand as a landmark in mathematics for as long as mathematics is cultivated" (Harold Davenport, in his presentation of Roth for the Fields Medal at the International Congress in Edinburgh, 1958).
-With Liouville, the measure of irrationality of a real algebraic $\alpha$ was bounded by its degree $n$, and this bound was successively decreased (with the discoveries of the above-mentioned authors) till the optimal value $2$ established by Roth.<|endoftext|>
-TITLE: Characterisation of convergence of bounded sequences via ultra-filters
-QUESTION [5 upvotes]: Let $\{a_n\}_{n\in\mathbb N}$ be a bounded sequence of real or complex numbers and $\mathscr F\subset\mathscr P(\mathbb N)$ be a non-principal ultra-filter.
Then $a=\lim_{\mathscr F}a_n$ is well-defined and corresponds to a subsequential limit of $\{a_n\}_{n\in\mathbb N}$, i.e., there exists an infinite set
-$$
-A=\{i_1<i_2<i_3<\cdots\}\subset\mathbb N
-$$
-such that $a_{i_k}\to a$ as $k\to\infty$.<|endoftext|>
-TITLE: de Bruijn sequence in which order of subsequences doesn't matter
-QUESTION [10 upvotes]: A de Bruijn sequence of alphabet size $k$ will contain every single subsequence of length $n$ exactly once within a single cycle. Is there another similar type of sequence in which the order of the subsequence elements is not considered?
-For instance, consider this de Bruijn sequence of alphabet size 3 and subsequence length 3:
-$0\ 0\ 0\ 1\ 0\ 0\ 2\ 0\ 1\ 1\ 0\ 1\ 2\ 0\ 2\ 1\ 0\ 2\ 2\ 1\ 1\ 1\ 2\ 1\ 2\ 2\ 2\ (0\ 0)$
-In the case above, certain subsequences will have the same elements but in different order (e.g. $0\ 0\ 1$, $0\ 1\ 0$, $1\ 0\ 0$).
-For the case above, I would like to find a sequence which would list all subsequences below exactly once before looping:
-$0\ 0\ 0$
-$0\ 0\ 1$
-$0\ 0\ 2$
-$0\ 1\ 1$
-$0\ 1\ 2$
-$0\ 2\ 2$
-$1\ 1\ 1$
-$1\ 1\ 2$
-$1\ 2\ 2$
-$2\ 2\ 2$
-Does such a sequence exist?
-
-REPLY [2 votes]: When the size of the alphabet ($k$) equals 1 and the length of the subsequences ($n$) is 1 there is a valid cycle; for higher $n$ the additional repetitions from overlap invalidate the cycle (00"0" for instance has its two 0's and then the loop adds a second instance of two 0's; in other words, it repeats). When $n$ is 0 you must have a valid cycle for any alphabet, but this is trivial. When $n=2$ there is a valid cycle for $k=3,5$ (001122"0", 002113224330441"0"). I have not found valid cycles for other $k$ when $n=2$. I can confirm that if they exist they are of odd $k$.
-When $k$ is even each digit must touch an odd number of other digits, which is impossible since it must be a cycle, so each digit must touch 2 other digits, one on each side. There is the exception of the digit that touches itself, but those form pairs and do not solve the problem. I do not believe this holds for $n$ other than 2, but it may, or it may hold for all even $n$.
-While standard de Bruijn cycles are very simple to compute (for how complicated they are), these seem very difficult, though that may just be because I have not yet found a method of doing so.<|endoftext|>
-TITLE: Is $\mathbb{Z}[\sqrt{15}]$ a UFD?
-QUESTION [13 upvotes]: Let $R=\mathbb{Z}[\sqrt{15}]=\{a+b\sqrt{15}:a,b\in\mathbb{Z}\}$.
-
-
-How do I show that $(3,\sqrt{15})$ is a maximal ideal but not a principal ideal?
-How do I show that $(3,\sqrt{15})^2$ is a principal ideal?
-How do I show that $R$ is (not) a UFD?
-
-
-What I have done:
-
-If $(3,\sqrt{15})$ is a maximal ideal, then I must show that $R/(3,\sqrt{15})$ is a field. I thought that this holds: $R/(3,\sqrt{15})=\mathbb{Z}/3\mathbb{Z}$. Is this correct? How do I proceed from here?
-I know that $(3,\sqrt{15})^2=(9,3\sqrt{15},15)$. How can I use this?
-I'm afraid that I don't know where to start with this one. Maybe one of the statements above can help?
-
-Thanks for taking the time!
-
-REPLY [7 votes]: $$\mathbb Z[\sqrt{15}]\simeq\mathbb Z[X]/(X^2-15),$$
-so
-$$\mathbb Z[\sqrt{15}]/(3,\sqrt{15})\simeq\frac{\mathbb Z[X]/(X^2-15)}{(3,X,X^2-15)/(X^2-15)}\simeq\mathbb Z[X]/(3,X)\simeq\mathbb Z/3\mathbb Z.$$ Thus $(3,\sqrt{15})$ is a maximal ideal.<|endoftext|>
-TITLE: What are the "numerator" and "denominator" of binomial coefficients called?
-QUESTION [15 upvotes]: Do the numbers $n$ and $k$ in the binomial coefficient $\binom nk$ have a name?
-For the fraction $\frac nk$ we would use numerator and denominator.
But I have not seen any terminology for binomial coefficients used anywhere.
-Are some names for these numbers occasionally used?
-I would expect that situations where you are talking with somebody about some result involving binomial coefficients and you need to refer to one of these two numbers arise quite commonly. For example, when describing Vandermonde's identity
-$$\binom{m+n}r=\sum\limits_{k=0}^r \binom mk \binom n{r-k}$$
-you could say something like: "Notice that in each summand the sum of the numerators of the binomial coefficients is the same as on the L.H.S. The same is true for the denominators. In the sum, one of the denominators is increasing, while the other one is decreasing, so that their sum remains constant."
-
-REPLY [2 votes]: The terms "upper index" and "lower index," while serviceable, are dependent on the particular notational convention $\binom nk$. For example, in the admittedly unfortunate notation ${}_nC_k$ which appears in some textbooks, "upper index" and "lower index" don't make sense. The primary objection to these terms is that they refer to symbolic (or syntactic) properties, rather than semantic properties.
-Instead of "numerator" and "denominator", we can take "dividend" and "divisor" as an excellent precedent. For binomial coefficients, the analogous terms are "selectend" and "selector" (which are obviously better than "choosend" and "choosor"). To take it for a spin, let's describe Vandermonde's identity:
-"Notice that in each summand, the sum of the selectends is the same as on the L.H.S. The same is true for the selectors. In the sum, one of the selectors is increasing, while the other one is decreasing, so that their sum remains constant."<|endoftext|>
-TITLE: What is the significance of “Homomorphism”?
-QUESTION [5 upvotes]: Certainly a homomorphism is a prerequisite to establish an “isomorphism” (bijection), but what does a homomorphism tell us on its own when it is established between two sets?
-A homomorphism relates two sets as it is defined. But does it tell us anything else? Or is it a tool for relating two sets only?
-It would be nice to have an example where homomorphism plays a big role besides being a condition for isomorphism.
-
-REPLY [3 votes]: You are given two sets $A$ and $B$, both provided with a binary operation $*\>$. This means that in $A$ as well as in $B$ for certain triples $x$, $y$, $z$ it is true that $z=x*y\>$; e.g., $13=5+8$, or $91=7\cdot 13$. A map $\phi:\>A\to B$ is a homomorphism if it preserves such "incidences":
-$$z=x*y\quad\Longrightarrow\quad \phi(z)=\phi(x)*\phi(y)\ .$$<|endoftext|>
-TITLE: Minimal cyclotomic field containing a given quadratic field?
-QUESTION [6 upvotes]: There was an exercise labeled difficile (English: difficult) in the material without solution:
-
-Suppose $d\in\mathbb Z\backslash\{0,1\}$ without square factors, and $n$ is the smallest natural number such that $\sqrt d\in\mathbb Q(\zeta_n)$, where $\zeta_n=\exp(2i\pi/n)$. Show that $n=\lvert d\rvert$ if $d\equiv1\pmod4$ and $n=4\lvert d\rvert$ if $d\not\equiv1\pmod4$.
-
-It's easier to show that $\sqrt d\in\mathbb Q(\zeta_n)$, although I haven't worked out every epsilon and delta: first we can factor $d$ as a product of a unit and prime numbers. Note that a quadratic Gauss sum $g(1,p)=\sum_{m=0}^{p-1}\zeta_p^{m^2}=\sqrt{(-1)^{(p-1)/2}p}\in\mathbb Q(\zeta_p)$, and that $\sqrt2\in\mathbb Q(\zeta_8)$. From this we can deduce that $\sqrt d\in\mathbb Q(\zeta_n)$, where $n=\lvert d\rvert$ if $d\equiv1\pmod4$ or $4\lvert d\rvert$ otherwise.
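-(The Gauss-sum step is easy to sanity-check numerically; here is a small Python sketch of my own, with the convention $\zeta_p=\exp(2i\pi/p)$ used above:)
-import cmath
-
-def gauss_sum(p):
-    # g(1, p) = sum over m of zeta_p^(m^2), zeta_p = exp(2*pi*i/p), p an odd prime
-    zeta = cmath.exp(2j * cmath.pi / p)
-    return sum(zeta ** (m * m % p) for m in range(p))
-
-for p in (5, 7, 11, 13):
-    # should agree with sqrt((-1)^((p-1)/2) * p) up to rounding error
-    print(p, gauss_sum(p), cmath.sqrt((-1) ** ((p - 1) // 2) * p))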
-I have no idea how to show that $n$ is minimal. I hope we'll have some proof without algebraic number theory, which is all Greek to me.
-Any help is welcome. Thanks!
-
-REPLY [5 votes]: As you've surmised, one can simply count all index-$ 2 $ subgroups of the Galois group, which we know how to compute using the Chinese remainder theorem. We have the following general result: let $ n = 2^{r_0} \prod_{i=1}^k p_i^{r_i} $ be the prime factorization of $ n $, with $k$ distinct odd primes $p_i$. We have the following for $ r_0 = 0, 1 $:
-$$ \textrm{Gal}(\mathbf Q(\zeta_n) / \mathbf Q) \cong \prod_{i=1}^k C_{p_i^{r_i - 1}(p_i - 1)} $$
-and the following for $ r_0 \geq 2 $:
-$$ \textrm{Gal}(\mathbf Q(\zeta_n) / \mathbf Q) \cong C_2 \times C_{2^{r_0 - 2}} \times \prod_{i=1}^k C_{p_i^{r_i - 1}(p_i - 1)} $$
-In the former case, we have $ 2^k - 1 $ surjective homomorphisms to $ C_2 $, which correspond to the obvious quadratic subfields generated by square roots of the square-free products of the (signed according to Gaussian period theory) odd primes dividing $ n $. In the latter case, we have $ 2^{k+2} - 1 $ surjective homomorphisms to $ C_2 $ if $ r_0 > 2 $, and $ 2^{k+1} - 1 $ if $ r_0 = 2 $, which correspond to the following quadratic subfields $ \mathbf Q(\sqrt{d}) $:
-
-$ d = \pm \prod p_i $ for all odd primes dividing $ n $: $ 2^{k+1} - 1 $ quadratic subfields in total.
-(For $ r_0 > 2 $) $ d = \pm 2 \prod p_i $ for all odd primes $ p_i $ dividing $ n $, $ 2^{k+1} $ quadratic subfields in total.
-
-where the primes $ p_i $ are again all signed according to Gaussian periods. (All of this can be summarized as "the only quadratic subfields are the obvious ones".) From all of this, we have completely classified the quadratic subfields of a cyclotomic field, and we are ready to attack the problem. Let $ \mathbf Q(\zeta_n) $ be a cyclotomic field containing $ \sqrt{d} $, where $ d $ is square-free. $ n $ must certainly be divisible by every prime factor of $ d $ by our above analysis, and thus must be divisible by $ d $. This means that $ \mathbf Q(\zeta_d) \subset \mathbf Q(\zeta_n) $. If $ d $ is $ 1 $ modulo $ 4 $, then primes that are $ 3 $ modulo $ 4 $ come in pairs, therefore the negative signs in the square roots vanish when we take a product, and thus $ \sqrt{d} \in \mathbf Q(\zeta_d) $, which shows that this is the minimal cyclotomic field containing $ \sqrt{d} $ in this case.
-If $ d $ is $ 2 $ modulo $ 4 $, then our above analysis shows that the multiplicity of $ 2 $ in $ n $ must be at least $ 3 $, therefore $\mathbf Q(\zeta_{4d}) \subset \mathbf Q(\zeta_n) $ (note that $ \textrm{lcm}(8, d) = 4d $!) On the other hand, it is easily seen that $ \mathbf Q(\zeta_{4d}) $ contains $ \sqrt{d} $, so it is the minimal such cyclotomic field.
-Finally, if $ d $ is $ 3 $ modulo $ 4 $, then $ \sqrt{-d} \in \mathbf Q(\zeta_d) \subset \mathbf Q(\zeta_n) $, and hence $ \sqrt{-1} = \zeta_4 \in \mathbf Q(\zeta_n) $. From our above classification, we know that this implies $ r_0 \geq 2 $, so that $ n $ is divisible by $ 4 $. Once again we see that $ \mathbf Q(\zeta_{4d}) \subset \mathbf Q(\zeta_n) $, and clearly $ \sqrt{d} = \zeta_4 \sqrt{-d} \in \mathbf Q(\zeta_{4d}) $, concluding the proof.
-This proof can be significantly shortened if one uses ramification theory for the above analysis instead of a direct computation using the Galois groups. Nevertheless, the above proof is purely Galois theoretic.<|endoftext|>
-TITLE: Is the pairing induced by the wedge product and integration nondegenerate on de Rham forms?
-QUESTION [6 upvotes]: Let $M$ be a compact, oriented, smooth $n$-manifold and let $\Omega^*_{\mathrm{dR}}(M)$ be the commutative differential graded algebra of de Rham forms on $M$. We can define a pairing:
-\begin{align}
-\langle -,- \rangle : \Omega^k_{\mathrm{dR}}(M) \otimes \Omega^{n-k}_{\mathrm{dR}}(M) & \to \mathbb{R} \\
-\alpha \otimes \beta & \mapsto \int_M \alpha \wedge \beta
-\end{align}
-Question. Is this pairing non-degenerate? In other words, is the map $\alpha \mapsto (\beta \mapsto \langle \alpha, \beta \rangle)$ an isomorphism $\Omega^k_{\mathrm{dR}}(M) \to \operatorname{Hom}_{\mathbb{R}}(\Omega^{n-k}_{\mathrm{dR}}(M), \mathbb{R})$?
-This is true on the level of cohomology, a result known as Poincaré duality. Thus, given a closed $k$-form $\alpha$ which is not a coboundary, there exists a closed $(n-k)$-form $\beta$ with $\int_M \alpha \wedge \beta \neq 0$ (the cohomology groups of a compact manifold are finite dimensional so this is an equivalent characterization of nondegeneracy).
-But I haven't been able to find anything on whether this is true on the level of de Rham forms directly; in fact I rather expect it to be false.
-
-REPLY [2 votes]: Actually, the answer is almost yes on the level of forms themselves. You've provided an inner product on $\Omega^k(M)$, and once you pass to its Hilbert space completion (the so-called space of $L^2$ forms) it is true, by the Riesz representation theorem, that any continuous linear functional $\Omega^k_{L^2}(M) \to \Bbb R$ is uniquely represented by $\alpha \mapsto \langle \alpha, \beta \rangle$; that is, it's uniquely represented by integration against $*\beta$; that is, it's uniquely represented by integration against an $(n-k)$-form. (An $L^2$ form, to be more careful.) To be more precise yet, the map $\Omega^k_{L^2}(M) \to \left(\Omega^k_{L^2}\right)^*$, given by $\alpha \mapsto \langle \cdot,\alpha\rangle$ is an isometry.
-The problem with the example you gave, $f \mapsto f(0)$, is that it is not continuous in the $L^2$-topology, so you definitely should not expect it to be given by integration against any kind of form. But given an $L^2$-continuous functional $\Omega^k(M) \to \Bbb R$, you know it's given by integration against an $(n-k)$-form - but only necessarily an $L^2$ one, as an artifact of the non-completeness of $\Omega^k(M)$.<|endoftext|>
-TITLE: Basis for $\text{Mat}_2(\mathbb{Z})$ as a $\mathbb{Z}[i]$-module
-QUESTION [5 upvotes]: Let $M=\text{Mat}_2(\mathbb{Z})$ be a $\mathbb{Z}[i]$-module with scalar multiplication
-$$(a+bi)\begin{pmatrix}x&y\\z&w\end{pmatrix}\equiv\begin{pmatrix}a&-b\\b&a\end{pmatrix}\begin{pmatrix}x&y\\z&w\end{pmatrix}.$$
-Let $N=\Big\{\begin{pmatrix}x&x\\z&z\end{pmatrix}:x,z\in\mathbb{Z}\Big\}$ be a submodule of $M$.
-
-
-What is a basis of $M$ over $\mathbb{Z}[i]$?
-$N$ is a cyclic module, what is a generator of $N$?
-How do I show that $M/N$ as a $\mathbb{Z}[i]$-module is isomorphic to $\mathbb{Z}[i]$?
-
-
-What I have done:
-
-I must find linearly independent elements of $M$ such that every element of $M$ can be written as a combination of those. Are these elements the four matrices with a single $1$ and zeroes elsewhere?
-I'm afraid I simply don't see this one (yet). :(
-I'm guessing that $M/N=\{\begin{pmatrix}x&-y\\y&x\end{pmatrix}:x,y\in\mathbb{Z}\}$, though I can't immediately see why.
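-(A quick numerical sanity check of this scalar multiplication, sketched in Python; the helper name act is my own, just for illustration:)
-import numpy as np
-
-def act(a, b, X):
-    # scalar multiplication by a + b*i: left-multiply X by [[a, -b], [b, a]]
-    return np.array([[a, -b], [b, a]]) @ X
-
-X = np.array([[1, 2], [3, 4]])
-assert (act(0, 1, act(0, 1, X)) == -X).all()  # i^2 acts as -1, so this really is a Z[i]-action
-print(act(2, 3, np.array([[1, 1], [0, 0]])))  # [[2, 2], [3, 3]]: (2+3i) applied to a candidate generator of N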
-REPLY [3 votes]: For (1), the four matrices you've described are not linearly independent: for instance you have that $$\begin{pmatrix}1 & 0 \\ 0 & 0 \end{pmatrix} + i \begin{pmatrix}0 & 0 \\ 1 & 0 \end{pmatrix} = 0.$$
-However, if you write down $$z\begin{pmatrix}1 & 0 \\ 0 & 0 \end{pmatrix} + w \begin{pmatrix}0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix}z_1 & w_1 \\ z_2 & w_2 \end{pmatrix}$$ for arbitrary $z = z_1 + i z_2$ and $w=w_1 + iw_2$ you see that these two matrices generate all of $M$ in a unique way.
-For (2), as you've already seen, the matrix $\begin{pmatrix}1 & 1 \\ 0 & 0 \end{pmatrix}$ generates $N$.
-For (3), I claim that the single element $$ \mu =\begin{pmatrix}1 & 0 \\ 0 & 0 \end{pmatrix} + N $$
-forms a basis for $M/N$. Indeed, we have
-$$ \begin{pmatrix}x & y \\ z & w \end{pmatrix} + N = \begin{pmatrix}x-y & 0 \\ z-w & 0 \end{pmatrix} + N = ((x-y)+i(z-w))\mu$$
-for any element of $M/N$. Furthermore, $\{\mu\}$ is linearly independent because for any $z = z_1 + iz_2$ we have
-$$z\mu = 0 \text{ in } M/N \iff \begin{pmatrix}z_1 & 0 \\ z_2 & 0 \end{pmatrix} \in N \iff z_1 = z_2 = 0.$$
-Thus $M/N$ is a free $1$-dimensional $\mathbb{Z}[i]$-module, hence it is isomorphic to $\mathbb{Z}[i]$.<|endoftext|>
-TITLE: Trace and the coefficients of the characteristic polynomial of a matrix
-QUESTION [6 upvotes]: Let $A\in M(\mathbb F)_{n \times n}$
-Prove that the trace of A is minus the coefficient of $\lambda ^{n-1}$ in the characteristic polynomial of A.
-I had several ideas to approach this problem - the first one is to develop the characteristic polynomial through the Leibniz or Laplace formula, and from there to show that the contribution to the coefficient of $\lambda ^{n-1}$ is in fact minus the trace of A, but every time I tried it was a dead end.
-Another approach is to use induction on a matrix similar to $(\lambda I-A)$ in upper triangular form, which has the eigenvalues of A on its diagonal, and of course the same determinant and trace, to show that the statement holds for every choice of n.
-I think my proof doesn't hold for all fields, so any thought on the matter will be much appreciated, or an explanation of why this statement is true.
-
-REPLY [15 votes]: The determinant is a sum of (signature-weighted) products of $n$ elements, where no two elements share the same row or column index. From this, it follows that there is no term with $(n-1)$ factors on the diagonal (if $n-1$ factors of a product are on the diagonal, then the last one must be too, because all other rows and columns are taken). So... the only term that can possibly contribute $\lambda^{n-1}$ is the product of the main diagonal. Therefore, the $\lambda^{n-1}$ coefficient of $\det(\lambda I - A)$ equals the $\lambda^{n-1}$ coefficient of $\prod_i (\lambda-A_{ii})$, for which it is easy to show that the coefficient equals $-\sum_i A_{ii}$.
-This argument doesn't make assumptions about the field over which the matrix is defined, because the field operations + and * are used directly (with no assumptions about inverses and distributive laws).
-
-REPLY [3 votes]: Suppose the eigenvalues of $A$ are $\lambda_1,\ldots,\lambda_n$. Then the factored form of the characteristic polynomial is
-$$(x-\lambda_1)(x-\lambda_2)\cdots(x-\lambda_n).$$
-Try using this with induction on $n$.<|endoftext|>
-TITLE: Could there be an "$n$-th root" of the category $\mathsf{Set}$?
-QUESTION [9 upvotes]: Here is a thought experiment: Suppose we did not know what sets and functions are.
The general idea of a topos is that it somehow serves as a foundation for mathematics. So let there be an alternate world and a well known topos $\mathcal{A}$, "in which everyone does mathematics", in this world. Eventually, some category theorist claims: "Oh, our mathematical universe is not as 'fundamental' as we think. I found a topos that I henceforth shall call $\mathsf{Set}$ and an $n\in \mathbb{N}$ (not $1$) such that $\mathcal{A}$ is equivalent to $\mathsf{Set}^n$" (because $\mathcal{A}$ was $\mathsf{Set}^{45634}$ all along, for example).
-Let us come back to this world with the obvious question:
-
-Could there be a category $\sqrt[n]{\mathcal{\mathsf{Set}}}$ and an $n\in \mathbb{N}, n>1$ with $\sqrt[n]{\mathcal{\mathsf{Set}}}^n $ equivalent to $\mathsf{Set}$?
-
-What if instead the $n$ is any small category besides $1$?
-There might be an issue with this question: I am not sure what "category" really should mean in this context. Possibly, the "right" answer assumes that category theory deals with categories internal to $\sqrt[n]{\mathcal{\mathsf{Set}}}$. I do not know that, since I am no expert by any means.
-
-REPLY [9 votes]: It is not possible.
-Let $\mathcal{C}$ be a category. First things first: by a symmetry argument, we see that the diagonal of $\mathcal{C}^n$ is closed under whatever limits or colimits exist in $\mathcal{C}$; in particular, if $\mathcal{C}^n$ has finite limits and colimits, then so does $\mathcal{C}$. Suppose $\mathcal{C}$ has a terminal object $1$ and the binary coproduct $1 + 1$. Then,
-$$\mathcal{C}^n ((1, \ldots, 1), (1 + 1, \ldots, 1 + 1)) \cong \mathcal{C} (1, 1 + 1)^n$$
-and in particular, if this set is finite, then the number of elements is an $n$-th power. But if $\mathcal{C}^n$ is equivalent to $\mathbf{Set}$, this forces $n = 1$.<|endoftext|>
-TITLE: If $(x,y)$ satisfies $x^2+y^2-4x+2y+1=0$ then the expression $x^2+y^2-10x-6y+34$ CANNOT be equal to
-QUESTION [6 upvotes]: If $(x,y)$ satisfies $x^2+y^2-4x+2y+1=0$ then the expression $x^2+y^2-10x-6y+34$ CANNOT be equal to
-$(A)\frac{1}{2}\hspace{1cm}(B)8\hspace{1cm}(C)2\hspace{1cm}(D)3$
-
-$(x,y)$ satisfies $x^2+y^2-4x+2y+1=0$
-$(x,y)$ satisfies $(x-2)^2+(y+1)^2=2^2$
-The expression $x^2+y^2-10x-6y+34$ can be written as $(x-5)^2+(y-3)^2$
-But I do not know how to solve it further. Please help.
-
-REPLY [2 votes]: Two circles have at least one common point only if the distance between their centers is equal to or smaller than the sum of their radii.
-All you need to do is rewrite the equations as circles, extract the center and the radius of each circle, and check.
-$$(x-2)^2+(y+1)^2=4, a_{0}=2, b_{0}=-1, r_{0}=2$$
-$$(x-5)^2+(y-3)^2=\frac{1}{2}, a_{1}=5, b_{1}=3, r_{1}=\frac{\sqrt{2}}{2}$$
-$$(x-5)^2+(y-3)^2=8, a_{2}=5, b_{2}=3, r_{2}=2\sqrt{2}$$
-$$(x-5)^2+(y-3)^2=2, a_{3}=5, b_{3}=3, r_{3}=\sqrt{2}$$
-$$(x-5)^2+(y-3)^2=3, a_{4}=5, b_{4}=3, r_{4}=\sqrt{3}$$
-For example you need to prove:
-$$\sqrt{(5-2)^2+(3+1)^2}=5>2+\frac{\sqrt{2}}{2}$$
-$$5>2+2\sqrt{2}$$
-$$5>2+\sqrt{2}$$
-$$5>2+\sqrt{3}$$
-All of these are pretty much obvious.<|endoftext|>
-TITLE: Area of a circle from the edge to a point offset from the center
-QUESTION [5 upvotes]: I am trying to come up with a way to calculate the cross-sectional area of the shape shown in the figure below. My first method would be to subtract the circle from the rectangle like this: $$(Y)\left(\frac{OD-ID}{2}\right)-A_{circle}$$ However, I do not know how to calculate the area of this circle since the OD and ID are not tangent to it.
Also, I would prefer to use the $'X'$ dimension since $'Y'$ is a reference only.
-
-REPLY [2 votes]: You’re looking for the area of a circular segment of radius $R$ and chord length $C={OD-ID\over2}$. This can be found by subtracting the area of an isosceles triangle from the area of a sector of a circle: $$A=\frac12R^2(\theta-\sin\theta),$$ where the angle $\theta$ can be found via the relationship $C=2R\sin{\frac\theta2}$.<|endoftext|>
-TITLE: Is $\frac{z-\alpha}{1-\overline{\alpha}z}$ some special function in complex analysis?
-QUESTION [5 upvotes]: Many homework problems seem to use the following function (or something very close to it):
-$$F_\alpha(z)=\frac{z-\alpha}{1-\overline{\alpha}z}$$
-Does it serve some special purposes in complex analysis?
-It does revolve around the unit circle.
-
-REPLY [3 votes]: This is "the" conformal map from $\mathbb{D}$ to itself that sends $\alpha$ to $0$, and it is unique up to rotation. Applications include things such as the Schwarz-Pick theorem.<|endoftext|>
-TITLE: Existence of bounded analytic function on unbounded domain?
-QUESTION [5 upvotes]: Given any proper open connected unbounded set $U$ in $\mathbb C$, does there always exist a non-constant bounded analytic function $ f\colon U \to \mathbb C$?
-
-Edit: $U$ is an arbitrary domain. I have no idea how to do it. Please help.
-
-REPLY [9 votes]: No, not always. Take $ U= \mathbb{C} \setminus \{0\}$. Take a bounded analytic function on $U$. As it is bounded it can only have a removable singularity at $0$. Thus it extends to an entire function, which must be constant.
-On the other hand, if the closure of $U$ is not all of $\mathbb{C}$, take a $z_0$ outside the closure of $U$ and consider $(z-z_0)^{-1}$.
-This is not a full classification of all $U$ though, but you did not ask for this.
-
-REPLY [6 votes]: No. Take $U=\mathbb{C}\setminus \{p\}$, and take $f$ bounded holomorphic on $U$. Then we can extend $f$ to the whole complex plane (a point is removable), but being bounded and entire, $f$ has to be constant.
-
-REPLY [5 votes]: Take $f(z) = {1 \over z} $ on $U=\{z \mid |z|>1 \}$.
-This example can be extended to any $U$ such that $U^c$ contains an open set.<|endoftext|>
-TITLE: $\{ (x,y) \in R^2 \mid x^2 + y^2 -2x + 4y - 11 = 0 \}$ is closed and bounded
-QUESTION [7 upvotes]: As part of an exercise it would help me if I could prove formally that the set $\{ (x,y) \in R^2 \mid x^2 + y^2 -2x + 4y - 11 = 0 \}$ is closed and bounded.
-Plotting it with software I can see this immediately, but I am a bit doubtful about a formal proof; any help?
-
-REPLY [7 votes]: Since the function $f(x,y) = x^2 + y^2 -2x + 4y - 11$ is continuous, it follows
-that $f^{-1}(\{0\})$ is closed.
-Since $ \lim_{\|(x,y)\| \to \infty} f(x,y) = \infty$, we see that the set must be bounded.
-To see why this implies that the set is bounded, note that we can find some
-$R$ such that if $\|(x,y)\| > R$, then $f(x,y) > 1$. Hence
-the set $\{(x,y) \mid f(x,y) \le 1 \}$ is contained in $\{(x,y) \mid \|(x,y)\| \le R \}$.
-
-REPLY [7 votes]: Hint: $(x-1)^2+(y+2)^2=16$ is the set, so it is a circle, hence bounded and closed.
-
-REPLY [2 votes]: As you have seen, it is a circle with center $x_0=(1,-2)$ and radius $r=4$.
-For boundedness, show that every point in your set has norm $<7\sqrt{2}$.
-For closedness, show that the complement is open (you only have to follow the geometry of your set).
-Can you continue?
-Edit
-$\begin{eqnarray}
-\sqrt{x^2+y^2}&=&\sqrt{(x-1+1)^2+(y+2-2)^2}\\
-&=&\sqrt{(x-1)^2+2(x-1)+1+(y+2)^2-4(y+2)+4}\\
-&=&\sqrt{16+2(x-1)-4(y+2)+5}\\
-&=&\sqrt{21+2x-2-4y-8}\\
-&=&\sqrt{11+2x-4y}
-\end{eqnarray}$
-Now, can you continue?<|endoftext|>
-TITLE: Explicit construction of $n/2$ by $n$ circulant partial Hadamard matrices
-QUESTION [5 upvotes]: In Circulant partial Hadamard matrices by Craigen, Faucher, Low, and Wares it is stated in Theorem 9 that there is a $(p+1)$ by $2(p+1)$ circulant partial Hadamard matrix for every prime power $p$. This is very interesting, but I would really like an explicit construction and I can't work out if or where one is given.
-
-Is there an explicit construction that gives a $(p+1)$ by $2(p+1)$
-circulant partial Hadamard matrix for prime power $p$?
-
-REPLY [2 votes]: The result you quote is true because negacyclic $C$-matrices of order $p+1$ exist. In On orthogonal matrices, J. Math. and Phys. 12 (1933), Paley gave a construction of $C$-matrices using the Legendre symbol $\chi$ of the Galois field GF$(p)$. A variation of this construction leads to a negacyclic form for these Paley matrices. The reference for these results is Delsarte, Goethals, and Seidel, Orthogonal matrices with zero diagonal. II, Can. J. Math., Vol. XXIII, No. 5 (1971).<|endoftext|>
-TITLE: $A\otimes_{\mathbb C}B$ is finitely generated as a $\mathbb C$-algebra. Does this imply that $A$ and $B$ are finitely generated?
-QUESTION [15 upvotes]: Consider $A$ and $B$, two $\mathbb C$-algebras such that $A\otimes_{\mathbb C}B$ is finitely generated as a $\mathbb C$-algebra. Does this imply that $A$ and $B$ are finitely generated?
-I know that for general algebras this is false. Indeed $\mathbb Q$ is infinitely generated over $\mathbb Z$ but the tensor product $ \mathbb Q\otimes_\mathbb Z \mathbb Z_2 =0$. For $\mathbb C$-algebras however, I just can't seem to find a counter-example.
-
-REPLY [9 votes]: Yes, it does, as long as $A$ and $B$ are both not the zero ring (obviously $A\otimes 0=0$ is finitely generated for any $A$). Choose a finite set of generators of $A\otimes_\mathbb{C} B$; each of these is a finite sum of tensors $a\otimes b$. Let $A_0\subseteq A$ be the subalgebra generated by all the $a$'s appearing in these tensors. Then $A_0$ is finitely generated, and we see that the natural map $A_0\otimes_\mathbb{C} B\to A\otimes_\mathbb{C} B$ is surjective (since its image contains all of the tensors $a\otimes b$ in our generators). This means $A/A_0\otimes_\mathbb{C} B=0$, so as long as $B\neq 0$, we must have $A/A_0=0$ and so $A_0=A$. Thus $A$ is finitely generated. By the same argument, $B$ is also finitely generated.
-This argument clearly works with $\mathbb{C}$ replaced by any field. Much more generally, a similar argument shows that if $R$ is any base ring and $A$ and $B$ are $R$-algebras such that $B$ is faithfully flat over $R$, then if $A\otimes_R B$ is finitely generated as a $B$-algebra (in particular, if it is finitely generated as an $R$-algebra), then $A$ is finitely generated as an $R$-algebra.<|endoftext|>
-TITLE: Non-isomorphic groups with identical structure-description
-QUESTION [5 upvotes]: I constructed the non-abelian groups of order $16$ and listed the structure descriptions. The result was:
-16
-(C4 x C2) : C2
-C4 : C4
-C8 : C2
-D16
-QD16
-Q16
-C2 x D8
-C2 x Q8
-(C4 x C2) : C2
-
-The group (C$4$ x C$2$) : C$2$ appears twice. Obviously, two non-isomorphic groups with this structure exist.
-
-What are these groups and how do they differ?
-A similar result appears for order $20$:
-20
-C5 : C4
-C5 : C4
-D20
-
-
-What are the non-isomorphic groups with structure $C5:C4$?
-
-REPLY [10 votes]: StructureDescription will -- despite what was claimed in older implementations -- not identify groups up to isomorphism, but just indicate a decomposition. For example $C_5$ has an automorphism group of order 4. So there are two semidirect (even if the spell-checker wants the word to be semidried) products $C_5:C_4$, namely one where $C_4$ acts as the automorphism of order 4, and one where it acts as the square of this automorphism (that is, the element of order 2 acts trivially).
-Similar things happen in the other cases. E.g. if $C_4\times C_2=\langle a,b\rangle$, two different automorphisms of order 2 are $a\mapsto a^{-1}$ or $a\mapsto ab$ (both times fixing $b$), thus leading to non-isomorphic semidirect products.
-What this means is that one can use StructureDescription as an aid towards understanding a group's structure, but it is useless for determining isomorphism.<|endoftext|>
-TITLE: Is Set "prime" with respect to the cartesian product?
-QUESTION [30 upvotes]: (Motivated by Stefan Perko's question here)
-
-Suppose $C, D$ are two categories such that $\text{Set} \cong C \times D$. Is either $C$ or $D$ necessarily equivalent to the terminal category $1$?
-
-I haven't thought about it much, but my guess is that the answer is yes. Consider, for example, the special case that $C$ and $D$ are categories of sheaves on spaces. We have $\text{Sh}(X) \times \text{Sh}(Y) \cong \text{Sh}(X + Y)$ (where $+$ denotes coproduct), and $\text{Sh}(1) \cong \text{Set}$, so (after restricting to sober spaces) being able to write $\text{Set}$ as a product of sheaf topoi corresponds to decomposing the one-point space $1$ as a disjoint union, but of course no nontrivial such decomposition is possible.
-I can show that either $C$ or $D$ must have a zero object, which settles the question if $C$ and $D$ are required to be topoi.
-
-REPLY [22 votes]: Suppose you have an equivalence $Set\cong C\times D$ where $C$ and $D$ are both nontrivial. Let $(0_C,0_D)$ be a pair of objects corresponding to the empty set. Then $0_C$ and $0_D$ must be initial and we must have that if there exist maps $c\to 0_C$ and $d\to 0_D$, then $c\cong 0_C$ and $d\cong 0_D$ (since no nonempty set can map to the empty set). In particular, taking $d=0_D$, we find that every object of $C$ that maps to $0_C$ is initial, and similarly for $0_D$. Since $C$ and $D$ are both nontrivial, we can choose objects $c\in C$ and $d\in D$ which are not initial. Note that there are then no maps $(c,0_D)\to(0_C,d)$, since there are no maps $c\to 0_C$. But this is impossible, because $(0_C,d)\not\cong (0_C,0_D)$ so $(0_C,d)$ must correspond to a nonempty set, and every set can map to every nonempty set.
-
-Here's another argument, which is more similar in spirit to the arguments for other categories I sketched in comments below. Let $(0_C,0_D)$ be as above and let $(1_C,1_D)$ be the terminal object. Note that $(1_C,1_D)$ is the coproduct of $(1_C,0_D)$ and $(0_C,1_D)$. Since the terminal object of $Set$ cannot be written as a coproduct unless one of the summands is initial, this implies either $1_C\cong 0_C$ or $1_D\cong 0_D$; suppose $1_C\cong 0_C$. Then for any $c\in C$, note that there is exactly one map from $(0_C,1_D)\cong (1_C,1_D)$ to $(c,1_D)$. But this means that as a set, $(c,1_D)$ has one point, so it is terminal, so $c\cong 1_C$.
Thus every object in $C$ is terminal and $C$ is trivial.<|endoftext|>
-TITLE: Showing that $f(x)^p=f(x^p)$ in field of characteristic $p$
-QUESTION [5 upvotes]: I am trying to show that for any $f(x)\in F[x]$, where $F$ is a field of characteristic $p$, we have $f(x)^p=f(x^p)$.
-I figured that if $f(x)=\sum a_ix^i$, then $f(x)^p=\sum a_i^px^{ip}$ and $f(x^p)=\sum a_ix^{ip}$, but I'm not sure how to get that
-$a_i^p-a_i=0$.
-One can certainly use Fermat's little theorem if $F\cong \mathbb{Z}_p$, but what about when $\mathbb{Z}_p$ is contained in $F$, for example, $F=\mathbb{Z}_p(x)$?
-
-REPLY [8 votes]: You're having difficulty proving it because it isn't true. In fact, if $a\in F$ is such that $a^p=a$, then $a$ must be in $\mathbb{Z}_p$ (proof: the polynomial $x^p-x$ can only have $p$ roots in $F$, and every element of $\mathbb{Z}_p$ is a root). What is true is that $f(x)^p=g(x^p)$, where $g$ is the polynomial obtained from $f$ by replacing each coefficient by its $p$th power.<|endoftext|>
-TITLE: Proof relating inverse to determinant
-QUESTION [13 upvotes]: I'm reading a paper regarding the consistency of a statistical estimator, and the author claimed that the following is an identity:
-$$ \mathbf{x}^\top (\Sigma + \mathbf{x}\mathbf{x}^\top)^{-1}\mathbf{x} = 1- \frac{\det (\Sigma)}{\det (\Sigma+\mathbf{x}\mathbf{x}^\top)}$$
-Here $\mathbf{x}$ is a vector and $\Sigma$ is a covariance matrix. Apart from that, I don't think the author has specified any more assumptions/constraints.
-While superficially it appears that this result might be provable via the Sherman-Morrison formula, I haven't been able to work it out.
-Any help/pointers would be appreciated!
-Thanks.
-
-REPLY [4 votes]: Whenever a leading/trailing principal submatrix is invertible, the determinant of a matrix is the product of the determinants of that submatrix and of its Schur complement in the enclosing matrix. Let $M=\pmatrix{1&x^T\\ x&\Sigma+xx^T}$. If $\Sigma+xx^T$ is invertible, by considering the Schur complements of $1$ and of $\Sigma+xx^T$ in $M$ respectively, we see that
-$$
-\det(\Sigma+xx^T)\det\left(1-x^T(\Sigma+xx^T)^{-1}x\right)
-=\det(M)
-=\det(1)\det(\Sigma).
-$$
-Now the result follows immediately.<|endoftext|>
-TITLE: Existence of an injective continuous function $\Bbb R^2\to\Bbb R$?
-QUESTION [9 upvotes]: Let's say $f(x,y)$ is a continuous function, where $x$ and $y$ can be any real numbers. Can this function have one unique value for any two different pairs of variables? In other words, can $f(a,b) \neq f(c,d)$ hold for all $a$, $b$, $c$, and $d$ such that $a \neq c$ or $b \neq d$? I don't think it can, at least not if the range of $f$ is within the real numbers. Could someone please offer a more formal proof of this or at least start me off in the right direction.
-
-REPLY [2 votes]: Take any pair of distinct points $p,q$ of $\Bbb R^2$ and link them by two disjoint (except in the end points) arcs. For instance take $p=(1,0)$ and $q=(-1,0)$, linked by the upper and lower halves of the unit circle. Now by hypothesis $f(p)\neq f(q)$, and the restriction of $f$ to either arc is a continuous function (of the parameter for the arc). By the intermediate value theorem, every value in the interval between $f(p)$ and $f(q)$ is taken by $f$ on either of the arcs, but this contradicts the assumed injectivity.<|endoftext|>
-TITLE: Proof that the cross product is not associative without using components
-QUESTION [20 upvotes]: I need to show that the cross product is not associative without using components.
I understand how to do it with components, which leads to an immediate counterexample, but other than that I am not sure how to do it.

REPLY [34 votes]: Consider two non-zero perpendicular vectors $\def\v#1{{\bf#1}}\v a$ and $\v b$. We have
$$(\v a \times\v a)\times\v b=\v0\times\v b=\v0\ .$$
However $\v a\times\v b$ is perpendicular to $\v a$, and is not the zero vector, so
$$\v a\times(\v a\times \v b)\ne\v 0\ .$$
Therefore
$$(\v a \times\v a)\times\v b\ne\v a\times(\v a\times \v b)\ .$$<|endoftext|>
TITLE: What is the difference between advanced calculus, vector calculus, multivariable calculus, multivariable real analysis and vector analysis?
QUESTION [9 upvotes]: What is the difference between advanced calculus, vector calculus, multivariable calculus, multivariable real analysis and vector analysis?
What I think I know

Vector calculus and multivariable calculus are the same.
Multivariable real analysis and vector analysis are the same, and both are the formalization of multivariable/vector calculus.

Am I right? What's the difference between advanced calculus and these other subjects?

REPLY [11 votes]: The issue is the terminology of courses. Someone can technically say that calculus is real analysis, but it doesn't mean anything in terms of the courses you take, the books you read, etc. So the issue is somewhat of a terminology concern, not a formal mathematical one.
Here is my take on a stereotypical (most common) use of the terms. Once you know that, you can look at what other people say as deviations from it.

Calc 1 = differential calculus. Roughly a semester of differential calculus (derivatives, emphasis on techniques, support of use in physics).
Calc 2 = integral calculus. Same thing as above but for integrals. Note, you may do a little baby diff EQs or series. And the border of differential and integral may not be 100% at the semester break. But close to it.
Calc 3 = multivariable calculus = vector analysis. A semester mostly working on partial derivatives, surface integrals, stuff like that. Introduction of Stokes' and Green's theorems.
Differential equations (occasionally jokingly called "Calc 4"). A semester of ordinary differential equations. ($y$ as a function of $x$; not multivariable diff EQs.)

This pretty much finishes the curriculum for a basic science major. Engineers or physicists may have another semester or two of "math methods", which will be a whirlwind tour of partial differential equations, linear algebra, and perhaps complex analysis.

"Real analysis" is theoretical calculus. You prove a lot of the things you already learned in regular calculus. It's a math major course. Engineers, physicists, etc. won't bother taking it. You won't learn many new techniques that are useful for applied problems or for following physics derivations (maybe a little in series).

Advanced calculus is another term for real analysis. Usually used in the titles of older books, usually with a bit less emphasis on proofs and less disdain for applications, but still mostly covering territory that is not that useful for applications.<|endoftext|>
TITLE: A construction with ruler and rusty compass
QUESTION [7 upvotes]: In the book Geometry: Euclid and Beyond, exercise 2.20 says:

Using a ruler and rusty compass, given a line $l$ and given a segment $AB$ more than one inch long, construct one of the points $C$ at which the circle with center $A$ and radius $AB$ meets $l$. (A rusty compass has a fixed radius of 1 inch.)

But I don't think it can be done using ruler and rusty compass.
As shown in the picture below, we need to construct the point $C$. Using ruler and rusty compass, we can easily double $BA$ to a point $B'$ and construct the perpendiculars $BG$ and $B'H$; but in order to get the length of $GC$, we would need to solve a quadratic equation, i.e. use a square root operation, and some earlier results said that we can only obtain lengths in $\mathbf{Q}$ using ruler and rusty compass. I am very confused about that. Is this exercise solvable?

REPLY [3 votes]: Apologies, but I didn't like MathManiac's answer. To say the least, it's confusingly written, fails to carry the construction through to the end, and makes assumptions about what is provided. Some vital details seem to be glossed over. I attempted to follow his construction and failed; I don't know why. I couldn't see what direction he was going, and the instructions themselves don't make a whole lot of sense to me when you actually try to carry them out. Perhaps it's my failure, but I thought I'd give a different perspective, filling some gaps. I of course don't mean any disrespect to the community, but I didn't feel as though this 3-year-old question was ever truly answered. I only came across this question in an Ask Jeeves search because I particularly love geometry.

Here is my own construction. Hopefully you can follow along with the animated GIF image. This isn't a proof; it's just a method of construction.

We have an arbitrary line m, and an arbitrary line segment AB defining the center, A, and the radius AB of a circle. No circle is drawn there because circle A(B) is of any size and you only have a rusty compass, which is defined for you in the upper-right corner. We wish to find the points on line m where circle A(B) intersects the line.

Segment AB can be extended to line m, intersecting it at C.


Naturally, for this construction to work, line AB cannot be parallel with line m. See the notes toward the bottom for dealing with this case.

Draw a circle A(r) centered at point A. This circle will intersect the extended line AB at some point D.
Draw a completely arbitrary line n, passing through point A but not coincident with AB, intersecting the constant circle A(r) at point E.
Points D and E define a new line, DE.
Construct two new lines, parallel to DE, passing through B and C, intersecting line n at points F and G, respectively.
Points D and F define a new line, DF.
Construct a new line parallel to DF, but passing through G. This line intersects the extended line AB at point H.
Draw a line parallel to line m but passing through H. This line intersects the constant circle A(r) at two points X1 and X2.
Lines AX1 and AX2 intersect the line m at two points I1 and I2. These are your desired intersections.

If X1 and X2 don't exist, then neither do the intersections I1 and I2, for obvious reasons.
I have no citation for this construction. It is based on the notion of shrinking down the scale, so that the circle A(B) becomes A(r), and line m becomes a new line that can be directly intersected with A(r) (producing the X intersections). The two I points - the intersections of interest - are projections of the two X points, away from the point A and onto the line m again at the appropriate scale. Circle A(r) is to circle A(B) what the horizontal line passing through H is to the line m. The bulk of the construction is scaling line m down in proportion; a short verification of this scaling argument is sketched below.
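To make the scaling argument concrete, here is a short similar-triangles check (my own sketch, not part of the original answer, using the labels above and writing $r$ for the rusty radius). Since $|AD|=|AE|=r$, the intercept theorem applied to the parallels gives
$$
BF\parallel DE\ \Longrightarrow\ |AF|=\frac{|AB|}{|AD|}\,|AE|=|AB|,
\qquad
CG\parallel DE\ \Longrightarrow\ |AG|=|AC|,
$$
$$
GH\parallel FD\ \Longrightarrow\ |AH|=\frac{|AG|}{|AF|}\,|AD|=\frac{r\,|AC|}{|AB|}=k\,|AC|,
\qquad k:=\frac{r}{|AB|}.
$$
So the homothety $h$ with center $A$ and ratio $k$ sends $C$ to $H$; hence it carries line $m$ to the parallel line through $H$, and carries circle $A(B)$ to circle $A(r)$. Intersections therefore correspond, $X_i=h(I_i)$, and projecting back through $A$ gives
$$
|AI_i|=\frac{|AX_i|}{k}=\frac{r}{r/|AB|}=|AB|,
$$
so $I_1$ and $I_2$ really do lie on the circle $A(B)$.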
Creating a Parallel Line
To draw a parallel line using a straightedge and a single circle, Wikipedia has an article entitled Poncelet-Steiner Theorem which contains an animated GIF depicting the construction. It requires 5 intermediary lines to be drawn for each parallel line to be made, and 1 circle (of any size) for each line from which a parallel is to be made.
The Case of Parallel AB
In the case that line segment AB is parallel to line m, a separate construction could be implemented to rotate AB into a new non-parallel position. You then restart this line-circle intersection construction using a new line segment AB'.
That said, the construction for this case is actually easier than just explained. You don't need to rotate AB at all. I won't show you the modification, but you can swap the roles of lines m and n in the original construction and adapt the construction appropriately. This can be done fairly readily and with no additional cost.

I absolutely love geometric constructions and in particular restricted constructions. Send me more. The aforementioned Poncelet-Steiner theorem, and of course the Mohr-Mascheroni theorem, are fascinating restricted constructions in their own right.<|endoftext|>
TITLE: Why do natural transformations express the fact that a vector space is canonically embedded in its double-dual but not in its dual?
QUESTION [15 upvotes]: I've been struggling for quite a while to understand why a vector space is considered to be "canonically embedded" into its double dual, but not its dual. As has been remarked in many other places, the distinction between whether an (iso-)morphism is "natural" can often seem vague and unintuitive. For me particularly, I think that part of the problem is that this sort of statement seems to run entirely counter to something I was taught early in Abstract Algebra as a Profound and Fundamental Lesson: "Isomorphic structures are exactly the same in all respects. When two things are isomorphic, all the things that can be said about one carry over verbatim to the other. There is no distinction between them." However, moving into more abstract linear algebra, a sort of about-face is being made, and now we are making the distinction of effectively saying, "My isomorphism is better than yours." In order to justify this apparent contradiction, the argument is typically made that the (iso-)morphism into the double dual does not require any "choices", while any embedding into the dual will require some "choice" to be made. However, this seems... unconvincing. So what if you can jury-rig a bilinear form out of whatever embedding/isomorphism I pick? Do we really have to pay attention to that? Again, this seems rather vague and unintuitive.
To make the argument more precise then, it is claimed that the ultimate answer lies in that Fountain of Eternal Truth - Category Theory. More specifically, it is claimed that the fact that there is a natural transformation from the identity functor on vector spaces to the double-dual functor justifies the claim that the embedding into the double-dual is "natural", while the fact that there is no such transformation between the identity and the dualizing functor shows that any such embedding into the dual is "not natural". This is elucidated beautifully in this thread. However, I claim that this is still not the final nail in the coffin of doubt. More specifically, I do not understand how natural transformations actually express the idea that a construction is (quotation marks) "natural".

How does this business with commuting diagrams make precise the idea that the embedding of a vector space into its double dual is "natural"? How does the implication that any association with the dual is "not natural" stem from a theorem saying that a certain collection of diagrams will never commute?

Another thing to note, which took me a little by surprise, is that the content of these arguments depends not only on the construction of the dual and double dual spaces, but also on this other construction called the transpose, which associates a linear map $f^*: F^* \to E^*$ to every linear map $f: E \to F$. So the fact that a map between a space and its dual is not "natural" also depends on the fact that we define an association between linear maps, which we package together with the dual operation to form the dualizing functor; however, this association between linear maps seems rather external to the association between a vector space and its dual. This is also bothering me.
I will not deny that the transpose operation does seem like a very natural thing to pair along with the dual operation, but what does seem odd is that the intrusiveness of this transpose operation should make or break the "naturalness" of something strictly between a vector space and its dual - honestly, who ordered that? Why can't I concoct some other association between linear maps - one that's covariant, at that - and package that together with the dual space to make something that admits a natural transformation from the identity? Note however that this would also break the "theorem" that a vector space is canonically embedded in its double dual, so this sort of train of thought is a double-edged sword.
Ultimately, I feel that I don't understand natural transformations in general very well; this example is really just the biggest one that sticks out to me and the one that I care about the most. I may post another question about the general case of understanding natural transformations, depending on how well this one goes and also whether I can manage to formulate it in a manner that seems intriguing and not simply lost and confused. At any rate, I look forward to any potential answers and would greatly appreciate whatever illumination you may be able to provide.

REPLY [9 votes]: I am answering as somebody who has struggled through a related matter, as you noted in the OP. I do not think I will be able to satisfy every one of your related threads of dissatisfaction, and I am not sure I will be able to satisfy any at all. On the flip side, as the question is a year old, you may have resolved it for yourself long ago.
But let's give it a whirl. I love the question.
First of all, separate from the question about how the category language speaks to (or doesn't speak to) matters, it seems to me you are not convinced that there even is a substantive difference between the isomorphism of a finite dimensional vector space to its dual and the isomorphism to its double dual, apropos of your Profound and Fundamental Lesson of Abstract Algebra -- aren't they both isomorphisms? So, before even engaging the category theory, let me speak to this:
(1) I think you will gain useful insight into the situation from studying cases where the substance of the difference between a space and its dual is felt. user254665 mentioned one such instance in her/his answer. In general, the infinite-dimensional topological vector spaces of functional analysis provide an abundant source of examples.
While the dual of a finite dimensional vector space is finite-dimensional of the same dimension, and therefore isomorphic to it, the dual of a Banach space is typically a different Banach space. For example, the dual of $L^p$ is $L^q$ with $p^{-1}+q^{-1} = 1$, which are two different Banach spaces unless $p=2$. The dual of the space of continuous, compactly supported functions on a locally compact Hausdorff space is a space of measures, i.e. it is not even a space of functions!
Even in these situations where the dual is really a different animal, the original space does embed in its double dual, as usual by mapping a vector to the functional on functionals obtained by evaluation at that vector. (I will avoid controversy by not lionizing this embedding as "natural".) In many cases, the embedding is proper, i.e. the double dual is bigger than the original space. Nonetheless, there's often no obvious embedding of the original space in the (single) dual at all.
I am not a functional analyst, but a place I've encountered this substance in my own life is in the difference between a locally compact abelian group and its character group, i.e. its Pontryagin dual. Like vector spaces, this is a situation where finiteness causes a non-canonical isomorphism to the dual, and there is a canonical isomorphism to the double dual. A finite abelian group $A$ is isomorphic to its dual $\hat A$, but an infinite group need not be. For example, the additive group $\mathbb{Z}$ of integers and the circle group $S^1 = \{z\in\mathbb{C}^\times \mid |z| = 1\}$ are Pontryagin duals of each other, and they don't even have the same cardinality. In the finite case, where they are isomorphic, I've still "bumped into" the difference between $A$ and $\hat A$, for example in trying to understand the relationship between an action of a group $G$ of automorphisms on $A$ and the induced action of $G$ on $\hat A$, e.g. see this question.
All of this is to say that study of such examples can help convince one that the dual is really not the same as the original object, so that even when they're isomorphic it's worth keeping track of which is which. (More so than it is worth distinguishing the object from its double-dual when they are isomorphic.)
(2) How to make sense of this difference in light of your Profound and Fundamental Lesson (PaFL), that isomorphic objects are to all intents and purposes the same.
This is a question about the scope of the PaFL.
The PaFL is the right way to see things when you view the objects in isolation from their surroundings and each other. Let $A$ and $B$ be isomorphic objects (e.g. vector spaces or groups). Any specific isomorphism $\phi:A\rightarrow B$ gives you a dictionary to translate statements about the isolated object $A$ to statements about the isolated object $B$ and vice versa. For example: if $A,B$ are vector spaces, then $\phi$ carries bases to bases, so there is a perfect bijective correspondence between bases of $A$ and bases of $B$. It carries linear transformations of $A$ to linear transformations of $B$ (via $T\mapsto \phi T\phi^{-1}$), so there is a bijection between such transformations. If we think of $\phi$ as a "renaming", then we can think of $B$ as just $A$ with different names.
From this point of view, $A$ and $B$ are "the same", and any "renaming" $\phi$ works as well as any other to show this. This is the PaFL.
But. If we allow $A$ and $B$ to interact with other objects (even each other!), then distinct isomorphisms start to feel very different!
For example: -Let $A = \mathbb{R}^2$, seen as a real vector space. Let $B$ be $A$'s vector space dual, i.e. the space of linear functionals $A\rightarrow \mathbb{R}$, with pointwise addition and scalar multiplication. $B$ is isomorphic to $A$ since it is also a 2-dimensional real vector space. One has a wide choice of isomorphisms: fixing a basis of $A$, one can send it to any basis of $B$. There is a 4-dimensional manifold's worth of choice. -Now along comes a linear transformation $T$ acting on $A$, say by scaling the $x$-axis by a factor of $2$. One can pick some isomorphism $\phi:A\rightarrow B$ and translate $T$ into a transformation of $B$ as above (i.e. $\phi T \phi^{-1}$). But there is another (natural??) way that $T$ acts on $B$, irrespective of any choice of $\phi$, which is to send a functional $f:A\rightarrow\mathbb{R}$ to the functional $f\circ T$. Now one can ask about any given $\phi$: does the transformation of $B$ into which it translates $T$ equal this (natural??) action of $T$ on $B$? I.e. does $\phi T \phi^{-1} (f) = f\circ T$ for all $f\in B$? A priori, some $\phi$'s may be compatible with the action of $T$ on $B$ in this respect, and some may not. -One could go further. I chose a specific $T$ at the front end of this. But one could ask if there is a $\phi$ such that $\phi T\phi^{-1}(f)$ will equal $f\circ T$ regardless of the choice of $T$. This $\phi$, if it existed, would clearly (?) be "awesome" in some way that other isomorphisms aren't. -Perhaps you respond by saying, well, why did you bring $T$, and especially its action on $B$ by $f\mapsto f\circ T$, into it? This is a perfectly legitimate question. From the point of view where you only look at $A$ and $B$ as self-contained systems, there's no reason to. But my point is that mathematical objects are often embedded in a network of other mathematical objects (such as $T$, or a wide variety of choices of $T$, and their related actions on $A$ and $B$), and when we bring these other objects and the interactions between them into it, it complicates the (overly?) simplistic picture drawn by the PaFL. Maybe some isomorphisms play better than others with the network of relationships in which $A$ and $B$ are embedded. -(3) This is a segue into the matter of categories. A natural isomorphism between two functors is not an isomorphism between two isolated objects. It is some kind of construction that works simultaneously across an entire category, in such a way that the isomorphisms all interact well with a bunch of other maps. -Thus, the way in which the categorical language translates the word "natural" is, loosely, "working simultaneously across all the objects of a whole category, in such a way that it cooperates with the other relevant maps in the category." The naturality lies in the everywhere-at-once-ness and in the fits-in-with-what-was-already-going-on-ness. -To get specific to the case. Let $\mathscr{V}$ be the category of finite dimensional $\mathbb{R}$-vector spaces. -Let's try to carry out what you proposed in the penultimate paragraph of the OP, i.e. try to reconstruct the dualizing functor as a covariant functor; call it $D$. We are already given the map on objects: it sends $V\in\operatorname{Obj}\mathscr{V}$ to its dual $V^*$. We need to design, for every $T\in \operatorname{Hom}(V,W)$, a map $D(T):V^* \rightarrow W^*$, in such a way that the identity map always gets sent to the identity map, and for any $U\xrightarrow{S} V\xrightarrow{T}W$ occurring in $\mathscr{V}$, we have $D(TS) = D(T)D(S)$. 
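For reference, here are those two requirements in display form, together with the naturality condition that a family of maps $\phi_V$ would have to satisfy to be a natural transformation $\mathrm{Id}_{\mathscr{V}}\Rightarrow D$ (this summary is mine, not the answerer's):
$$
D(\mathrm{id}_V)=\mathrm{id}_{V^*},
\qquad
D(T\circ S)=D(T)\circ D(S),
\qquad
D(T)\circ\phi_V=\phi_W\circ T\quad\text{for every }T:V\rightarrow W.
$$
One can check by cancellation that the choice-based definition $D(T)=\phi_W T\phi_V^{-1}$ proposed in the next paragraph satisfies all three, e.g.
$$
D(T)D(S)=\phi_W T\phi_V^{-1}\phi_V S\phi_U^{-1}=\phi_W (TS)\phi_U^{-1}=D(TS).
$$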
It seems to me that this is actually possible, modulo some axiom-of-choice-type issues. If we separately chose an isomorphism $\phi_V:V\rightarrow V^*$ for each $V\in \operatorname{Obj}\mathscr{V}$, then we could send $T:V\rightarrow W$ to $D(T) = \phi_W T\phi_V^{-1}$, which maps $V^*$ to $W^*$. Furthermore, it seems to me that the maps $\phi_V:V\rightarrow V^*$ would then constitute a natural isomorphism from the identity functor to our new "dualizing functor" $D$.
I think some readers will be given pause by the fact that this construction needs some form of the axiom of choice to be carried out. (I'm out of my set-theoretic league on what's needed. It seems to me that the category at hand is not a small category; thus we need an even stronger axiom like global choice, right?) But you've indicated that the need to make choices doesn't strike you as a barrier to "naturalness," so I assume that this high degree of nonconstructiveness of the construction won't be a problem. However, I see another issue as well:
This construction loses any information related to the fact that $V^*$ is supposed to be the dual of $V$. It completely ignores the fact that the elements of $V^*$ are supposed to be functionals on $V$. We could replace $V^*$ with any other vector space of the same dimension and carry out the same construction. Thus it seems to me $D$ doesn't really send $V$ to its dual in any meaningful sense. Thus, while it uses a nonconstructive axiom (global choice?) to get past the category-theoretic insistence that a natural transformation happen "all at once across a whole category", it doesn't (honestly anyway, it seems to me) meet the second condition that it "cooperates with what was already going on."
This is where the transpose (also called the adjoint) comes in. You ask, "who ordered that?" I.e., isn't the adjoint map extrinsic to the relationship between $V$ and its dual? I contend it's actually essential. If $T:V\rightarrow W$ is a map between vector spaces, then the adjoint $T^*:W^*\rightarrow V^*$ between their duals is defined as $f\overset{T^*}{\mapsto} f\circ T$. This $T^*$ cooperates with what was already going on! I.e., it transforms the dual space in accordance with what the elements in the dual space are supposed to mean. Without a relationship like that between $T$ and $T^*$ that incorporates the fact that the elements of $V^*$ are supposed to be the contents of $\operatorname{Hom}(V,\mathbb{R})$, a functor sending $V$ to $V^*$ is only meaningfully sending it to some other vector space of the same dimension, not actually its dual.
Thus a natural isomorphism to the dual really should somehow respect the adjoint, or something like it. Otherwise, what makes the dual the dual?
Obviously the question was soft and this is a soft answer. So let me know if any of this speaks to any of the issues you outlined.<|endoftext|>
TITLE: Solve $\sin(3x)=\cos(2x)$
QUESTION [6 upvotes]: Question: Solve $\sin(3x)=\cos(2x)$ for $0\le x\le 2\pi$.

My knowledge on the subject: I know the general identities, compound angle formulas and double angle formulas, so I can only apply those.
With that in mind,
\begin{align}
\cos(2x)=&~ \sin(3x)\\
\cos(2x)=&~ \sin(2x+x) \\
\cos(2x)=&~ \sin(2x)\cos(x) + \cos(2x)\sin(x)\\
\cos(2x)=&~ 2\sin(x)\cos(x)\cos(x) + \big(1-2\sin^2(x)\big)\sin(x)\\
\cos(2x)=&~ 2\sin(x)\cos^2(x) + \sin(x) - 2\sin^3(x)\\
\cos(2x)=&~ 2\sin(x)\big(1-\sin^2(x)\big)+\sin(x)-2\sin^3(x)\\
\cos(2x)=&~ 2\sin(x) - 2\sin^3(x) + \sin(x)- 2 \sin^3(x)\\
\end{align}
edit
\begin{gather}
 2\sin(x) - 2\sin^3(x) + \sin(x) - 2\sin^3(x) = 1-2\sin^2(x) \\
 4\sin^3(x) - 2\sin^2(x) - 3\sin(x) + 1 = 0
\end{gather}
This is a cubic, right?
So with $u = \sin(x)$,
\begin{gather} 4u^3 - 2u^2 - 3u + 1 = 0 \\
 (4u^2 + 2u - 1)(u-1) = 0
\end{gather}
Am I on the right track?
This is where I am stuck; what should I do now?

REPLY [6 votes]: $$\cos2x=\sin3x=\cos\left(\dfrac\pi2-3x\right)$$
$$\iff2x=2m\pi\pm\left(\dfrac\pi2-3x\right)$$ where $m$ is any integer
Alternatively, $$\sin3x=\cos2x=\sin\left(\dfrac\pi2-2x\right)$$
$$3x=n\pi+(-1)^n\left(\dfrac\pi2-2x\right)$$ where $n$ is any integer
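To carry the first general solution through to the stated interval (my own follow-through, not part of the original answer): the case $2x=2m\pi+\left(\frac\pi2-3x\right)$ gives $x=\frac{(4m+1)\pi}{10}$, while the case $2x=2m\pi-\left(\frac\pi2-3x\right)$ gives $x=\frac\pi2-2m\pi$, so on $0\le x\le 2\pi$ the solutions are
$$
x\in\left\{\frac{\pi}{10},\ \frac{\pi}{2},\ \frac{9\pi}{10},\ \frac{13\pi}{10},\ \frac{17\pi}{10}\right\}.
$$
This agrees with the cubic route above: the roots of $(4u^2+2u-1)(u-1)=0$ are $u=1$ and $u=\frac{-1\pm\sqrt5}{4}$, and the equations $\sin x=1$, $\sin x=\frac{\sqrt5-1}{4}=\sin\frac{\pi}{10}$, and $\sin x=-\frac{\sqrt5+1}{4}=-\sin\frac{3\pi}{10}$ reproduce exactly the five values listed.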