diff --git "a/stack-exchange/math_stack_exchange/shard_103.txt" "b/stack-exchange/math_stack_exchange/shard_103.txt" deleted file mode 100644--- "a/stack-exchange/math_stack_exchange/shard_103.txt" +++ /dev/null @@ -1,9297 +0,0 @@ -TITLE: The universal cover of a path-connected, locally path-connected space $X$ covers any other covering space -QUESTION [8 upvotes]: I'm currently reading Hatcher's Algebraic topology book. In page 68 he says: - -A consequence of the lifting criterion is that a simply-connected covering space of a path-connected, locally path-connected space $X$ is a covering space of every other path-connected covering space of $X$ A simply-connected covering space of $X$ is therefore called a universal cover. It is unique up to isomorphism, so one is justified in calling it the universal covering. - -Let's precise that a little bit. Let $X$ be a path-connected, locally path-connected space and $x_0\in X$. Let $p_1:(\tilde{X}_1,\tilde{x}_1)\to (X,x_0)$ be a simply-connected covering space and $p_2:(\tilde{X}_2,\tilde{x}_2)\to (X,x_0)$ be any other path-connected covering space. Using the fact that $\tilde{X}_1$ is simply connected and the lifting criterion one can find a map $\tilde{p}_1:(\tilde{X}_1,\tilde{x}_1)\to(\tilde{X}_2,\tilde{x}_2)$ such that $p_2\tilde{p}_1=p_1$ and $\tilde{p}_1$ must be the desired covering map. Note that this is where one uses the hypothesis $X$ path connected and locally path-connected and $\tilde X_1$ and $\tilde X_2$ both path-connected -However I'm having some trouble proving it. Let $x\in \tilde{X}_2$ then $p_2(x)\in X$ and there is a nbd $U$ of $p_2(x)$ such that $p^{-1}_1(U)=\cup_i V_i$ where the $V_i$ are open and disjoint subsets of $\tilde{X}_1$ and $p|V_i:V_i\to U$ is an homeomorphism. The desired nbd of $x$ that makes $\tilde{p}_1$ into a covering map should be $p^{-1}_2(U)$. We have $\tilde{p}_1^{-1}(p^{-1}_2(U))=p^{-1}_1(U)$ and then one expects that $\tilde{p}_1|V_i:V_i\to p_2^{-1}(U)$ is an homeomorphism to make this work. But I can't prove that last statement, I'm not even sure it's surjective (Hatcher doesn't assume covering spaces to be surjective). -Is this the right approach? Or maybe there is a simpler way to do it. - -REPLY [10 votes]: Lemma: Given the commutative diagram -$$\begin{array}{ccccccccc} \widetilde{X} & \\ -\downarrow{\small{p}} & {\searrow}^{q} \\ -X_1 & \!\!\!\!\! \xleftarrow{p_1} & \!\!\!\! X_2\end{array}$$ -where $p_1, p$ are covering maps, then so is $q$, where $X_1, X_2, \tilde{X}$ are all path-connected and locally path-connected. -Proof: - -$q$ is surjective: $\sigma$ be a path in $X_2$ from $x_0$ and $x$. Pushforward by $p_1$ to get a path $p_1 \circ \sigma$ in $X_1$ from $p_1(x_0) = x_0'$ to $p_1(x)$. Lift to $\widetilde{X}$ to get a path $\widetilde{\sigma}$ starting from some point $x_0''$ in the fiber over $x_0'$. Pushforward by $q$ to get path $q \circ \widetilde{\sigma}$ starting at $x_0$. Uniqueness of path-lifing says $q \circ \widetilde{\sigma} \simeq \sigma$, so that $q$ maps the endpoint of $\tilde{\sigma}$ to the endpoint $x$ of $\sigma$. As $X_2$ is path connected, we can apply this argument for all $x \in X_2$ to prove $q$ is surjective. - -$q$ is a covering map: Pick $x \in X_2$. Pushforward by $p_1$ to get $p_1(x)$ in $X_1$. There is a path-connected neighborhood $\mathscr{U}$ of $p_1(x)$ evenly covered by $p_1$ and $p$ (take neighborhoods evenly covered by $p_1$ and $p$ and take intersection). $\mathscr{V}$ be the slice in $p_1^{-1}(\mathscr{U})$ containing $x$. 
-Let $\{\mathscr{U}_\alpha\}$ be the slices in $p^{-1}(\mathscr{U})$. $q$ maps each slice $\mathscr{U}_\alpha$ into a slice of ${p_{1}}^{-1}(\mathscr{U})$. $q^{-1}(\mathscr{V})$ is then the union of those slices in $\{\mathscr{U}_\alpha\}$ which $q$ maps into $\mathscr{V}$, and I claim each such $\mathscr{U}_\alpha$ is mapped homeomorphically onto $\mathscr{V}$ by $q$. This can be proved slicewise, recalling that in a commutative triangle in which two of the three maps are homeomorphisms, so is the third. $\blacksquare$
-
-
-
-If $\widetilde{X}$ is simply connected, $p : \widetilde{X}\to X_1$ the universal cover, and $p_1 : X_2 \to X_1$ a covering map, then as $p_*(\pi_1(\widetilde{X}))$, being the trivial group, fits inside ${p_1}_*(\pi_1(X_2))$, we can lift $p$ to $\tilde{p} : \widetilde{X} \to X_2$. By the previous discussion, $\tilde{p}$ is a covering map, since it fits inside a commutative diagram like the one above. Thus, $\widetilde{X}$ covers $X_2$, as desired.<|endoftext|>
-TITLE: Is there a reason for different nomenclature on Calculus of Variations?
-QUESTION [8 upvotes]: While sightseeing through aspects of the Calculus of Variations, the following fact eludes me: there is a plethora of new definitions which seem redundant to me. This phenomenon happens, of course, with other subjects: for instance, one can argue that a vector space is a module over a field instead of making a "new" definition for a vector space (this is not so good of an example due to one often being introduced to vector spaces before modules, but it gets my core idea across). However, when such a phenomenon happens in these other cases, there usually is a nice reference in the literature which makes the correspondence of definitions clear. But in all references on calculus of variations I've seen, a "variation" is a new object that is defined, and I can't see why one should not regard this as simply a case of Fréchet differentiation.
-This happens even if one takes a path space of paths connecting a point $a$ to $b$, for instance. Let's consider $C^1([0,1], \mathbb{R}^n, a,b)$, the space of $C^1$ paths with initial point $a$ and endpoint $b$. This is an affine space over the normed vector space $C^1([0,1], \mathbb{R}^n, 0,0)$, so we have a bona fide Fréchet derivative, and hence we can talk about critical points. The "variations" are simply elements of the vector space.
-Therefore, my questions are: Am I missing something? More precisely, is my point of view lacking or incorrect in some aspect?
-If not, why isn't it approached in this way?
-
-REPLY [3 votes]: Specialists in the calculus of variations don't necessarily consider it a subfield of functional analysis. They are entitled to use terminology of their own choice. In a similar spirit, Halmos objected to the logicians' use of terms like interpretation (of a theory in a model), and sought to replace it by homomorphism in a suitably defined type of algebraic formalism. Ultimately his work in polyadic algebras proved to be of little consequence.
-Replacing variations by the Fréchet derivative may be useful if one can then proceed to apply general results about Fréchet derivatives and get meaningful consequences for variational calculus; moreover it is quite possible that such results do exist. However, replacing a finite framework by an infinite one usually requires justification. If you take a look at my postings you will notice that I am not opposed to infinity :-) but the question of motivation has to be addressed.<|endoftext|>
-TITLE: What's so special about characteristic 2?
-QUESTION [52 upvotes]: I've often read about things which do not work in a field of characteristic $2$, mainly things which have to do with factoring, or similar things. I'm not exactly sure why, but the only example of such a field I could think of is $\mathbb{Z}/2\mathbb{Z}$, which itself is an interesting field because it contains only the identity elements for the two groups, and naturally, its additive group is cyclic. Do these properties lead to the fact that many things don't work if the characteristic is $2$?
-Any examples of things which break in such a field are also welcome.
-
-REPLY [2 votes]: Normally, in a field, each element with a square root (other than zero) has two of them: $x^2-a^2 = (x+a)(x-a)$, so both $a$ and $-a$ are roots. So by the pigeonhole principle, in a finite field (of odd characteristic) half the nonzero elements have two square roots, and the other half have none. But in characteristic 2, $a = -a$, so all the elements have exactly one square root.
-If you know the bare basics of elliptic curve cryptography, you might think this would make ECC impossible in fields of characteristic 2, but that isn't the case.
-Some polynomial factoring algorithms over finite fields work more simply when the characteristic is odd, just because 2 is a divisor of the number of nonzero elements (which is the order of the cyclic multiplicative group of the field). For characteristic 2, with field size $2^{2k}$, 3 is a divisor, and those factoring algorithms work with little modification. But with size $2^{2k+1}$, $2^{2k+1}-1$ might even be prime, requiring more major modifications for efficient factoring.<|endoftext|>
-TITLE: On Greatest Common Divisors
-QUESTION [8 upvotes]: Let $F$ be a field and $r$ and $s$ positive integers. Prove that, in $F[x]$,
-$\gcd(x^r-1,x^s-1)=x^{\gcd(r,s)}-1$.
-If $r$ and $s$ were known numbers, I would be able to attempt the problem, but I don't know how to go about this.
-
-REPLY [2 votes]: Lemma. Let $I$ be any ideal of $F[X]$, and let $H$ be the set of $n \in \mathbb{N}$ such that $X^n - 1 \in I$. Then $H$ is the intersection with $\mathbb{N}$ of a subgroup of $\mathbb{Z}$.
-Proof of Lemma. Let $A = F[X]/I$, and let $a = \bar{X} \in A$. If $a$ is not invertible in $A$, then $H = \{0\}$. Otherwise, $H$ is the intersection with $\mathbb{N}$ of the kernel of the morphism $n \mapsto a^n$ from $\mathbb{Z}$ to $A^{*}$.
-Applying the lemma to the ideal $(X^r-1,X^s-1)$, we find (by Bézout's lemma) that $X^{\gcd(r,s)} - 1 \in (X^r-1,X^s-1)$. Conversely, applying the lemma to $(X^{\gcd(r,s)} - 1)$, we see that $X^r - 1, X^s - 1 \in (X^{\gcd(r,s)} - 1)$. Thus $(X^r-1,X^s-1) = (X^{\gcd(r,s)} - 1)$.<|endoftext|>
-TITLE: Counting the number of numbers
-QUESTION [5 upvotes]: Problem: In each of the following $6$-digit numbers: $333333, 201102, 123123$, every digit appears at least twice. Find the number of such $6$-digit natural numbers.
-
-
-I have done this problem using casework:
-
-those numbers containing exactly 3 distinct digits,
-those numbers containing exactly 2 distinct digits, and similarly
-those numbers containing only 1 digit.
-
-Also, Cases 1 and 2 each involved two sub-cases, for numbers with/without $0$.
-But I want to know if there is a less bashy solution.
-Please help.
-
-REPLY [2 votes]: I can't go straight to the answer, but yes, I can reduce cases and simplify computations.
-The idea is to always keep $A$ at the start, and let $B$ and $C$ assume any value including $0$.
-So the first factor will always be $9$, and we only need to choose $B,C$ and permute the 5 digits.
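-(Added for verification, not part of the original answer: the case analysis listed below can be checked against a brute-force count in Python; both computations should print the same total.)
-from math import comb, factorial
-from collections import Counter
-
-# brute force over all 6-digit numbers: every digit must occur at least twice
-brute = sum(1 for n in range(100000, 1000000)
-            if all(v >= 2 for v in Counter(str(n)).values()))
-
-def multinomial(*ks):                 # (k1+k2+...)! / (k1! k2! ...)
-    out = factorial(sum(ks))
-    for k in ks:
-        out //= factorial(k)
-    return out
-
-formula = (9                                         # A|AAAAA
-           + 9 * 9 * multinomial(3, 2)               # A|AAABB
-           + 9 * 9 * multinomial(2, 3)               # A|AABBB
-           + 9 * 9 * multinomial(1, 4)               # A|ABBBB
-           + 9 * comb(9, 2) * multinomial(1, 2, 2))  # A|ABBCC
-
-print(brute, formula)                 # the two counts should agree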
-$A|AAAAA: 9$
-$A|AAABB: 9\binom91\binom{5}{3,2}$
-$A|AABBB: 9\binom91\binom{5}{2,3}$
-$A|ABBBB: 9\binom91\binom{5}{1,4}$
-$A|ABBCC: 9\binom92\binom{5}{1,2,2}$
-Do check for typos in the formulations!<|endoftext|>
-TITLE: How can we calculate the degree of angle made by the matches?
-QUESTION [5 upvotes]: I was playing a game on my phone when a question popped up on my screen, coming from one of my best mathematics masters:
-If we know that all of the matches are of the same size, what is the degree of the angle $\alpha$?
-
-REPLY [2 votes]: Consider this diagram.
-
-Looking at isosceles triangles and straight angles, and starting with $\measuredangle{GAH}=\alpha$, we get these facts in this order.
-$\measuredangle{AED}=\alpha$
-$\measuredangle{ADE}=180°-(\alpha+\alpha)=180°-2\alpha$
-$\measuredangle{EDH}=180°-(180°-2\alpha)=2\alpha$
-$\measuredangle{DHE}=2\alpha$
-$\measuredangle{DEH}=180°-(2\alpha+2\alpha)=180°-4\alpha$
-$\measuredangle{GEH}=180°-[\alpha+(180°-4\alpha)]=3\alpha$
-$\measuredangle{EGH}=3\alpha$
-$\measuredangle{EHG}=180°-(3\alpha+3\alpha)=180°-6\alpha$
-$\measuredangle{AHG}=2\alpha+(180°-6\alpha)=180°-4\alpha$
-Comparing $\measuredangle{AGH}$ with $\measuredangle{AHG}$,
-$180°-4\alpha=3\alpha$
-Solving,
-
-$\alpha=\dfrac{180°}7\approx 25.714285714°$
-
-I triple-checked that, with a dynamic diagram in Geogebra (above) and with a trigonometric argument using the law of cosines. I'll skip the details.<|endoftext|>
-TITLE: Prove that $\int_0^1 f(x)dx=0$ if $f(\frac{1}{n})=1$ for $n=1,2,3,\ldots$ and $f(x)=0$ for all other $x$
-QUESTION [5 upvotes]: Prove that $\int_0^1 f(x)dx=0$ if $f(\frac{1}{n})=1$ for
 $n=1,2,3,\ldots$ and $f(x)=0$ for all other $x$.
-
-
-Lemma: If $f:[a,b]\rightarrow \mathbb{R}$ is a function such that $f(x)=\mathbb{1}_{\{c\}}(x)$ for some $a<c<b$, then $\int_a^b f(x)dx=0$.
-Proof of the lemma: Consider any partition of $[a,b]$ of width less than $\delta>0$. Then the absolute value of any Riemann sum corresponding to this partition is less than $2 \delta$. So for any $\epsilon>0$ we can choose $\delta=\frac{\epsilon}{2}$ and we will have $|S|<\epsilon$ whenever $S$ is a Riemann sum corresponding to a partition of width less than $\delta$.
-Proof of the main result:
-Choose $\epsilon>0$. From the above lemma and linearity of the Riemann integral, we know that $$\int_{\epsilon/2}^1f(x)dx=0 \mbox{.}$$
-Thus there is a step function $g:[\frac{\epsilon}{2},1]\rightarrow \mathbb{R}$ such that $0\le f(x)\le g(x)$ for all $x\in[\frac{\epsilon}{2},1]$ and $$ \int_{\epsilon/2}^1 g(x)dx <\epsilon /2 \mbox{.}$$
-Define a new step function $h:[0,1]\rightarrow \mathbb{R}$ such that $h(x)=g(x)$ if $x\in [\frac{\epsilon}{2},1]$ and $h(x)=1$ if $x\in [0,\frac{\epsilon}{2})$. It is clear that for all $x\in [0,1]$ we have
-$$0\le f(x) \le h(x) \mbox{.}$$
-Also $$ \int_{0}^1 h(x)dx = \int_{0}^{\epsilon/2} dx + \int_{\epsilon/2}^1 g(x)dx <\epsilon /2+ \epsilon /2 =\epsilon \mbox{.}$$
-Thus we proved that $f$ is integrable. It remains to show that the integral $\int_0^1 f(x)dx$ is equal to $0$. We know that $\int_0^1 f(x)dx$ exists, and however small $\alpha$ we choose, the integral $\int_{\alpha}^1 f(x)dx$ also exists and is equal to $0$. The result follows from continuity of the integral.
-I would be very grateful if somebody verified my proof; I'm not quite sure about the very last part. Thank you.
-
-REPLY [3 votes]: Your proof looks OK to me, perhaps a little long. At the end, you certainly can use continuity if you like, but I don't think you need to.
You have $f$ Riemann integrable and $0\le f\le h.$ Thus $0\le\int_0^1f \le \int_0^1 h < \epsilon.$ Since $\epsilon$ is arbitrarily small, $\int_0^1f=0.$
-The upper/lower sums approach to the Riemann integral might be a simpler route to the result. For $n\in \mathbb N,$ let $P_n$ be the uniform partition of $[0,1]$ into subintervals of length $1/n^2.$ We then have
-$$0 = L(P_n,f)\le U(P_n,f) = \sum_{k=1}^{n^2}M_k\cdot \frac{1}{n^2} = \sum_{k=1}^{n}M_k\cdot \frac{1}{n^2} + \sum_{k=n+1}^{n^2}M_k\cdot \frac{1}{n^2}.$$
-The first sum on the right is $\le n\cdot 1 \cdot (1/n^2).$ For the second sum, think about the points $1,1/2,\dots ,1/n.$ Each of these points can lie in at most two subintervals determined by $P_n.$ Thus the second sum is at most $2\cdot n\cdot 1 \cdot (1/n^2).$ Adding these up gives
-$$0 = L(P_n,f)\le U(P_n,f) \le \frac{n+2n}{n^2} =\frac{3}{n}.$$
-Letting $n\to \infty$ shows the difference between upper and lower sums can be made arbitrarily small, which implies $f$ is Riemann integrable on $[0,1].$ Because $U(P_n,f) \to 0$ and $\int_0^1f \le U(P_n,f) $ for any $n,$ we have $\int_0^1f = 0$ as desired.
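-(A numerical check of the bound $U(P_n,f)\le 3/n$ — an addition, not part of the answer above; it computes the upper sum exactly with rational arithmetic.)
-from fractions import Fraction
-
-def upper_sum(n):
-    # U(P_n, f): a subinterval of the uniform partition of [0,1] into n^2
-    # pieces contributes 1/n^2 exactly when it contains some point 1/m.
-    N = n * n
-    count = 0
-    for k in range(N):
-        a, b = Fraction(k, N), Fraction(k + 1, N)
-        m = -(-b.denominator // b.numerator)   # smallest m with 1/m <= b
-        if Fraction(1, m) >= a:
-            count += 1
-    return Fraction(count, N)
-
-for n in (5, 10, 20, 40):
-    print(n, float(upper_sum(n)), 3 / n)       # upper sum vs. the 3/n bound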
<|endoftext|>
-TITLE: Partition of a rectangle into smaller rectangles, with their diagonals forming a loop
-QUESTION [6 upvotes]: The Big and the Small Kingdom are both rectangular islands, each divided into rectangular provinces. In each province there is a road that runs along one of its diagonals. On each island some of these roads make a closed route, which does not go through any point more than once. The picture shows the Little Kingdom, which has six provinces:
-
-The Great Kingdom has an odd number of provinces. How many provinces does the Great Kingdom have at least?
-
-REPLY [5 votes]: If one requires the route to go through all provinces, then $9$ suffices (as shown by the diagram below).
-$\hspace1in$
-Update
-In fact, $9$ is the smallest odd number that does the job. The basic arguments go like this:
-
-Consider the "four" rectangles covering the four corners of the kingdom. They must be distinct from each other. Otherwise, there would be a rectangle covering an entire edge of the kingdom. No matter which diagonal one picks, one of the endpoints of the diagonal will be in a position impossible to extend.
-Over each edge, there can be rectangles between the corner rectangles, e.g. on the left edge of the above diagram, there are two rectangles (magenta and green) between the red rectangle at top and the olive-green rectangle at bottom.
-The key is that in order for the route to be possible to extend, the number of such filling rectangles on each edge needs to be even.
-This means the total number of rectangles on the edges, $N$, is an even number $\ge 4$.
-$N$ cannot be $4$. Otherwise, the four corner rectangles are next to each other. There is only one way to pick the diagonals, but that will lead to
-a closed route to which one cannot add more diagonals.
-Next, let us consider the case $N = 6$ and look at the diagonals of the magenta and green rectangles in the above diagram for inspiration. One will discover that no matter how one places the $6$ rectangles on the edge, there is only one legal way to pick the $6$ diagonals. After you pick the diagonals, the two dangling endpoints are either lying on a horizontal or a vertical line. This means we need at least $3$ more edges (i.e. $9$ edges) to construct a closed route of odd length.
-Finally, if $N \ge 8$, we need at least one more edge to construct a closed route of odd length. Once again, this means we need at least $9$ edges to do the job.
-
-Combining $5.$ and $6.$ with the diagram above, we can conclude that $9$ is the smallest odd number for which one can construct a closed route of odd length.<|endoftext|>
-TITLE: Prove the triangle is equilateral given that a quadrilateral related to its circumcircle is a kite
-QUESTION [5 upvotes]: Let $\triangle ABC$ be a triangle. Let $Γ$ be its circumcircle, and let $I$ be its incenter. Let the internal angle bisectors of $∠A,∠B,∠C$ meet $Γ$ in $A',B',C'$ respectively. Let $B'C'$ intersect $AA'$ at $P$, and $AC$ in $Q$. Let $BB'$ intersect $AC$ in $R$. Suppose the quadrilateral $PIRQ$ is a kite; that is, $IP = IR$ and $QP = QR$. Prove that $\triangle ABC$ is an equilateral triangle.
-
-I can prove that the triangle is isosceles. How do I prove it is equilateral?
-
-REPLY [3 votes]: Hope you can complete your task with the aid of the above diagram.
-Note: (1) $O$ will lie on the line $BIRB'$, making $BB'$ the diameter of the red circle; and (2) $QB' = QA$ provides further assistance in making each green marked angle (irrespective of its shade) $= 30^\circ$.<|endoftext|>
-TITLE: Distinct real N-tuples can be dotted by another to give distinct real numbers
-QUESTION [5 upvotes]: Given $ x_1,x_2, ..., x_n$ distinct real $N$-tuples, show that there exists an $N$-tuple $a$ such that $(x_i \cdot a)^n_{i=1}$ are all distinct, where $\cdot$ is the real dot product.
-Thoughts: I tried proving the contrapositive using the pigeonhole principle and the non-degeneracy of the real dot product. However I was only able to show (wlog) that $x_1\cdot a=x_2\cdot a$ for infinitely many $a$. But this doesn't give $x_1=x_2$.
-A comment on background:
-Let $H$ be a diagonalisable operator on a real/complex vector space $V$. Then we know that $V=\oplus V_i $, where $V_i$ is the eigenspace corresponding to distinct eigenvalues $\lambda_i$ of $H$. That the sum is direct relies on the fact that the $\lambda_i$'s are distinct.
-In Lie algebras we often consider the eigen-decomposition of a space acted upon by not just one diagonalisable operator $H$ but by a vector space of mutually commuting diagonalisable operators (the vector space we have in mind is the Cartan subalgebra). In this situation, the analogous eigenspaces are known as the weight spaces, and to show their sum is direct, it is sufficient to prove what I have asked.
-
-REPLY [3 votes]: Given two distinct $x_i, x_j$, the set $B_{ij}$ of tuples $b$ such that $b\cdot x_i=b\cdot x_j$ is a hyperplane that goes through the origin. There are at most $\frac{n(n-1)}{2}$ distinct such $B_{ij}$, which means that the set $$A=\Bbb R^N\setminus\bigcup_{1\leq i<j\leq n}B_{ij}$$ is non-empty, since $\Bbb R^N$ is never a finite union of proper hyperplanes. Any $a\in A$ then gives pairwise distinct dot products $x_i\cdot a$.<|endoftext|>
-TITLE: Is it possible to compute factorials by converting to matrix multiplications?
-QUESTION [6 upvotes]: The $n$-th term of the Fibonacci sequence can be computed by a nice trick: converting the recurrence relation into matrix form. Then we compute $M^n$ in $O(\log n)$ steps using exponentiation by squaring.
-Would it be possible to use such a trick to compute factorials? If not, can it be proved? I figured out how to compute any polynomial in $n$ using this approach, but for factorials I wasn't able to express the factorial's recurrence relation as a linear transformation.
-
-REPLY [3 votes]: Well, this is probably not the answer you are looking for, but it might be interesting.
-Let $D_k$ be the $k \times k$ matrix with zeros everywhere except on the superdiagonal, where it has the values $1, 2, \dots, k - 1$.
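-(A quick numerical illustration of the claim that follows — an addition, not the answerer's, using numpy:)
-import numpy as np
-
-k = 5
-D = np.diag(np.arange(1, k), 1)      # zeros except 1, 2, ..., k-1 on the superdiagonal
-e_k = np.zeros(k, dtype=int)
-e_k[-1] = 1
-print(np.linalg.matrix_power(D, k - 1) @ e_k)   # [24 0 0 0 0]: the first entry is (k-1)!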
Then the factorial $(k-1)!$ is equal to the first element in the vector
-$$D_k^{k-1} e_k$$
-where $e_k$ is the unit vector with a one at position $k$. Equivalently, the element at position $(1,k)$ in $D_k^{k-1}$ is $(k-1)!$.
-Example
-$$D_5 = \left(
-\begin{array}{ccccc}
- 0 & 1 & 0 & 0 & 0 \\
- 0 & 0 & 2 & 0 & 0 \\
- 0 & 0 & 0 & 3 & 0 \\
- 0 & 0 & 0 & 0 & 4 \\
- 0 & 0 & 0 & 0 & 0
-\end{array}
-\right)$$
-$$D_5^4 = \left(
-\begin{array}{ccccc}
- 0 & 0 & 0 & 0 & 24 \\
- 0 & 0 & 0 & 0 & 0 \\
- 0 & 0 & 0 & 0 & 0 \\
- 0 & 0 & 0 & 0 & 0 \\
- 0 & 0 & 0 & 0 & 0
-\end{array}
-\right)$$
-Explanation
-The matrix $D_k$ acts on the space of polynomials of degree $\leq k-1$ as differentiation. Then one can use the following characterization of the factorial:
-$$n!= \frac{d^n}{dx^n} x^n$$
-and get the result above.
-Observations
-The matrix $D_k$ can also be used to generate an upper triangular Pascal matrix $U_k$ through $U_k = e^{D_k}$, using the matrix exponential.
-A closer look at $D_k$
-When you put $D_k$ in Jordan normal form, the factorials appear again. For example, with $D_5$ above you get $D_5 = SJS^{-1}$:
-$$D_5 = \underbrace{\left(
-\begin{array}{ccccc}
- 1 & 0 & 0 & 0 & 0 \\
- 0 & 1 & 0 & 0 & 0 \\
- 0 & 0 & \frac{1}{2} & 0 & 0 \\
- 0 & 0 & 0 & \frac{1}{6} & 0 \\
- 0 & 0 & 0 & 0 & \frac{1}{24}
-\end{array}
-\right)}_{=S}
-\underbrace{\left(
-\begin{array}{ccccc}
- 0 & 1 & 0 & 0 & 0 \\
- 0 & 0 & 1 & 0 & 0 \\
- 0 & 0 & 0 & 1 & 0 \\
- 0 & 0 & 0 & 0 & 1 \\
- 0 & 0 & 0 & 0 & 0
-\end{array}
-\right)}_{=J}
-\underbrace{
-\left(
-\begin{array}{ccccc}
- 1 & 0 & 0 & 0 & 0 \\
- 0 & 1 & 0 & 0 & 0 \\
- 0 & 0 & 2 & 0 & 0 \\
- 0 & 0 & 0 & 6 & 0 \\
- 0 & 0 & 0 & 0 & 24
-\end{array}
-\right)}_{=S^{-1}}
-$$
-where $J$ is one big Jordan block (which is not surprising since $D_k$ is nilpotent), and $S^{-1}$ is a diagonal matrix with factorials on the diagonal!
-Investigating this a little bit further reveals that this is not so strange. $D_k$ has only one eigenvalue, namely 0, with algebraic multiplicity $k$ and geometric multiplicity 1. The eigenvector belonging to this eigenvalue is $e_1$.
-When we put something in Jordan normal form, we use generalized eigenvectors, which are vectors $v$ that satisfy $(D_k - \lambda I)^n v = 0$ for some $n$. In our case $\lambda = 0$, so we just look at $D_k^n v = 0$. This gives us that $e_k$ is a generalized eigenvector. Remember our original formulation?
-$$D_k^{k-1}e_k = (k-1)! e_1$$
-which explains why we have factorials in $S^{-1}$.<|endoftext|>
-TITLE: Infinitude of primes in 10 consecutive integers
-QUESTION [9 upvotes]: Do there exist infinitely many sets of 10 consecutive positive integers where exactly one is a prime?
-By Dirichlet's Theorem, if $a$ and $d$ are relatively prime, then there are infinitely many primes in the arithmetic sequence $a+d, a+2d, a+3d, \cdots$.
-Let $n+1, n+2, n+3, \cdots, n+10$ be ten consecutive integers; then I want to construct 9 composite integers (say $n+1$ up to $n+9$) and a prime $n+10$, but then I have no idea how to proceed.
-
-REPLY [7 votes]: A totally different proof:
-Assume there is an $M > 0$ such that there is no prime $p > M$ with $p+1, p+2, \ldots, p+9$ all composite. There are two possibilities:
-Either there is no prime $> M$ at all. That's false because there is an infinite number of primes.
-Or there is a prime $p > M$, then another prime $p_1$ among $p+1, \ldots, p+9$, then another prime $p_2$ among $p_1+1, \ldots, p_1+9$ and so on. Which means $\liminf_{n\to\infty} \pi(n)/n \geq 1/9$, but we know that the limit is indeed $0$.
-This also works for any length of the interval.<|endoftext|>
-TITLE: Exists homeomorphism which carries each fiber isomorphically to itself, composition?
-QUESTION [9 upvotes]: Let $\mu$ and $\mu'$ be two different Euclidean metrics on the same vector bundle $\xi$. How do I see that there exists a homeomorphism $f: E(\xi) \to E(\xi)$ which carries each fiber isomorphically onto itself, so that the composition $\mu \circ f: E(\xi) \to \mathbb{R}$ is equal to $\mu'$?
-Thoughts so far. Every positive definite matrix $A$ can be expressed uniquely as the square of a positive definite matrix $\sqrt{A}$. The power series expansion$$\sqrt{tI + X} = \sqrt{t}\left( I + {1\over{2t}}X - {1\over{8t^2}}X^2 + \dots\right),$$is valid provided that the characteristic roots of $tI + X = A$ lie between $0$ and $2t$. This shows that the function $A \mapsto \sqrt{A}$ is smooth. But I am not sure how to complete the argument. Can anyone help?
-Notation. Let $B$ denote a fixed topological space, which will be called the base space. A real vector bundle $\xi$ over $B$ consists of the following:
-
-a topological space $E = E(\xi)$ called the total space,
-a (continuous) map $\pi: E \to B$ called the projection map, and
-for each $b \in B$ the structure of a vector space over the real numbers in the set $\pi^{-1}(b)$.
-
-A Euclidean vector bundle is a real vector bundle $\xi$ together with a continuous function$$\mu: E(\xi) \to \mathbb{R}$$such that the restriction of $\mu$ to each fiber of $\xi$ is positive definite and quadratic. The function $\mu$ itself will be called a Euclidean metric on the vector bundle $\xi$.
-
-REPLY [4 votes]: There are several questions asking about the same problem on stackexchange:
-Different Euclidean metrics on a vector bundle,
-Isometry of two Euclidean structures on the same vector bundle,
-but I haven't seen a complete correct answer. This answer https://math.stackexchange.com/a/1209444/251687 is correct but lacks some details; the others I saw were wrong.
-The question is Problem 2-E in Milnor's book Characteristic Classes, and I think the author's hint is crucial, which I list as the following lemma.
-Lemma. For any positive definite matrix $A$, there exists a unique positive definite matrix $B$ such that $B^2=A$. If we write $\sqrt{A}:=B$, then the map $A\mapsto\sqrt{A}$ is smooth.
-Equivalently, for a positive definite matrix $A$, $\sqrt{A}$ is the unique positive definite matrix such that $\sqrt{A}^t\sqrt{A}=A$.
-Now we can give two proofs of the problem. The first one is in terms of local trivializations; the second one is formulated in an intrinsic manner.
-Proof. (Define isometries locally and check that they agree on overlaps.)
-By choosing local $\mu$-orthonormal basis vectors, we can cover the base manifold $M$ by $\{U_\alpha\}$ such that there are local isometric trivializations $(E|_{U_\alpha},\mu)\xrightarrow{\phi_\alpha}(U_\alpha\times\mathbb{R}^n,\mu_{\mathbb{R}^n})$, where $\mu_{\mathbb{R}^n}$ is the standard metric. Let $g_{\alpha\beta}:U_\alpha\cap U_\beta\to O(n)$ be the transition maps. In each local trivialization, the inner product $((\phi_\alpha^{-1})^*\mu^\prime)|_{U_\alpha\times\mathbb{R}^n}=:\mu^\prime_{\mathbb{R}^n_\alpha}$ is given by a positive definite matrix-valued function $A_\alpha$, and $A_\alpha=g_{\alpha\beta}A_\beta g_{\alpha\beta}^{-1}$.
Now define $\psi_\alpha:U_\alpha\times\mathbb{R}^n\to U_\alpha\times\mathbb{R}^n$ by $\psi_\alpha(x,v)=(x,\sqrt{A_\alpha(x)}v)$; then for any two vectors $v,w\in \mathbb{R}^n=\phi_\alpha(E_x)$, $(\psi_\alpha^*\mu_{\mathbb{R}^n})(v,w)=v^t\sqrt{A_\alpha}^t\sqrt{A_\alpha}w=v^tA_\alpha w=\mu^\prime_{\mathbb{R}^n_\alpha}(v,w)$, and hence $\mu^\prime|_{E|_{U_\alpha}}=(\phi_\alpha^{-1}\psi_\alpha\phi_\alpha)^*(\mu|_{E|_{U_\alpha}})$. So $\varphi_\alpha:=\phi_\alpha^{-1}\psi_\alpha\phi_\alpha$ gives an isometry $(E|_{U_\alpha},\mu^\prime)\to(E|_{U_\alpha},\mu).$ Next we want to check that $\varphi_\alpha=\varphi_\beta$ on overlaps. (Here the notation is different from what I meant in my comment on Mike's answer.) This means $\phi_\alpha^{-1}\psi_\alpha\phi_\alpha=\phi_\beta^{-1}\psi_\beta\phi_\beta$, i.e. $\psi_\alpha=g_{\alpha\beta}\psi_\beta g_{\alpha\beta}^{-1}$. By the definition of $\psi_\alpha$, what we want is $\sqrt{A_\alpha}=g_{\alpha\beta}\sqrt{A_\beta} g_{\alpha\beta}^{-1}$. But $\sqrt{A_\alpha}$ and $g_{\alpha\beta}\sqrt{A_\beta} g_{\alpha\beta}^{-1}$ are both positive definite matrices whose square equals $A_\alpha$, so by uniqueness, we are done.
-In the same spirit, we can give a more intrinsic proof.
-Alternative Proof. (More intrinsic.)
-We can rephrase the lemma as: Given any finite dimensional real vector space $V$ and two inner products $\mu,\mu^\prime$ on $V$, there exists a unique $\varphi\in GL(V)$, depending smoothly on $\mu,\mu^\prime$, such that
-
-$\varphi$ is self-adjoint w.r.t. $\mu$;
-$\varphi:(V,\mu^\prime)\to(V,\mu)$ is an isometry.
-
-So now for our vector bundle $E$, one can define a unique isometry $\varphi_x:(E_x,\mu^\prime_x)\to(E_x,\mu_x)$ on each fiber $E_x$ s.t. $\varphi_x$ is self-adjoint w.r.t. $\mu_x$. This gives a global smooth isometry $\varphi:(E,\mu^\prime)\to(E,\mu)$.
-Remark. The same conclusion holds true in the setting of Hermitian metrics on complex vector bundles. In other words, given a vector bundle $E\to M$, the general linear gauge group $GL(E)$ acts transitively on the space of Hermitian/Euclidean metrics on $E$. So one can study the orbit space of such an action, which is $GL(E)/U(E,h_0)$ where $h_0$ is a prescribed metric. This is one way of thinking about Hermitian-Yang-Mills connections on stable holomorphic bundles.<|endoftext|>
-TITLE: Proving $\lim_{n \to \infty} \frac {\log{p_n}} {\log n} = 1$
-QUESTION [5 upvotes]: How do I show:
-$$\lim_{n \to \infty} \frac {\log{p_n}} {\log n} = 1$$
-where $p_n$ is the $n$th prime number, without using the Prime Number Theorem?
-Some context: The reason I cannot use the PNT (or at least the form one might try to use) is because this is actually what I am trying to prove, or rather a certain form of the prime number theorem. The PNT states that $\pi(n) \sim \frac n {\log n}$, i.e.
-$$\lim_{n \to \infty} \frac {\pi(n) \log n } n = 1$$
-where $\pi(n)$ is the prime counting function.
-Substituting $n \to p_n$ one has:
-$$\lim_{n \to \infty}\frac {n \log p_n} {p_n} = 1$$
-Now I would like to show that $p_n \sim n \log n$, i.e.
-$$\lim_{n \to \infty} \frac {n \log n } {p_n} = 1$$
-which requires the proof I am asking for.
-
-REPLY [3 votes]: It suffices to prove the PNT up to a multiplicative constant. This is much easier than proving the PNT and was in fact done by Chebyshev.
-Actually something slightly weaker suffices. A good enough upper bound can be extracted from the proof of Bertrand's postulate (not Bertrand's postulate, really the proof), as explained here; you get $p_n \le C n \log n$ for some constant $C$. And of course $p_n \ge n$ suffices for the lower bound.
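-(A numerical aside — an addition, not part of the answer: a quick sieve shows how slowly $\log p_n/\log n$ creeps toward $1$.)
-import math
-
-limit = 2_000_000                   # enough primes for n up to 10^5
-sieve = bytearray([1]) * limit      # simple sieve of Eratosthenes
-sieve[0:2] = b'\x00\x00'
-for i in range(2, int(limit ** 0.5) + 1):
-    if sieve[i]:
-        sieve[i * i::i] = bytearray(len(range(i * i, limit, i)))
-primes = [i for i, flag in enumerate(sieve) if flag]
-
-for n in (10, 100, 1000, 10_000, 100_000):
-    p = primes[n - 1]               # p_n
-    print(n, p, math.log(p) / math.log(n))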
<|endoftext|>
-TITLE: No. of possible dense subsets of a metric space
-QUESTION [17 upvotes]: Let $X$ be a metric space; then which of the following is possible?
-1) $X$ has exactly $3$ dense subsets
-2) $X$ has exactly $4$ dense subsets
-3) $X$ has exactly $5$ dense subsets
-4) $X$ has exactly $6$ dense subsets
-I know that if $X$ has a proper dense subset then for some $a \in X$ we should have $X \setminus \{a\}$ dense in $X$, and then $\{a\}$ is not open in $X$; but I can't relate this to the number of dense subsets, except that if $X$ has only finitely many dense subsets then the topology of $X$ cannot be discrete. Please help. Thanks in advance.
-
-REPLY [22 votes]: It must be that $X$ is almost discrete. Because if $p$ is not an isolated point of $X$, $C(p) = X \setminus \{p\}$ is open and dense. The intersection of finitely many open dense subsets is open and dense as well.
-If $X$ has one non-isolated point $p$, then $X$ and $C(p)$ are the only dense subsets (as every dense subset must contain all isolated points, and these form $C(p)$). So this does not qualify.
-So if $X$ has two non-isolated points $p \neq q$, then $X$, $C(p)$, $C(q)$ and $C(p) \cap C(q)$ are the only dense sets. So 4 of them.
-If $X$ has 3 non-isolated points $p,q,r$, then every dense set contains $X\setminus \{p,q,r\}$ (all isolated points) and we can add any subset of $\{p,q,r\}$ to get different dense subsets, so we have 8 of them.
-So 4 is the only one that can occur, among your list. E.g. for the metric space
-$X = \{0\} \cup \{\frac{1}{n}: n = 1,2,3,\ldots\} \cup \{2\} \cup \{2 + \frac{1}{n}: n =1,2,3.\ldots\}$ as a subspace of the reals.<|endoftext|>
-TITLE: $A \in M_3(\mathbb Z)$ be such that $\det(A)=1$; then what is the maximum possible number of entries of $A$ that are even?
-QUESTION [7 upvotes]: Let $A \in M_3(\mathbb Z)$ be such that $\det(A)=1$; then what is the maximum possible number of entries of $A$ that are even?
-
-REPLY [11 votes]: Clearly $I_3$ is an example where we can have $6$ even entries with $\det(A)=1$. If there are $7$ or more even entries then there must be at least one row having all entries even; expand the $\det$ along that row and you'll get an even $\det(A)$, which is a contradiction. Thus the maximum number of even entries possible is $6.$<|endoftext|>
-TITLE: $S$ be a collection of subsets of $\{1,...,100\}$; any two sets in $S$ have non-empty intersection; what is the maximum possible value of $|S|$?
-QUESTION [10 upvotes]: Let $S$ be a collection of subsets of $\{1,2,...,100\}$ such that any two sets in $S$ have non-empty intersection. Then what is the maximum possible cardinality of $S$?
-
-REPLY [8 votes]: Consider the collection $S_1$ of all subsets that contain the number $1.$ It satisfies the condition and its cardinality is $2^{99}.$
-On the other hand let $S$ be such a collection and consider the partition into two subcollections $S_y$ and $S_n$ of sets according to whether they do, or do not, contain the number $1.$
-$S_n$ has at most $2^{99}$ elements because those elements are subsets of $\{2,\ldots,100\}.$
-But $S_y$ cannot contain the complement of any set in $S_n$, which rules out $|S_n|$ of the $2^{99}$ sets containing $1$, so $|S_y| \le 2^{99}-|S_n|.$
-Therefore $S=S_y\cup S_n$ has at most $2^{99}$ elements.
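-(A brute-force check of the $2^{n-1}$ bound for tiny ground sets — an addition, not part of the answer; the enumeration is $O(2^{2^n})$, so it is only feasible for $n\le 4$.)
-from itertools import combinations
-
-def max_intersecting_family(n):
-    subsets = range(1 << n)                    # subsets of {1,...,n} as bitmasks
-    best = 0
-    for coll in range(1 << (1 << n)):          # every collection of subsets
-        members = [s for s in subsets if coll >> s & 1]
-        if all(a & b for a, b in combinations(members, 2)):
-            best = max(best, len(members))
-    return best
-
-for n in range(1, 5):
-    print(n, max_intersecting_family(n), 2 ** (n - 1))   # the two agree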
<|endoftext|>
-TITLE: Entropy of matrix vector product
-QUESTION [6 upvotes]: Consider a random $n$ by $n$ matrix $A$ whose entries are chosen from $\{0,1\}$ and a random $n$-dimensional vector $x$ whose entries are also chosen from $\{0,1\}$. Assume $n$ is large.
-
-What is the (base 2) Shannon entropy of $Ax$? That is, can we give a large-$n$ approximation for $H(Ax)$?
-
-It feels like $H(Ax)$ should be at least $n$, as that is the entropy of $x$ and $A$ is very likely to be non-singular. We also know $H(Ax) \leq n \log_2{n}$, as we can encode $Ax$ in $n \log_2{n}$ bits (the entries of $Ax$ are no larger than $n$).
-Is the entropy of the form $n$ or of the form $n\log_2{n}$ or something in between?
-
-REPLY [2 votes]: Let $A$ have size $m\times n$ (more general), $y=A x$, $y=(y_1, y_2, \cdots y_m)$. Let $s=\sum_{i=1}^n x_i$.
-Then $$H(y)= H(y \mid s) + H(s) - H(s \mid y) \tag{1}$$
-and we can bound:
-$$ H(y \mid s) \le H(y) \le H(y \mid s) + H(s) \tag{2} $$
-To compute $H(y \mid s)$, note that while $y=(y_1, y_2, \cdots y_m)$ are not independent, they are independent when conditioned on $s$. Hence
-$$H(y \mid s) = m \, H(y_1 \mid s)$$
-Further, $y_1 \mid s \sim B(s,1/2)$ (Binomial), and $s$ is also Binomial, $B(n,1/2)$. Hence
-$$H(y \mid s) = m \sum_{s=0}^n \frac{1}{2^n}{n \choose s} h_B(s) \tag{3}$$
-$$H(s) = h_B(n) \tag{4}$$
-where $$h_B(t)= - \frac{1}{2^t} \sum_{k=0}^t {t\choose k} \log\left(\frac{1}{2^t}{t \choose k}\right) = t - \frac{1}{2^t} \sum_{k=0}^t {t \choose k} \log\left({t \choose k}\right) \tag{5}$$ is the entropy of a Binomial of size $t$ and $p=1/2$.
-(all logs are in base $2$ here).
-Expressions $(3)$ and $(4)$, together with $(2)$, provide exact bounds. We can obtain an approximation by taking the central term in $(3)$ and using the asymptotic $h_B(t) \approx \frac{1}{2} \log(t \, \pi e /2)$. We then get
-$$H(y|s) \approx \frac{m}{2} \log(n \pi e /4) \tag{6}$$
-$$H(s) \approx \frac{1}{2} \log(n \pi e /2) \tag{7}$$
-This strongly suggests that, when $m=n$, $H(y)$ grows as $\frac{n}{2} \log(n)$.
-The graph shows both bounds and the approximation $(6)$ for the lower bound.
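-(An exact check of the bounds $(2)$–$(4)$ for tiny $n,m$ by direct enumeration — an addition, not part of the answer:)
-from math import comb, log2, prod
-from itertools import product
-
-def h_B(t):                        # eq. (5): entropy of Binomial(t, 1/2)
-    return t - sum(comb(t, k) * log2(comb(t, k)) for k in range(t + 1)) / 2**t
-
-def cond(k, s):                    # P(y_i = k | s), with y_i | s ~ Binomial(s, 1/2)
-    return comb(s, k) / 2**s if k <= s else 0.0
-
-n, m = 4, 3
-Hys = m * sum(comb(n, s) * h_B(s) for s in range(n + 1)) / 2**n    # eq. (3)
-Hs = h_B(n)                                                        # eq. (4)
-
-H = 0.0                            # exact H(y), enumerating y in {0..n}^m
-for y in product(range(n + 1), repeat=m):
-    p = sum(comb(n, s) / 2**n * prod(cond(k, s) for k in y) for s in range(n + 1))
-    if p:
-        H -= p * log2(p)
-
-print(Hys, H, Hys + Hs)            # should satisfy Hys <= H <= Hys + Hs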
<|endoftext|>
-TITLE: Minimum guard problem
-QUESTION [6 upvotes]: Placing 1x2 dominoes on an 8x8 chess board, in a non-overlapping way, what is the lowest possible number of dominoes that lock (guard) the board, so that no further dominoes can be placed on the board? The aim here is to cover as little as possible and save the maximum number of unused dominoes.
-Thank you
-
-REPLY [4 votes]: Regarding the original question, what is the lowest possible number of dominoes to lock the board (assuming that dominoes are actually $2\times1$, not $2\times2$):
-The lowest possible number is 22. Here is an example of an optimal placing:
-.11.22.3
-44.55.63
-.77.886.
-99.AA.BB
-.CC.DD.E
-FF.GG.HE
-.II.JJH.
-KK.LL.MM
-
-Regarding the question of how to get this number:
-I got that answer using dynamic programming (Python code).
-Suppose we have a board partly filled by dominoes and "emptiness markers", see fig. 1 in the image below.
-The generalized problem is: what is the lowest possible number of dominoes that can be placed on the rest of the board to lock it? The answer depends only on the last two rows (the green ones in fig. 2; note that it doesn't matter how the occupied cells pair into dominoes, it only matters whether a cell is occupied, has an "emptiness marker" in it, or neither; and the grey cells don't matter at all).
-So, if we have seen the same two green rows before, we use the known result for this problem. Otherwise, we recursively try (fig. 3) to put an "emptiness marker" (impossible in this particular case because it would create two adjacent empty cells), a horizontal domino, and a vertical domino in the first free cell (moving from top to bottom and from left to right in every row), and take the minimal number over these three cases; the result is stored in the cache.
-Finally, we apply this method to the original problem, when the board has nothing in it. The program checks all intermediate configurations and gives the answer. By the way, in this particular case of the $8\times8$ board (perhaps in other cases, too) the optimal placing given above is the very first locking placing obtained by the sequential addition of the emptiness markers, the horizontal dominoes, and the vertical dominoes, as described above. So it could be obtained by a simple greedy algorithm (but the optimality of that placing would require proof anyway, of course).<|endoftext|>
-TITLE: Is the exponential map to the indefinite special orthogonal groups $SO^+(p,q)$ surjective?
-QUESTION [8 upvotes]: Is the exponential map to the identity component of the special indefinite orthogonal groups
-$$ \mathrm{exp} \colon \mathfrak{so}(p,q) \to SO^+(p,q)$$
-surjective?
-
-REPLY [7 votes]: For the special orthogonal group $SO(d)$ and the restricted Lorentz group $SO^+(d,1)\cong SO^+(1,d)$, the exponential maps
-$$\exp: so(d)~~\longrightarrow~~ SO(d), \qquad\exp: so(d,1)~~\longrightarrow~~ SO^+(d,1)\tag{1} $$ are surjective. See also this related Phys.SE post and links therein.
-Simplest counterexample. The exponential map
-$$\exp: sl(2,\mathbb{R})\oplus sl(2,\mathbb{R})~~\longrightarrow~~ SL(2,\mathbb{R})\times SL(2,\mathbb{R})\tag{2} $$
-for the split group
-$$SO^+(2,2)~\cong~[SL(2,\mathbb{R})\times SL(2,\mathbb{R})]/\mathbb{Z}_2 \tag{3}$$
-is not surjective. Here the $\mathbb{Z}_2$-action identifies
-$$ (g_L,g_R)~~\sim~~(-g_L,-g_R), \qquad g_L,g_R~\in~SL(2,\mathbb{R})~:=~\{g\in {\rm Mat}_{2\times 2}(\mathbb{R}) \mid \det g~=~1\}.\qquad\tag{4}$$
-One may show that a pair $$(g_L,g_R)~\in~SL(2,\mathbb{R})\times SL(2,\mathbb{R})\tag{5}$$ with $${\rm tr}(g_L)<-2\quad\text{and}\quad{\rm tr}(g_R)>2\tag{6}$$ (or vice-versa $L\leftrightarrow R$) is not in the image of the exponential map, even after $\Bbb{Z}_2$-modding.
-More generally, one may prove for the indefinite orthogonal groups $SO^+(p,q)$, where $p,q\geq 2$, that the exponential map
-$$\exp: so(p,q)~~\longrightarrow~~ SO^+(p,q)\tag{7} $$ is not surjective, cf. e.g. Ref. 1.
-
-References:
-
-D.Z. Dokovic & K.H. Hofmann, Journal of Lie Theory 7 (1997) 171. The pdf file is available here.<|endoftext|>
-TITLE: Constructing a vector bundle built out of kernels of the Jacobian?
-QUESTION [5 upvotes]: A smooth map $f: M \to N$ between smooth manifolds is a submersion if each Jacobian$$Df_x: DM_x \to DN_{f(x)}$$is surjective. How do I construct a vector bundle $\kappa_f$ built out of the kernels of the $Df_x$?
-
-REPLY [3 votes]: Clearly the subset of $TM$ defined by this has a vector space over each point and a projection map to $M$; what you want to prove is local triviality of the projection. But this follows from the implicit function theorem: in a chart $U \subset M$, the map $f$ is of the form of a standard projection $\Bbb R^n \to \Bbb R^k$.
Then the kernel of the Jacobian here is $\Bbb R^n \times \Bbb R^{n-k} \subset \Bbb R^n \times \Bbb R^n = T\Bbb R^n$, which is trivial; hence the vector bundle is indeed locally trivial, as desired.
-(Actually, if you put a Riemannian metric on $M$, you can construct an isomorphism $\kappa_f \oplus f^*TN \cong TM$.)<|endoftext|>
-TITLE: Discuss the convergence of $\int_0^\infty x \sin e^x \, dx$
-QUESTION [7 upvotes]: $$\int_0^\infty x \sin e^x \, dx$$
-I have tried applying the Dirichlet test, Comparison Principle, integration by parts and substitution, but all have failed. None of these prove that the integral is divergent though, so I'm not really sure how to show that this converges/diverges.
-My work:
-Dirichlet: Fails, because neither $f(x)=x$ nor $g(x)=\sin e^x$ goes to zero.
-Comparison: Fails. $\sin e^x \le1$, therefore $\int_0^\infty x \, dx\ge \int_0^\infty x \sin e^x \, dx$. However, $\int_0^\infty x \, dx$ does not converge, so this idea is unhelpful.
-IBP: $\int_a^b FG'=(F(b)G(b)-F(a)G(a))-\int_a^bGF'$ $$F=x,\quad F'= dx,\quad G' = \sin e^x, \quad G =\text{?}$$
-Substitution: $u(x)=e^x$, $du=e^x\,dx$, therefore: $$\int_0^\infty x \sin e^x \, dx=\int_0^{\infty} \ln {u(x)} \sin u(x) \, dx$$ From here, you can use IBP, resulting in: $$F=\ln(u), \quad F'=\frac 1x, \quad G'= \sin u(x), \quad G = -\cos u(x)\cdot u'(x)$$ $$-\ln u(x) \cos u(x) u'(x)|_0^\infty-\int_0^\infty \frac{-\cos u(x) u'(x)}{u(x)}$$ But I feel like this integral is far too complicated for the scope of the question. Additionally, $\ln\infty$ would go to infinity anyway, so I feel like that is not an acceptable way to solve the problem. Graphically, my calculator says that the integral should be equal to 0.411229, a number which appears to have no numerical significance. Is there any other way to integrate this function?
-
-REPLY [2 votes]: Let $y=e^x$. Then
-$$
-\int_0^\infty x \sin e^x \: dx=\int_1^\infty \frac{\ln y}{y}\sin y\:dy=\int_{1}^{\pi}\frac{\ln y}{y}\sin y\:dy+\sum_{n=1}^{\infty}\int_{n\pi}^{(n+1)\pi}\frac{\ln y}{y}\sin y\:dy
-$$
-Since $\frac{\ln y}{y}\to0$ as $y\to\infty$ and is monotonically decreasing for $y>e$, as well as $|\int_{n\pi}^{(n+1)\pi}\sin y\:dy|=2$,
-$$
-\sum_{n=1}^{\infty}\int_{n\pi}^{(n+1)\pi}\frac{\ln y}{y}\sin y\:dy=\sum_{n=1}^{\infty}(-1)^na_n
-$$
-is an alternating series with
-$$
-2\frac{\ln (n+1)}{n+1}\leqslant a_n\leqslant 2\frac{\ln n}{n}\quad\text{and }\quad a_n=O\left(\frac{\ln n}{n}\right)\to0
-$$ as $n\to\infty$. So it converges by the Leibniz criterion. It is not absolutely convergent, since
-$$
-\frac{\ln n}{n}\geqslant\frac1{n}
-$$
-for $n\geqslant 3$, so $\sum a_n$ diverges by comparison with the harmonic series.
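-(A numerical cross-check — an addition, not part of the answer: mpmath's quadrature routine for oscillatory integrands, applied to the substituted form, should reproduce the OP's calculator value of about 0.411229.)
-from mpmath import mp, quadosc, log, sin, pi, inf
-
-mp.dps = 15
-# after y = e^x the integral becomes  int_1^inf (ln y / y) sin y  dy
-val = quadosc(lambda y: log(y) / y * sin(y), [1, inf], period=2 * pi)
-print(val)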
<|endoftext|>
-TITLE: Every bridgeless planar 3-regular graph is 3-edge colorable
-QUESTION [5 upvotes]: How do I prove the implication
-
-Every bridgeless planar 3-regular graph $G$ is 3-edge colorable.
-
-I know:
-
-From Vizing's theorem, I know that I can colour the edges of $G$ with 3 or 4 colours.
-I have a hint to use the fact that we have an embedding in the plane (as a corollary of the 4CT).
-Induction is clearly not the right way, since $G-v$ does not have to be 2-connected.
-If $G$ is 3-edge colorable, all 3 edge colours must be used at every vertex.
-
-What I do not know:
-
-Obviously, a full solution. :)
-How I should use the assumption of 2-connectivity (this seems to me to be essential).
-
-What I think could work out:
-
-Prove that $G$ is Hamiltonian; then my exercise is a simple corollary.
-
-Any help?
-
-REPLY [4 votes]: Let $G$ be a bridgeless planar $3$-regular graph and suppose it is given a plane embedding. Then by the four-colour theorem, there exists a proper $4$-face-colouring of $G$, say using $(1,2,3,4)$.
-Now notice that since $G$ is bridgeless, all edges see two distinct faces, which have different colours.
-Colour the edges which see faces coloured $1,2$ or $3,4$ blue. Colour the edges which see faces coloured $1,3$ or $2,4$ red. Colour the edges which see faces coloured $1,4$ or $2,3$ green. Note this colours all edges in our graph, as every edge sees two distinct faces.
-Since $G$ is $3$-regular, this colouring is a $3$-edge-colouring of $G$. If two adjacent edges were given the same colour, then either they both see the same pair of face colours, or they see four different face colours between them.
-In the case where both see the same pair of face colours, this implies $G$ has a vertex of degree $2$. In the other case, where the edges see all different face colours, this implies that $G$ has a vertex of degree greater than $3$.<|endoftext|>
-TITLE: Evaluation of $\sum_{n=0}^{\infty}\frac{1}{(n^4+n^2+1)n!}$
-QUESTION [13 upvotes]: I am wondering how to evaluate the following sum:
-$$\sum_{n=0}^{\infty}\frac{1}{(n^4+n^2+1)n!}.$$
-In Wolfram Alpha I find it is equal to $e/2$.
-I have used the residue method but I didn't succeed, and using the digamma function is still hard for me; my problem is treating the $n!$.
-
-REPLY [20 votes]: We can rewrite the prefactor as
-$$\frac{1}{n^4+n^2+1}=\frac{1}{(n^2-n+1)(n^2+n+1)}=a_{n+1}-na_n+\frac12,$$
-with $\displaystyle a_n=\frac{n}{2(n^2-n+1)}$. Now it is easy to see that the $a_n$'s give a sum that telescopes to $0$, so that we are left with
-$$\frac12\sum_{n=0}^{\infty}\frac{1}{n!}=\frac e2.$$
-
-Added on request of OP:
-$$\sum_{n=0}^{\infty}\frac{-na_n+a_{n+1}}{n!}=-\frac{0\cdot a_0}{0!}+{\color{red}{\frac{a_{1}}{0!}-\frac{1\cdot a_1}{1!}}}+{\color{blue}{\frac{a_2}{1!}-\frac{2\cdot a_2}{2!}}}+{\color{magenta}{\frac{a_3}{2!}-\frac{3\cdot a_3}{3!}}}+\frac{a_4}{3!}+\ldots=0.$$
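-(A quick numerical confirmation — an addition, not part of the answer: the partial sums do approach $e/2$.)
-from math import e, factorial
-
-s = sum(1 / ((n**4 + n**2 + 1) * factorial(n)) for n in range(50))
-print(s, e / 2)    # both print 1.3591409...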
<|endoftext|>
-TITLE: For which values of $\alpha$ and $\beta$ does the integral $\int\limits_2^{\infty}\frac{dx}{x^{\alpha}\ln^{\beta}x}$ converge?
-QUESTION [10 upvotes]: I'm trying to find out for which values of $\alpha$ and $\beta$ the integral $\int\limits_2^{\infty}\frac{dx}{x^{\alpha}\ln^{\beta}x}$ converges. I know that when $\alpha=1$ then $\beta$ must be greater than $1$. I tried to use integration by parts but it didn't work, so I would appreciate some hints. Thanks in advance.
-
-REPLY [5 votes]: Convergence:
-(1) $\alpha>1$ and $\beta\in \mathbb R$;
-(2) $\alpha=1$ and $\beta>1$.
-All other cases are divergent.
-You already know case (2), so let me explain case (1). The key point is to see that $x^{\alpha}$ is always the dominant term.
-If $\alpha>1$, then $\frac{\alpha+1}{2}>1$ and $\frac{\alpha-1}{2}>0$. So we have $$\frac{1}{x^{\alpha}\ln^{\beta}x}=\frac{1}{x^{\frac{\alpha+1}{2}}}\frac{1}{x^{\frac{\alpha-1}{2}}\ln^{\beta}x}.$$
-But notice that $\int_2^{\infty}\frac{1}{x^{\frac{\alpha+1}{2}}}dx<\infty$, and the term $\frac{1}{x^{\frac{\alpha-1}{2}}\ln^{\beta}x}$ is bounded, as the limit $$\lim_{x\to \infty}\frac{1}{x^{\frac{\alpha-1}{2}}\ln^{\beta}x}=0$$ for any $\beta$.
-Therefore in this case
-$$\left|\int_2^{\infty}\frac{dx}{x^{\alpha}\ln^{\beta}x}\right|\le\int_2^{\infty}\left| \frac{1}{x^{\alpha}\ln^{\beta}x}\right| dx=\int_2^{\infty}\left| \frac{1}{x^{\frac{\alpha+1}{2}}}\right| \left| \frac{1}{x^{\frac{\alpha-1}{2}}\ln^{\beta}x}\right| dx\le \int_2^{\infty}\frac{M}{x^{\frac{\alpha+1}{2}}}dx< \infty$$
-And in the case $\alpha<1$, the proof of divergence is similar.<|endoftext|>
-TITLE: Probability with unknown variables
-QUESTION [7 upvotes]: An urn contains $10$ red marbles and $10$ black marbles, while a second urn contains $25$ red marbles and an unknown number of black marbles. A random marble will be selected from each urn and the probability that both marbles are the same colour will be determined. A hint was given by the teacher: the probability does NOT depend on the number of unknown marbles. Verify that this is the case.
-Let's call $N$ the unknown number of black marbles.
-I wrote out all possible ways to select a marble from each urn, selecting a red marble from both urn $1$ and urn $2$, and selecting a black marble from urn $1$ and $2$, and this is what I got:
-
-Number of ways to select a marble from each urn: $ \binom{20}{1}\binom{25+N}{1}$
-Number of ways to select $1$ red marble from both urn $1$ and urn $2$: $\binom{10}{1}\binom{25}{ 1}$
-Number of ways to select $1$ black marble from urn $1$ and urn $2$: $\binom{10}{1}\binom{N}{ 1}$
-
-And this is what I got as my final equation for the probability of selecting the same colour marble from each urn: $\dfrac{\binom{10}{1}\binom{25}{ 1}+\binom{10}{1}\binom{N}{ 1}}{\binom{20}{1}\binom{25+N}{1}} $
-I am confused about how the probability doesn't depend on the unknown number of black marbles in urn $2$. Any help would be much appreciated, thank you so much!
-PS: I also searched through Stack Exchange for a problem similar to this and couldn't find one. If this question was asked already, then I apologize!
-
-REPLY [5 votes]: Hint:
-$\dfrac{\binom{10}{1}\binom{25}{ 1}+\binom{10}{1}\binom{N}{ 1}}{\binom{20}{1}\binom{25+N}{1}} =\frac{\binom{10}{1}\cdot\left[\binom{25}{ 1}+\binom{N}{ 1} \right]}{20\cdot(25+N)}=\frac{10\cdot (25+N)}{20\cdot (25+N)}$<|endoftext|>
-TITLE: Find all holomorphic functions s.t. $f(0) = 0$ and $f'(z) = f(z)g(z)$ for all $z \in U$
-QUESTION [6 upvotes]: Let $U$ be a simply connected, open subset of $\Bbb C$ (not all of $\mathbb{C}$) containing $0$ and $1$.
-Given a holomorphic function $g: U \rightarrow \mathbb{C}$ with $g(0) \neq g(1)$, how do you find all holomorphic functions $f: U \rightarrow \mathbb{C}$ such that
-$f(0) = 0$, and
-$f'(z) = f(z)g(z)$ for all $z \in U$?
-
-REPLY [2 votes]: The only such function is $f=0$. Because if $f$ does not vanish
-identically then $f$ has a zero of some finite order $n$ at the
-origin. Hence $f'$ has a zero of order $n-1$. But $fg$ has a zero
-of order at least $n$, so $f'=fg$ is impossible.<|endoftext|>
-TITLE: Prove there are infinitely many primes in $\mathbb{Z}[i]$
-QUESTION [9 upvotes]: I saw a proof online that there are infinitely many primes in $\mathbb{Z}$. The Euler product lets us factor the harmonic series:
-$$ \prod_p \left( 1 - \frac{1}{p} \right)^{-1} = \sum_n \frac{1}{n}$$
-I wonder if this extends to $\mathbb{Z}[i]$. John Baez, in Week 216 of This Week's Finds, defines the zeta function of a number field as the sum over non-zero ideals, basically:
-$$ \zeta_{\mathbb{Z}[i]}(s) = \sum \frac{1}{(m^2 + n^2)^s} = \prod_{p \in \mathbb{Z}[i]} \left( 1 - \frac{1}{|p|^s} \right)^{-1} $$
-Here $2 = (1+i)(1-i)$ means that $2$ is ramified in $\mathbb{Z}[i]$ (the factors $1+i$ and $1-i$ are associates).
When does this converge?
-$$ \sum_{m, n \geq 1} \frac{1}{(m^2 + n^2)^s} \approx \frac{\pi}{4}\int \frac{ dr }{r^{2s-1}} < \infty$$
-This zeta function converges for $s > 1$, since we are adding more numbers. So for $s = 1$,
-$$ \zeta_{\mathbb{Z}[i]}(1) = \sum \frac{1}{m^2 + n^2} = \prod_{p \in \mathbb{Z}[i]} \left( 1 - \frac{1}{|p|} \right)^{-1} = \infty $$
-We should also check that $\mathbb{Z}[i]$ has unique factorization, which it does because it has a Euclidean algorithm.
-
-Did we just prove $\mathbb{Z}[i]$ has infinitely many primes?
-If that doesn't work, we show that $\zeta(2)$ is irrational. In fact $\zeta_{\mathbb{Z}}(2) = \frac{\pi^2}{6}$, but what about $\zeta_{\mathbb{Z}[i]}(2)$?
-
-Different ways to prove there are infinitely many primes?
-GCD of gaussian integers
-
-REPLY [2 votes]: HINT:
-Modify slightly the argument of Euclid by taking $q= 4 p_1\cdot \ldots \cdot p_n-1$ to show that $q$ has a prime factor $\equiv -1 \bmod 4$ which is also different from all the $p_i$'s. Therefore, there exist infinitely many prime numbers of the form $4k-1$. These will also be prime elements in $\mathbb{Z}[i]$ (see Gaussian integers).
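-(Regarding the OP's aside about $\zeta_{\mathbb{Z}[i]}(2)$ — an addition, not part of the hint: the Dedekind zeta function of $\mathbb{Q}(i)$ factors as $\zeta(s)\beta(s)$, with $\beta$ the Dirichlet beta function, so $\zeta_{\mathbb{Z}[i]}(2)=\frac{\pi^2}{6}\cdot G$ where $G$ is Catalan's constant. A truncated sum over the lattice, divided by the $4$ units of $\mathbb{Z}[i]$, agrees numerically:)
-from math import pi
-
-N, s = 400, 2
-total = sum(1 / (m * m + n * n) ** s
-            for m in range(-N, N + 1) for n in range(-N, N + 1)
-            if (m, n) != (0, 0)) / 4          # divide by the 4 units of Z[i]
-
-catalan = 0.915965594177219
-print(total, pi ** 2 / 6 * catalan)           # agree to about 5 decimal places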
<|endoftext|>
-TITLE: The Schwartz functions and the Sobolev space $W^{2,p}$
-QUESTION [7 upvotes]: How do you prove that the Schwartz functions on $\mathbb{R}^n$ are dense in the space $W^{2,p}(\mathbb{R}^n)$?
-Terence Tao has a version of the proof of
-The space $C_c^{\infty}(\mathbb{R}^d)$ of test functions is a dense subspace of $W^{k,p}(\mathbb{R}^d)$, and the fact that $\mathcal{S}(\mathbb{R}^d)$ is dense in $L^p(\mathbb{R}^d)$ is a corollary of that. I do not understand his proof. (See Lemma 2.)
-
-REPLY [4 votes]: The Schwartz space contains in particular $C^\infty_0(\mathbb R^n)$, and $C^\infty_0(\mathbb R^n)$ is by definition dense in $W^{k,p}_0(\mathbb R^n)$. But we have $W^{k,p}_0(\mathbb R^n) = W^{k,p}(\mathbb R^n)$. Thus the Schwartz space is dense in $W^{k,p}(\mathbb R^n)$.
-The fact that $W^{k,p}(\mathbb R^n)= W^{k,p}_0(\mathbb R^n)$ can be found in Adams' book Sobolev Spaces (Corollary 3.23). The following is part of the proof of Theorem 3.22 in the book.
-Let $f \in C^\infty_0(\mathbb R^n)$ be a smooth function so that $0\le f\le 1$ and
-$$f(x) = \begin{cases}
-1 & \text{ when }|x|\le 1 \\
-0 & \text{ when }|x|\ge 2,
-\end{cases}$$
-For each $\epsilon >0$, let $f_\epsilon(x) = f(\epsilon x)$. Then all derivatives of $f_\epsilon$ are bounded independently of $\epsilon <1$.
-For all $u\in W^{k,p}(\mathbb R^n)$, consider $u_\epsilon = uf_\epsilon$. Then using the product rule, we have
-$$\|u-u_\epsilon\|_{W^{k,p}(\mathbb R^n)} \le C \|u\|_{W^{k,p}(\Omega_\epsilon)},$$
-where $\Omega_\epsilon = \{x\in \mathbb R^n : |x| \ge 1/\epsilon\}$. As $\epsilon\to 0$, the right-hand side converges to $0$. Thus $u$ can be approximated by elements $u_\epsilon$ with compact support. Using mollifiers, this $u_\epsilon$ can be approximated by elements in $C^\infty_0(\mathbb R^n)$. Thus $C^\infty_0(\mathbb R^n)$ is dense in $W^{k,p}(\mathbb R^n)$.
-In general it is not true that $W^{k,p}_0(\Omega) = W^{k,p}(\Omega)$.<|endoftext|>
-TITLE: Curl of unit normal vector on a surface is zero?
-QUESTION [7 upvotes]: I have a scalar field $\phi$. From this field, I define an iso-surface $\phi=\phi_{iso}$.
-The unit normal vector on this surface is
-$\vec{n}=\left(\frac{\nabla\phi}{|\nabla\phi|}\right)_{\phi_{iso}}$
-I have a book where the author says: For any surface unit normal vector the following is true:
-$\nabla\times\vec{n}=0$
-I found some other sources saying the same:
-rot(n)=0 ("since rot(n)=0")
-curl(n)=0 on page 4 between eq. 20 and 21
-Is this true for any surface? Because if I try to prove this myself, I get stuck really quickly:
-$\nabla\times\vec{n}=\nabla\times\left(\frac{\nabla\phi}{|\nabla\phi|}\right)=
-\nabla\left(\frac{1}{|\nabla\phi|}\right)\times\nabla\phi+\frac{1}{|\nabla\phi|}\underbrace{\nabla\times\nabla\phi}_{=0}=-\frac{1}{|\nabla\phi|^2}\nabla(|\nabla\phi|)\times\nabla\phi$
-I can't see why this is supposed to be zero, since $|\nabla\phi|\ne\text{const}$ in general. Does anyone have an idea?
-
-REPLY [3 votes]: It really depends on how you define the vector field $\vec{n}$ AWAY from the surface $\phi = \phi_{iso}$. On the surface $\vec{n}$ is well-defined (up to choice of orientation).
-
-Choice one: define $\vec{n}$, as you did, to be globally the normalized gradient of $\phi$. That is, set $\vec{n} = \frac{\nabla \phi}{|\nabla\phi|}$. In this case $\nabla\times \vec{n} = 0$, when evaluated at the surface $\{\phi = \phi_{iso}\}$, if and only if $|\nabla \phi|$ is constant along the surface.
-Choice two: forget more or less about the function $\phi$. Define the function
-$$ \psi = \frac{1}{|\nabla \phi|} (\phi - \phi_{iso}) $$
-Observe that the surface you are interested in is the surface $\{ \psi = 0\}$. Computing the gradient $\nabla \psi$ you have that
-$$ \nabla \psi = \frac{\nabla\phi}{|\nabla \phi|} - \frac{(\phi-\phi_{iso}) \nabla \phi \cdot \nabla^2\phi}{|\nabla\phi|^3} $$
-The key is that the second term vanishes on the surface, since there $\phi = \phi_{iso}$. So $\nabla\psi$ restricted to the surface is still the unit normal vector field. But $\nabla \times (\nabla \psi)$ is clearly zero. (Note, however, $\nabla\psi$ is not guaranteed to be a unit vector field away from the surface.)
-
-More generally: given a compact smooth surface $\Sigma\subset \mathbb{R}^3$, there exists a radius $r > 0$ such that on the set $S = \{ x\in \mathbb{R}^3: \mathrm{dist}(x,\Sigma) < r\}$ we can solve the eikonal equation $|\nabla \Psi| = 1$ to get a function $\Psi:S \to\mathbb{R}$ such that $\Sigma = \Psi^{-1}(0)$ and $\nabla \Psi$ is the unit normal vector field for any level set $\Psi^{-1}(c)$. Then in this formulation we see that the unit normal vector field $\vec{n} = \nabla \Psi$ is curl-free everywhere in $S$. The number $r$, which is generically finite, is related to the radius of curvature of $\Sigma$.
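-(A concrete check of the first point with sympy — an addition, not the answerer's: taking the global choice $\vec n=\nabla\phi/|\nabla\phi|$ for the non-spherical example $\phi=x^2+4y^2+z^2$, the curl does not vanish.)
-import sympy as sp
-
-x, y, z = sp.symbols('x y z', real=True)
-phi = x**2 + 4*y**2 + z**2                    # an ellipsoidal scalar field
-g = sp.Matrix([phi.diff(v) for v in (x, y, z)])
-n = g / sp.sqrt(g.dot(g))                     # globally normalized gradient
-
-def curl(F):
-    return sp.Matrix([F[2].diff(y) - F[1].diff(z),
-                      F[0].diff(z) - F[2].diff(x),
-                      F[1].diff(x) - F[0].diff(y)])
-
-print(sp.simplify(curl(n).subs({x: 1, y: 1, z: 0})))   # nonzero third component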
as having at most two manifolds per group (and one can say precisely what they are).
-$\Bbb Z$ is the simplest infinite group I know. What's the classification of closed 3-manifolds with fundamental group $\Bbb Z$?

-REPLY [2 votes]: In fact something significantly stronger is true. If $Y$ is closed orientable and $\Bbb Z$ injects into the abelianization of $\pi_1(Y)$, either $Y$ has an $S^1 \times S^2$ factor or $Y$ has a $\pi_1$-injective surface of genus at least one (an injection $\pi_1(\Sigma_g) \to \pi_1(Y^3)$ for $\Sigma_g$ a genus $g$ surface with $g\geq 1$, induced from an embedding $\Sigma_g \to Y$).
-Consider a prime factor $Y'$ of $Y$ with $\Bbb Z$ injecting into the abelianization of $\pi_1(Y')$. By Poincaré duality $H_2(Y')\ne 0$, and by a very famous result of Gabai there exists a taut foliation on $Y'$ with a closed leaf $\Sigma$. By a theorem of Novikov, the induced map $\pi_1(\Sigma) \to \pi_1(Y')$ is injective, and by another theorem of Novikov, $\Sigma$ is $S^2$ if and only if $Y'$ is $S^1\times S^2$.<|endoftext|>
-TITLE: Constructing Martingales from Markov Processes
-QUESTION [7 upvotes]: I know that for a Markov process $X_t$ with generator $L$ and $f,f^2\in D(L)$, $$M_t=f(X_t)-\int_0^t Lf(X_s)\ ds$$ is a martingale (w.r.t. $P^x$). And I want to show that $$M_t^2-\int_0^t (Lf^2(X_s)-2f(X_s)Lf(X_s))\ ds$$ is a martingale. Using the first martingale, I know that $f^2(X_t)-\int_0^t Lf^2(X_s)\ ds$ is a martingale, and so this would reduce the problem to showing that $$M_t^2 = f^2(X_t) + \int_0^t 2f(X_s)Lf(X_s)\ ds,$$ but I am having trouble showing that. Is this the right avenue of attack or should I try something else?

-REPLY [7 votes]: Unfortunately, we cannot expect that the equality
-$$M_t^2 = f^2(X_t) + \int_0^t 2f(X_s) L f(X_s) \, ds$$
-holds, so we have to use a different approach.

-By the very definition of $M_t$, we have
-$$f^2(X_t) = \left( M_t+ \int_0^t Lf(X_r) \, dr \right)^2,$$
-i.e.
-$$M_t^2 = f^2(X_t) - 2 M_t \int_0^t Lf(X_r) \, dr - \left( \int_0^t Lf(X_r) \, dr \right)^2.$$
-Obviously, this implies
-$$M_t^2 - \int_0^t (Lf^2(X_r)-2f(X_r) Lf(X_r)) \, dr = \left[ f^2(X_t)- \int_0^t L f^2(X_r) \, dr \right] - N_t$$
-where
-$$N_t := 2 M_t \int_0^t Lf(X_r) \, dr + \left( \int_0^t Lf(X_r) \, dr \right)^2 -2 \int_0^t f(X_r) Lf(X_r) \, dr.$$
-Since we already know that $(f^2(X_t) - \int_0^t L f^2(X_r) \, dr)_t$ is a martingale, it suffices to show that $(N_t)_{t \geq 0}$ is a martingale. This is a rather messy calculation. First of all, since $(M_t)_{t \geq 0}$ is a martingale we obtain from the tower property that
-$$\begin{align*}&\quad \mathbb{E}\left( M_t \int_0^t Lf(X_r) \, dr \mid \mathcal{F}_s \right) \\&= \mathbb{E}(M_t \mid \mathcal{F}_s) \int_0^s Lf(X_r) \, dr + \int_s^t \mathbb{E} \bigg[ \mathbb{E}(M_t Lf(X_r) \mid \mathcal{F}_r) \mid \mathcal{F}_s \bigg] \, dr \\ &= M_s \int_0^s Lf(X_r) \, dr + \mathbb{E} \left( \int_s^t M_r Lf(X_r) \, dr \mid \mathcal{F}_s \right). \end{align*}$$
-The first term on the right-hand side is rather convenient, but we have to rewrite the second one. It follows from the definition of $M_t$ that
-$$\begin{align*} &\quad \int_s^t M_r Lf(X_r) \, dr \\ &= \int_s^t f(X_r) Lf(X_r) \, dr - \int_s^t \int_0^r Lf(X_v) \, dv \, Lf(X_r) \, dr \\ &= \int_s^t f(X_r) Lf(X_r) \, dr - \int_s^t \int_0^s Lf(X_v) Lf(X_r) \, dv \, dr - \int_s^t \int_s^r Lf(X_v) Lf(X_r) \, dv \, dr \\ &= \int_s^t f(X_r) Lf(X_r) \, dr -\frac{1}{2} \left( \int_0^t Lf(X_r) \, dr \right)^2 + \frac{1}{2} \left( \int_0^s Lf(X_r) \, dr \right)^2 \end{align*}$$
-for any $s \leq t$.
In the last step we have used that
-$$2 \int_s^t \int_s^r Lf(X_v) Lf(X_r) \, dv \, dr = \left( \int_s^t Lf(X_r) \, dr \right)^2 \tag{1}$$
-implies
-$$\begin{align*} &\quad \int_s^t \int_0^s Lf(X_v) Lf(X_r) \, dv \, dr + \int_s^t \int_s^r Lf(X_v) Lf(X_r) \, dv \, dr \\ &\stackrel{(1)}{=} \int_s^t \int_0^s Lf(X_v) Lf(X_r) \, dv \, dr + \frac{1}{2} \left( \int_0^t Lf(X_r) \, dr - \int_0^s Lf(X_r) \, dr \right)^2 \\ &= \frac{1}{2} \left( \int_0^t Lf(X_r) \, dr \right)^2 - \frac{1}{2} \left( \int_0^s Lf(X_r) \, dr \right)^2. \end{align*}$$
-Adding it all up, we find that $(N_t)_{t \geq 0}$ is a martingale.

-Remark: If you are interested in more general results, then have a look at the so-called carré-du-champ operator (the "square field" operator).<|endoftext|>
-TITLE: What are some major open problems in Galois theory?
-QUESTION [7 upvotes]: A few days back a friend and I were discussing Galois' life and his ideas. Though we are not trained in Galois theory, I have recently started to learn it by myself and hope to take up research on it sometime soon. So I am wondering: what are the open problems in Galois theory? Are there any? Please add a reference. Thanks.

-REPLY [15 votes]: One of the most active problems in Galois theory is the so-called "Inverse Galois Problem", concerning whether or not every finite group appears as the Galois group of some extension of the rational numbers. It is a problem concerning not only Galois theory but also high-level finite group theory. This is an old problem and it is still unsolved.
-For a brief introduction to this subject, the Wikipedia article is pretty easy to understand: https://en.wikipedia.org/wiki/Inverse_Galois_problem
-Hope it helps.<|endoftext|>
-TITLE: Prove that the orthogonal projection operator is idempotent
-QUESTION [7 upvotes]: Let $\{u_{1},u_{2},\cdots,u_{n}\}$ be an orthonormal basis for a subspace $U$ in an inner product space $X$.
-Define the orthogonal projection of $X$ onto $U$, $P:X \to U$, to be $Px = \sum_{i=1}^{n}\langle x, u_{i} \rangle u_{i}$, where $\langle \cdot, u_{i} \rangle$ represents the inner product.
-I need to prove that $P = P^{2}$; i.e., that $P$ is idempotent. I have already proven that $P$ is linear, and am therefore free to use that.
-So far, I set up what I am trying to show as follows:

-$P^{2}x = \sum_{i=1}^{n} \langle Px, u_{i}\rangle u_{i} =\sum_{i=1}^{n}\left\langle \sum_{j=1}^{n}\langle x, u_{j} \rangle u_{j},u_{i}\right\rangle u_{i}$

-Then, I thought that perhaps expanding out the inner sum might be helpful, and then somewhere along the line I might be able to use linearity to get $\sum_{i=1}^{n}\langle x, u_{i}\rangle u_{i}$ eventually on the RHS.
-This is about as far as I got playing around with the sums:

-$\sum_{i=1}^{n}\langle \langle x, u_{1}\rangle u_{1}+\langle x, u_{2}\rangle u_{2} + \cdots + \langle x, u_{n} \rangle u_{n}, u_{i} \rangle u_{i} = \sum_{i=1}^{n}\left(\langle \langle x, u_{1} \rangle u_{1}, u_{i} \rangle + \langle \langle x, u_{2}\rangle u_{2}, u_{i} \rangle + \cdots + \langle \langle x, u_{n} \rangle u_{n}, u_{i} \rangle \right)u_{i} = \sum_{i=1}^{n}\left[\left(\langle \langle x, u_{1} \rangle u_{1}, u_{i}\rangle u_{i}\right) + \left(\langle \langle x, u_{2} \rangle u_{2}, u_{i} \rangle u_{i}\right) + \cdots + \left(\langle \langle x, u_{n} \rangle u_{n}, u_{i} \rangle u_{i} \right)\right] = \sum_{i=1}^{n} \langle \langle x, u_{1} \rangle u_{1}, u_{i}\rangle u_{i} + \sum_{i=1}^{n}\langle \langle x, u_{2} \rangle u_{2}, u_{i} \rangle u_{i} + \cdots + \sum_{i=1}^{n}\langle \langle x, u_{n} \rangle u_{n}, u_{i} \rangle u_{i}$

-But it's still not looking any closer to where I need to be.
-Could somebody please help me finish this?
-Thank you.

-REPLY [3 votes]: An approach might be: let's show that $P|_U=id_U$.
-This is sufficient because, since $\operatorname{im}P\subseteq U$, $$P^2=P\circ P=P|_U\circ P=id_U\circ P=P$$
-Indeed, since $\{u_1,\cdots,u_n\}$ is a basis, you only need to show that $P(u_i)=u_i$. But $$P(u_i)=\sum_{j=1}^n\langle u_i,u_j\rangle u_j=\langle u_i,u_i\rangle u_i=u_i$$ $\square$<|endoftext|>
-TITLE: Unique factorization of manifolds?
-QUESTION [6 upvotes]: I wonder if there is a result on the unique factorization of manifolds.
-Call a topological manifold indecomposable if it is not homeomorphic to a product of manifolds of positive dimension. Is every manifold a unique (up to order) product of indecomposable ones?
-I couldn't find any statements on this simple question. Are there any results on this? Any result in different categories (smooth, complex, Riemannian or whatever) or with extra conditions is fine.
-[edit]
-The answer seems to be No in most cases. Can we impose strong conditions so that the answer is positive?

-REPLY [7 votes]: Generally the answer is no. For example, $TS^2$ is indecomposable. But $TS^2 \times \mathbb R \simeq S^2 \times \mathbb R^3$, so $S^2 \times \mathbb R^3$ splits as a product of indecomposables in several different ways.
-You could use $\mathbb C$ instead of $\mathbb R$ if you want complex manifolds.
-You get similar things happening for Riemannian manifolds as well.<|endoftext|>
-TITLE: Presentation of Groups
-QUESTION [7 upvotes]: I have trouble solving this kind of exercise. For example:

-Let $$G_1=\langle x,y |x^3=y^4=1\rangle,~~~G_2=\langle x,y |x^6=y^6=(xy)^3=1\rangle. $$ I want to check that $G_1$ is an infinite nonabelian group and that in $G_2$ we have $xy^2x \neq 1$.

-For the first part, I have seen that it is useful to define a group homomorphism and then see that the image (which we know) is infinite and nonabelian. For the second part, similarly we can define a $\phi$ such that $\phi (xy^2x) \neq 1$.
-How can I define these homomorphisms? Is there any general procedure for this?
-Thanks, - -REPLY [2 votes]: Let $H = {\rm SL}(2,{\mathbb Z})$ and define $\phi:G_1 \to H$ by -$$ \phi(x) = \left(\begin{array}{rr}0&1\\-1&-1\end{array}\right), -\phi(y) = \left(\begin{array}{rr}0&1\\-1&0\end{array}\right).$$ -Let $K=S_6$ and define $\psi:G_2 \to K$ by -$$\psi(x)=(1,2,3,4,5,6),\,\psi(y)=(1, 2, 6, 5, 4, 3).$$<|endoftext|> -TITLE: Function which takes every value uncountably often -QUESTION [12 upvotes]: Is there a function such that for all $y\in \mathbb{R}$ there exist uncountably many $x\in\mathbb{R}$ with $f(x)=y$? -A function for which countably many $x$ exist is for example $\tan$, but I fail to see how to take this a step further. So any help is highly appreciated. - -REPLY [2 votes]: If $\mu$ is a singular continuous measure (e.g. take the haar measure on the cantor set in the unit interval) then $f(z)=\int \frac{d\mu (\lambda )}{\lambda -z}$ is analytic in the upper half plane and its limit as $z = x+i\epsilon$ goes to x defines a measurable function on $\mathbb{R}$. This function can take every real value y uncountably often in any finite sub-interval of $\mathbb{R}$. Look up papers by D.B. Pearson on value distribution for details.<|endoftext|> -TITLE: Evaluation of $\iint_{D}\left(x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}\right)\mathrm{d}x\mathrm{d}y$ -QUESTION [6 upvotes]: Suppose that $f(x,y)$ is defined on $D=\{(x,y)\mid x^2+y^2\le1\}$ and has continuous second-order partial derivatives in $D$. If -$$\frac{\partial^2f}{\partial x^2}+\frac{\partial^2f}{\partial y^2}=\mathrm{e}^{-(x^2+y^2)}$$ -then find $\iint_{D}\left(x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}\right)\mathrm{d}x\mathrm{d}y$. -I have tried the polar coordinate form, obtained -$$ -\frac{\partial^2f}{\partial x^2} =\cos^2\theta\frac{\partial^2f}{\partial r^2}-\frac{2\sin\theta\cos\theta}{r}\cdot\frac{\partial^2f}{\partial r\partial \theta}+\frac{\sin^2\theta}{r^2}\frac{\partial^2f}{\partial\theta^2}+\frac{\sin^2\theta}{r}\frac{\partial f}{\partial r}+\frac{2\sin\theta\cos\theta}{r^2}\frac{\partial f}{\partial\theta}\\ -\frac{\partial^2f}{\partial y^2} =\sin^2\theta\frac{\partial^2f}{\partial r^2}+\frac{2\sin\theta\cos\theta}{r}\cdot\frac{\partial^2f}{\partial r\partial \theta}+\frac{\cos^2\theta}{r^2}\frac{\partial^2f}{\partial\theta^2}+\frac{\cos^2\theta}{r}\frac{\partial f}{\partial r}-\frac{2\sin\theta\cos\theta}{r^2}\frac{\partial f}{\partial\theta}\\ -\frac{\partial^2f}{\partial x^2}+\frac{\partial^2f}{\partial y^2} =\frac{\partial^2f}{\partial r^2}+\frac1r\frac{\partial f}{\partial r}+\frac1{r^2}\frac{\partial^2f}{\partial\theta^2}=\mathrm{e}^{-r^2} -$$ -and -\begin{equation*} -x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}=r\frac{\partial f}{\partial r} -\end{equation*} -so, the double integral becomes -$$\iint_D r\frac{\partial f}{\partial r}r\mathrm{d}r\mathrm{d}\theta$$ -how to solve this? - -REPLY [2 votes]: @H.R. posted a solid solution that used vector analysis. I thought it would be useful to some readers to see a solution that relies on scalar analysis only. To that end we proceed. 
-Let $D$ be the unit disk, centered at the origin, and let $I$ be the integral of interest defined as
-$$\begin{align}
I&=\int_{D}\left(x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}\right)\,dS\\\\
&=\int_{-1}^1 \left(\int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}}x\frac{\partial f}{\partial x} \,dx\right)\,dy+\int_{-1}^1 \left(\int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}}y\frac{\partial f}{\partial y} \,dy\right)\,dx \tag 1
\end{align}$$
-We integrate by parts the inner integral of the first integral on the right-hand side of $(1)$. We let $u= \frac{\partial f(x,y)}{\partial x}$ and $v=\frac12\left(x^2+y^2\right)$ to reveal
-$$\begin{align}
\int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}}x\frac{\partial f}{\partial x} \,dx&=\left .\left(\frac12\left(x^2+y^2\right)\frac{\partial f}{\partial x}\right)\right|_{x=-\sqrt{1-y^2}}^{\sqrt{1-y^2}}-\int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}} \left(\frac12\left(x^2+y^2\right)\frac{\partial^2 f}{\partial x^2}\right)dx\\\\
&=\frac12\left .\left(\frac{\partial f}{\partial x}\right)\right|_{x=-\sqrt{1-y^2}}^{\sqrt{1-y^2}}-\int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}} \left(\frac12\left(x^2+y^2\right)\frac{\partial^2 f}{\partial x^2}\right)dx \tag 2
\end{align}$$
-An analogous development of the second integral on the right-hand side of $(1)$ reveals
-$$\begin{align}
\int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}}y\frac{\partial f}{\partial y} \,dy&=\frac12\left .\left(\frac{\partial f}{\partial y}\right)\right|_{y=-\sqrt{1-x^2}}^{\sqrt{1-x^2}}-\int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} \left(\frac12\left(x^2+y^2\right)\frac{\partial^2 f}{\partial y^2}\right)dy \tag 3
\end{align}$$
-Combining the results of $(2)$ and $(3)$ yields
-$$\begin{align}
I&=\frac12\oint_{x^2+y^2=1}\left(\frac{\partial f}{\partial x}dy-\frac{\partial f}{\partial y}dx\right)-\frac12\int_D \left(x^2+y^2\right)\left(\frac{\partial^2 f}{\partial x^2}+\frac{\partial^2 f}{\partial y^2}\right)\,dS\\\\
&=\frac12 \int_D \left(\frac{\partial^2 f}{\partial x^2}+\frac{\partial^2 f}{\partial y^2}\right)\,dS-\frac12\int_D \left(x^2+y^2\right)\left(\frac{\partial^2 f}{\partial x^2}+\frac{\partial^2 f}{\partial y^2}\right)\,dS\\\\
&=\frac12\int_0^{2\pi}\int_0^1 \left(1-\rho^2\right)e^{-\rho^2}\,\rho\,d\rho\,d\phi\\\\
&=\frac12 \pi e^{-1}
\end{align}$$<|endoftext|>
-TITLE: Universal Enveloping Algebra of $\mathfrak{gl}(n,\Bbb R)$
-QUESTION [9 upvotes]: I am just learning about universal enveloping algebras, and I am wondering about the following.

-Question: Is the universal enveloping algebra of $\mathfrak{gl}(n,\Bbb R)$ just $\mathfrak{gl}(n,\Bbb R)$ itself?

-It is an associative algebra with matrix multiplication. But I am not sure that it satisfies the universal property.
-Given an $\Bbb R$-algebra $A$ and a Lie algebra homomorphism
-$$\varphi:\mathfrak{gl}(n,\Bbb R)\to A$$
-we must show that it is also an $\Bbb R$-algebra homomorphism. That is
-$$\varphi([X,Y])=\varphi(X)\varphi(Y)-\varphi(Y)\varphi(X),\quad\forall X,Y\quad\implies\quad \varphi(XY)=\varphi(X)\varphi(Y),\quad\forall X,Y.$$
-I am not sure this is true.

-REPLY [5 votes]: It is not true. For example, consider the trace operator
-$$\mathrm{tr}:\mathfrak{gl}(n,\Bbb R)\to \Bbb R.$$
-Then,
-$$\mathrm{tr}([X,Y])=\mathrm{tr}(XY)-\mathrm{tr}(YX)=0=\mathrm{tr}(X)\mathrm{tr}(Y)-\mathrm{tr}(Y)\mathrm{tr}(X).$$
-However, it is not true in general that
-$$\mathrm{tr}(XY)=\mathrm{tr}(X)\mathrm{tr}(Y).$$<|endoftext|>
-TITLE: Mean value theorem for twice differentiable function
-QUESTION [6 upvotes]: Let $f:(0,\infty)\to \Bbb R$ be a twice differentiable function.
In this answer, it is asserted that the MVT lets one write $$f(x+h)=f(x)+f'(x)h+\frac12 f''(\xi)h^2$$ for some $\xi\in (x,x+h)$.
-It is not clear to me why this should be the case. Using the MVT, one can write $f''(\xi)h=f'(x+h)-f'(x)$ for some $\xi\in (x,x+h)$. Using that, the claim rephrases to $$f(x+h)=f(x)+\frac{1}{2}f'(x)h+\frac{1}{2}f'(x+h)h$$ and I don't see why that should hold.
-I'm sure I'm being stupid, therefore I much welcome clarification.

-REPLY [2 votes]: The basic argument behind the exact proof (which I do not give here) that this works goes as follows: Taylor's theorem says that $$f(x+h)=f(x)+f'(x)h+\underbrace{\frac12f''(x)h^2+o(h^2)}$$ If we replace $x$ with $\xi$ (where $\xi$ is given by the MVT) in the underlined term we can get rid of the remainder $o(h^2)$ and achieve an exact calculation of $f(x+h)$:
-$$f(x+h)=f(x)+f'(x)h+\frac12f''(\xi)h^2$$ Why then the approximation in the first place? The MVT says there exists such a $\xi$ but does not give a way to find it, so the approximation in Taylor's theorem is indeed useful.<|endoftext|>
-TITLE: If $XYZ=ZXY$ does $e^Xe^Ye^Z=e^Ze^Xe^Y$?
-QUESTION [5 upvotes]: It is well known that if $X,Y$ are commuting matrices, then their exponentials commute:
-$$XY=YX\quad\implies\quad e^Xe^Y=e^Ye^X.$$
-Now, I am wondering if the following generalization holds:

-Question: If $XYZ=ZXY$, does $e^Xe^Ye^Z=e^Ze^Xe^Y$?

-Note that if $Z$ commutes with both $X$ and $Y$, then it is obvious.

-REPLY [12 votes]: OP is asking if

-$$ [XY,Z]~=~0\qquad \stackrel{?}{\Rightarrow}\qquad [e^Xe^Y,e^Z]~=~0~? \tag{1}$$

-In a comment above, user1551 has already pointed out obvious counterexamples if $X=0$ xor $Y=0$.
-Here we will give a counterexample with invertible $2\times 2$ matrices, namely the Pauli matrices:
-$$ X~=~ i\pi \sigma_x, \qquad Y~=~ \frac{i\pi}{2} \sigma_y, \qquad Z~=~ \frac{i\pi}{2} \sigma_z, \tag{2}$$
-$$ e^X~=~ -{\bf 1}_{2\times 2}, \qquad e^Y~=~ i\sigma_y, \qquad e^Z~=~ i\sigma_z. \tag{3}$$
-Now $XY$ is proportional to $\sigma_z$ and therefore commutes with $Z$; while $e^Xe^Y$ is proportional to $\sigma_y$, and hence anticommutes with $e^Z$ (rather than commutes).<|endoftext|>
-TITLE: Discontinuous surjective linear map which is not open
-QUESTION [6 upvotes]: The following statement is true:
-Assume that $X$ and $Y$ are topological vector spaces where $Y$ is finite-dimensional Hausdorff. If $A:X\rightarrow Y$ is a continuous surjective linear map, then $A$ is an open map.
-With the same setting, I am looking for a discontinuous $A$ which is not an open map; can anyone suggest an example?
-Moreover, what about examples that satisfy:
-1) $Y$ finite-dimensional but not Hausdorff, $A$ continuous surjective linear map but not open.
-2) $Y$ finite-dimensional but not Hausdorff, $A$ discontinuous surjective linear map but not open.
-3) Similar cases for infinite-dimensional $Y$.

-REPLY [2 votes]: With the same setting, I am looking for a discontinuous $A$ which is not an open map; can anyone suggest an example?

-That doesn't exist. If $A\colon X \to Y$ is a surjective linear map, where $X,Y$ are topological vector spaces and $Y$ is finite-dimensional and Hausdorff, then $A$ is open, regardless of continuity. One way to see that is to note that if we replace the topology on $X$ by a finer one, it becomes easier for a map to be continuous, but harder to be open. If we endow $X$ with the finest vector space topology on $X$, then $A$ is continuous, hence open by what you know.
But every open set in the topology on $X$ we started with is open in the finest vector space topology, hence its image under $A$ is open. Another way to see it is to consider any section of $A$, that is a linear map $S \colon Y \to X$ such that $A\circ S = \operatorname{id}_Y$. Since $Y$ is finite-dimensional Hausdorff, $S$ is continuous. But for an open set $U\subset X$ we have $A(U) = A(U + \ker A) = S^{-1}(U + \ker A)$, which is open by the continuity of $S$. -Coming to the examples with non-Hausdorff $Y$, let $E$ denote $\mathbb{R}$ endowed with the standard topology, and let $F$ denote $\mathbb{R}$ endowed with the indiscrete topology. Then $E$ and $F$ are finite-dimensional topological vector spaces, hence so is $E\times F$. -Ad 1), we can take $\operatorname{id} \colon E \to F$. It's clearly continuous and surjective, but of course not open. -Ad 2), we can take $\operatorname{id}\times \operatorname{id} \colon F\times E \to E\times F$. Again it's clearly surjective, but the first component is not continuous, so the whole map is not continuous, and the second component is not open, which makes the entire map not open. -For the finite-dimensional case, all examples are somewhat similar to these examples, since every finite-dimensional topological vector space over $\mathbb{R}$ is topologically isomorphic to $E^m\times F^n$ for some $m,n\in \mathbb{N}$. -For the infinite-dimensional case, we can easily modify the examples (take a product of a Hausdorff and an indiscrete space). -Somewhat more interesting may be the following example: Let -$$c_{00} = \bigl\{ f \colon \mathbb{N}\to \mathbb{R} \mid \bigl(\exists k\bigr)\bigl(n \geqslant k \implies f(n) = 0\bigr)\bigr\}$$ -be the space of all real sequences with finite support. Endow it with the subspace topology induced by your favourite $\ell^p(\mathbb{N})$, and consider the map $A \colon c_{00} \to c_{00}$ given by $(Af)(n) = 2^{-n}\cdot f(n)$. Then $A$ is continuous and bijective, but not open. If we endow the codomain with a coarser non-Hausdorff topology, the map remains continuous and non-open.<|endoftext|> -TITLE: What is a constant field? -QUESTION [5 upvotes]: I am looking at the following: - - -Could you explain to me what a constant field is? -$$$$ -P.S. I found this in the paper of T. Honda, "Algebraic differential equation" (pages 170-176). - -REPLY [3 votes]: The field of constants of a differential field is the subfield of elements a with $∂a=0$, see here.<|endoftext|> -TITLE: How do I find the maximum volume for a box when the corners are cut out? -QUESTION [10 upvotes]: The question reads : -A box (with no top) is to be constructed from a piece of cardboard of sides $A$ and $B$ by cutting out squares of length $h$ from the corners and folding up the sides as in the figure below: - -Suppose that the box height is $h = 3 in.$ and that it is constructed using $134 in.^2$ of cardboard (i.e., $AB = 134$). Which values $A$ and $B$ maximize the volume? -How do I approach this question when there are the corners cut out? I understand I need to label important things with variables and find an appropriate formula, however what do I do when it comes to the corners? - -REPLY [5 votes]: How do I approach this question when there are the corners cut out? I understand I need to label important things with variables and find an appropriate formula, however what do I do when it comes to the corners? - -When you remove the four corners of the cardboard, you obtain exactly the unfolded box. 
The base of the box is the rectangle defined by the four inner vertices. The rest of the cardboard forms the folded front, back, left and right sides. If you fold up the sides the box is the open parallelepiped sketched on the right.

-Given that the side length of the four squares is $h=3\,\text{in}$, the base of the box is a rectangle whose length is $A-2h=A-6\,\text{in}$ and width is $B-2h=B-6\,\text{in}$. Therefore the base has area $A_{\text{base}}=(A-6)(B-6)$ $\text{in}^2$.
-Since $AB = 134$ $\text{in}^2$, we conclude that $B=134/A$ $\text{in}$, and
-$$A_{\text{base}}=\left(A-6\right)\left(\frac{134}{A}-6\right)=170-6A-\frac{804}{A}\text{ in}^2.$$
-The height of the folded up box is $h$ (see sketch); hence its volume is $V(A)=A_{\text{base}}\times 3\text{ in}^3$. Then
-$$V(A)=3\left(170-6A-\frac{804}{A}\right)\text{in}^3.$$

-Which values $A$ and $B$ maximize the volume?

-We just need to find $V'(A)=\frac{dV}{dA}$ and solve for $A$ the equation $V'(A)=0$.<|endoftext|>
-TITLE: Multiple choice exercise on $f(x)= \frac {\sin x}{|x|+ \cos x}$
-QUESTION [5 upvotes]: Let $f : \Bbb R \to \Bbb R$ be the function defined by $f(x)= \frac {\sin x}{|x|+ \cos x}$. Then
-A.$f$ is differentiable at all $x \in \Bbb R$.
-B.$f$ is not differentiable at $x =0$.
-C.$f$ is differentiable at $x=0$ but $f'$ is not continuous at $x=0$.
-D.$f$ is not differentiable at $x=\frac {\pi}{2}$.

-Using the standard definition of derivative, I get that $f$ is differentiable at $0$ as well as $\frac {\pi}2$. This is as follows:
-$\lim_{h \to 0} \frac {f(0+h)-f(0)}{h}=1=\lim_{h \to 0} \frac {f(0-h)-f(0)}{-h}$
-and
-$\lim_{h\to 0} \frac {f(\frac {\pi}2 +h)-f(\frac {\pi}{2})}{h}=0=\lim_{h \to 0} \frac {f(\frac {\pi}2 -h)-f(\frac {\pi}2)}{-h}$ using L'Hospital's rule at one stage.
-Hence I eliminate options B and D.
-Since $0$ was the only doubtful point to check about differentiability, I choose option A as the answer.
-I regret a little that I don't know how to check the validity of option C. Can you tell me how to prove it wrong?

-REPLY [3 votes]: As per the discussion with @Quintic in the above comments,
-$f'(x)=\frac {1+|x|\cos x-\frac {x}{|x|} \sin x}{(|x|+\cos x)^2}$.
-Now, $\lim_{h \to 0} f'(0+h)=\lim_{h \to 0} \frac {1+|h|\cos h-\frac {h}{|h|}\sin h}{(|h|+\cos h)^2}=\lim_{h \to 0}\frac {1+h\cos h-\sin h}{(h+\cos h)^2}$ (where $h \gt 0$).
-$\Rightarrow \lim_{h\to 0} f'(0+h)=\frac {1+0-0}{(0+1)^2}=1.$
-Similarly, $\lim_{h\to 0} f'(0-h)=\lim_{h\to 0} \frac {1+h\cos h-\sin h}{(h+\cos h)^2}$ (where $h \gt 0$)
-$\Rightarrow \lim_{h\to 0} f'(0-h)=1$.
-Hence $\lim_{h\to 0} f'(0+h)=\lim_{h\to 0} f'(0-h) \Rightarrow$ $f'$ is continuous at $x=0$.<|endoftext|>
-TITLE: Let $X_1, X_2, \ldots$ be independent r.v.'s with $0 \leq X_n \leq 1$ and $\sum_n E(X_n) = \infty$. Show $\sum_n X_n = \infty$ with probability 1?
-QUESTION [5 upvotes]: Let $X_1, X_2, \ldots$ be independent random variables with $0 \leq X_n \leq 1$ and $\sum_n E(X_n) = \infty$. I'd like to show that $\sum_n X_n = \infty$ with probability 1. This seems like a Borel-Cantelli problem to me, but I am having a hard time defining the sets to work with. Is there another easier approach here? Thanks!
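-For intuition, a quick numerical illustration of the claim (a sketch only, not a proof; it assumes for concreteness that the $X_n$ are i.i.d. uniform on $[0,1]$, which satisfies both hypotheses):
- import random
- # Sketch: partial sums of X_n ~ Uniform[0,1] (so sum E(X_n) = infinity)
- # should pass any fixed threshold on essentially every sample path.
- for trial in range(5):
-     s, n = 0.0, 0
-     while s < 100 and n < 10**6:   # stop once the partial sum is large
-         s += random.random()
-         n += 1
-     print("trial", trial, ": reached", round(s, 1), "after", n, "terms")
-Every run reaches the threshold; of course this only illustrates the statement, it does not prove it.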
-
-REPLY [3 votes]: Independence and boundedness of the variables can be exploited efficiently with the following computation (which is an easy version of the Chernoff bound): for any fixed $\alpha>0$ we have
-$$\mathbf{P}(X_1+\dots+X_n\le \alpha)=\mathbf{P}(e^{-(X_1+\dots+X_n)}\ge e^{-\alpha})\le\frac{\mathbf{E}[e^{-(X_1+\dots+X_n)}]}{e^{-\alpha}}
=e^\alpha\prod_{k=1}^n\mathbf{E}[e^{-X_k}].$$
-Now for any $0\le s\le 1$ we have the simple inequality
-$$1-e^{-s}=se^\xi\ge se^{-1}$$
-(we applied Lagrange's theorem to the function $x\mapsto e^x$ on the interval $[-s,0]$, which gives us some $\xi\in [-s,0]$ such that the first equality holds; then we used the fact that $\xi\ge -1$), thus
-$$\mathbf{E}[e^{-X_k}]\le \mathbf{E}[1-e^{-1}X_k]=1-e^{-1}\mathbf{E}[X_k]$$
-and finally
-$$\prod_{k=1}^n\mathbf{E}[e^{-X_k}]\le\prod_{k=1}^n\left(1-e^{-1}\mathbf{E}[X_k]\right)\le\prod_{k=1}^n\exp\left(-e^{-1}\mathbf{E}[X_k]\right)
=\exp\left(-e^{-1}\sum_{k=1}^n\mathbf{E}[X_k]\right)\to 0$$
-as $n\to\infty$. So $$\mathbf{P}(\sum X_k\le \alpha)=\lim_{n\to\infty}\mathbf{P}(\sum_{k=1}^n X_k\le\alpha)=0$$
-and since $\alpha$ was arbitrary the claim follows.<|endoftext|>
-TITLE: Does this algorithm for Graph Realization work?
-QUESTION [6 upvotes]: A sequence of integers $d_1, \dots, d_n$ is called graphical if there exists a simple graph $G$ with it as its degree sequence. Deciding if a sequence is graphical is called the Graph Realization Problem.
-A theorem by Havel and Hakimi gives an algorithm to construct such a graph if it exists: it proceeds by repeatedly selecting a vertex of highest degree $v$ and connecting it to vertices of high degree until the degree of $v$ is depleted.
-I want instead to consider the algorithm that, at every step, connects a vertex of highest degree with a single vertex of lowest nonzero degree.
-Note that this algorithm, unlike the Havel-Hakimi one, does not select a vertex of the highest degree and connect it until its degree is depleted: a new vertex of highest degree is selected every time an edge is added.
-For example, given the graphical degree sequence $2, 2, 2, 1, 1$, the algorithm proceeds as follows:
-$$1,2,2,0,1\\0,1,2,0,1\\0,0,1,0,1\\0,0,0,0,0$$
-which yields the simple path $P_5$ on five vertices (where I broke the ties by selecting the leftmost highest or smallest).
-Is there a counterexample for which this algorithm does not yield a graph realization?
-NOTE: I expect the answer to be affirmative, since this algorithm won't ever connect two vertices of low degree unless it has depleted everything else. Therefore, if any graph realization of a certain degree sequence must contain an edge between two vertices of low degree we're done, but I wasn't able to find such an example.

-REPLY [4 votes]: There are some problems with your algorithm:
-1)
-It fails to distinguish solvable from unsolvable positions: if given a sequence such as $n, n$ (two vertices, each of degree $n>1$) it will happily go along adding edges between the two. This leads into
-2)
-Your algorithm doesn't produce simple graphs regardless of tiebreakers:
-consider the sequence 64332222. This has a realization (just draw it). However, regardless of tiebreakers, your algorithm will always try to connect the first vertex to one of the 2s twice. This is one of the big upsides of Havel-Hakimi: by instantly depleting the degree of a node when adding edges to it, it doesn't run into the problem of multiedges.
-If you were to restrict your algorithm to only connect vertices that are not already adjacent, it would likely produce valid realizations, as in the sketch below.
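-To make the bookkeeping concrete, here is a minimal sketch in Python (my own illustration of the restricted variant just described, not a proof that it always succeeds): at each step it joins a vertex of highest remaining degree to a non-adjacent vertex of lowest nonzero remaining degree, and reports failure if no valid partner exists.
- def realize(degrees):
-     # Greedy variant restricted to non-adjacent pairs; returns an edge
-     # list, or None if the greedy choice gets stuck (no claim is made
-     # that getting stuck means the sequence is non-graphical).
-     remaining = list(degrees)
-     adjacent = {i: set() for i in range(len(degrees))}
-     edges = []
-     while any(remaining):
-         u = max(range(len(remaining)), key=lambda i: remaining[i])
-         partners = [i for i in range(len(remaining))
-                     if i != u and remaining[i] > 0 and i not in adjacent[u]]
-         if not partners:
-             return None
-         v = min(partners, key=lambda i: remaining[i])
-         adjacent[u].add(v); adjacent[v].add(u)
-         remaining[u] -= 1; remaining[v] -= 1
-         edges.append((u, v))
-     return edges
- print(realize([2, 2, 2, 1, 1]))  # recovers the path P_5 from the question
-Note that this has to carry the full adjacency structure, not just the residual degrees, which is exactly the extra cost discussed next.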
However, that'd come at the cost of having to pass along way more information than the list of (unused) degrees of each vertex. I think your algorithm will produce valid realizations because if you have a realization with edges between vertices, say $ab$ and $cd$, and none between $ac$ and $bd$, then you may remove $ab,cd$ and add $ac,bd$ to produce another realization of the same sequence. In this way it should be possible to always avoid/force edges between vertices of high and low degree.<|endoftext|>
-TITLE: Summation of a constant using sigma notation
-QUESTION [6 upvotes]: Apologies if this is a silly question, but is it possible to prove that $$\sum_{n=1}^{N}c=N\cdot c$$ or does this simply follow from the definition of sigma notation?
-I am fairly sure it's the latter, but for some reason I've managed to get myself thrown by the absence of a summation index (intuitively of course it makes sense that summing a constant $N$ times should equal $N\cdot c$).

-REPLY [8 votes]: It is possible to prove it. If you define $\sum$ notation recursively then it's something you can prove by induction. Specifically, given a sequence $a_1, a_2, \dots$ of numbers, you can define $\displaystyle\sum_{i=1}^n a_i$ recursively by:
-$$\displaystyle\sum_{i=1}^1 a_i = a_1 \quad \text{and} \quad \displaystyle\sum_{i=1}^{n+1} a_i = \left( \sum_{i=1}^n a_i \right) + a_{n+1} \text{ for all } n \in \mathbb{N}$$
-An inductive proof that $\displaystyle\sum_{i=1}^n c = nc$ then proceeds as follows:

-(Base case) $\displaystyle\sum_{i=1}^1 c = c$;
-(Induction step) Fix $n$ and suppose $\displaystyle\sum_{i=1}^n c = nc$. Then
-$$\sum_{i=1}^{n+1} c = \left( \sum_{i=1}^n c \right) + c = nc + c = (n+1)c$$

-So we're done. Here, the sequence was just the constant sequence $c,c,c,\dots$.

-REPLY [5 votes]: You can write it as $c\sum_{i=1}^N 1$ and then say that the sum of $N$ $1$'s is $N$, but that doesn't actually shed much light on the matter.
-The thing to remember, ultimately, is that $\sum$ isn't some abstruse function. It's just a more compact way of writing addition. Writing $\sum_{x=1}^N f(x)$ means writing $f(1)+f(2)+\cdots+f(N)$, and anything that you can do with addition you can do with it, because it is exactly addition.<|endoftext|>
-TITLE: how to prove the integral converges
-QUESTION [8 upvotes]: Suppose $$I=\lim_{x\rightarrow\infty}\lim_{b\searrow0}\int_{b}^{x}{g(y)dy}$$ exists and is finite, where $g$ is a continuous function from $\mathbb{R}^{+}$ to $\mathbb{R}$. Prove $$\lim_{x\rightarrow\infty,b\searrow 0}{\int_{b}^{x}{g(y)dy}}$$ exists and equals $I$.
-Can someone give me hints? I do not know where to start.

-REPLY [2 votes]: EDIT: Usually one likes to see "why" something is true intuitively and then translate that understanding into a proof. Here I just followed the definitions and found I was done - never did understand "why" it was so. Until now: It's a special case of the following obvious fact:
-Obvious Fact If $\lim_{x \rightarrow x_0}f(g(x),x) $ ... that is, if $\lim_{x \to a}\left(\lim_{y \to b}(F(x)+G(y))\right)=I$ then $\lim_{(x,y)\to(a,b)}(F(x)+G(y))=I$.

-Original:
-Inserting some parentheses to make things absolutely clear: You're assuming that
-$$I=\lim_{x\rightarrow\infty}\left(\lim_{b\searrow0}\int_{b}^{x}{g(y)dy}\right),$$
-and you want to show that $$\lim_{(x,b)\to(\infty,0^+)}{\int_{b}^{x}{g(y)dy}}=I,$$ right?
-My first reaction was to say this does not follow. The corresponding implication with a function $F(x,b)$ in place of $\int_b^x g$ is not true.
But of course, as I realized quickly when I started to construct a counterexample, $\int_b^x g$ cannot be an arbitrary function of $x$ and $b$.
-Suppose $I=0$ for simplicity and assume $\epsilon>0$. There exists $A$ so that
-$$\left|\lim_{b\to0}\int_b^xg\right|<\epsilon\quad(x\ge A).$$It follows that
-$$\left|\int_x^yg\right|<2\epsilon\quad(A\le x\le y).$$(Given $x,y\ge A$, choose $b$ so that $|\int_b^x|<\epsilon$ and $|\int_b^y|<\epsilon$.)
-Now choose $\delta>0$ so that
-$$\left|\int_b^Ag\right|<\epsilon\quad(0<b\le\delta).$$
-Then for every $x\ge A$ and $0<b\le\delta$ we get
-$$\left|\int_b^xg\right|\le\left|\int_b^Ag\right|+\left|\int_A^xg\right|<3\epsilon,$$
-which is exactly what was needed.<|endoftext|>
-TITLE: How can we prove $\mathbb{Q}(\sqrt 2, \sqrt 3, \ldots , \sqrt n ) = \mathbb{Q}(\sqrt 2 + \sqrt 3 + \cdots + \sqrt n )$
-QUESTION [8 upvotes]: I want to prove this statement.

-$$\mathbb{Q}(\sqrt 2, \sqrt 3, \ldots , \sqrt n ) = \mathbb{Q}(\sqrt 2 + \sqrt 3 + \cdots + \sqrt n )$$
- for any $n >1$.

-It looks like a very hard problem.
-How can I approach this one?

-REPLY [12 votes]: Let us show that $$\mathbb{Q}(\sqrt[d_1]{a_1}, \ldots, \sqrt[d_n]{a_n}) = \mathbb{Q}( \sqrt[d_1]{a_1}+ \cdots + \sqrt[d_n]{a_n})$$
-($a_l$, $d_l$ positive integers).

-By Galois theory, if $K$ is a field, $L\supset K$ a Galois extension, and $\alpha$, $\beta$ in $L$ are so that every $\sigma \in \text{Gal}(L/K)$ that fixes $\alpha$ also fixes $\beta$, then $\beta \in K(\alpha)$.
-Let now $\alpha = \sqrt[d_1]{a_1}+ \cdots + \sqrt[d_n]{a_n}$ and $\beta_l = \sqrt[d_l]{a_l}$, $1\le l \le n$. Consider a Galois extension $L$ containing all the $\beta_l$ (and so $\alpha$, too). Let $\sigma \in \text{Gal}(L/\mathbb{Q})$. We have $\sigma(\beta_l) = \omega_l \beta_l$ where $\omega_l$ is a $d_l$-th root of $1$. Assume that $\sigma(\alpha) = \alpha$, that is
-$$\sum \omega_l \beta_l = \sum \beta_l.$$
-Now the $\beta_l$'s are real and positive and we have
-$$| \sum \omega_l \beta_l| \le \sum | \omega_l \beta_l | = \sum \beta_l$$
-and if we have equality then all the modulus $1$ numbers $\omega_l$ have to be equal; their common value $\omega$ then satisfies $\omega \sum \beta_l = \sum \beta_l$ with $\sum \beta_l > 0$, so we conclude $\omega_l=1$ for all $l$, and so $\sigma(\beta_l)= \beta_l$ for all $l$.
-We conclude $\sqrt[d_l]{a_l} \in \mathbb{Q}( \sqrt[d_1]{a_1}+ \cdots + \sqrt[d_n]{a_n})$ for all $l$.<|endoftext|>
-TITLE: Is there an intuitive way of viewing the Law of Total Expectation $\mathbb{E}\big[\mathbb{E}[X|Y]\big]=\mathbb{E}[X]?$
-QUESTION [5 upvotes]: Law of total expectation
-If $\mathbb{E}\big[|X|\big]$ is finite then for any $Y$, $\;\mathbb{E}\big[\mathbb{E}[X\mid Y]\big]=\mathbb{E}[X]$.

-I remember reading this for the first time and thinking... hold up, what?
-The proof is simple, but I am wondering whether there is an intuitive reason why we might expect this result.

-REPLY [3 votes]: The conditional expectation of $X$ with respect to $Y$ is our best estimate of $X$ given exact knowledge of $Y.$ The expectation of any variable is our best estimate, given no specific knowledge about any variable at all. It seems reasonable then that our a priori expectation of the variable $E[X\mid Y]$, before we have any knowledge of $Y$, is just the general expectation of $X.$

-REPLY [3 votes]: That depends on why this is violating your intuition. For me, the basic intuition behind it is this: $E(X)$ is the expected value of the random variable $X$, across all possible conditions. For any random variable $Y$, let $y_1, y_2, y_3, \ldots$ represent the possible values of $Y$.
Then these $y_i$ are also, in some sense, a "cover set" of all possible conditions, and therefore if you take a weighted average of the conditional expected values $E(X \mid Y=y_i)$, you should obtain the overall expected value of $X$, namely $E(X)$.<|endoftext|>
-TITLE: Module which is not direct sum of indecomposable submodules
-QUESTION [11 upvotes]: I would like to find an example of a ring $R$ and an $R$-module $M$ which can't be written as a direct sum of indecomposable submodules, i.e.
- $$ M \not \cong \bigoplus\limits_{i \in I} M_i$$
- for all sets $\{M_i \;\vert\; i \in I \}$ of indecomposable submodules.

-In that case, I know that $M$ should be neither noetherian nor artinian, but I wasn't able to find such an example. Any help would be appreciated!

-REPLY [7 votes]: A good way to find examples like this is to look at infinite products. For instance, let $k$ be a field (or more generally, any ring with no nontrivial idempotent elements), let $T$ be an infinite set, let $R=k^T$ (a product of copies of $k$ indexed by $T$), and let $M=R$. It is not too hard to show that any direct summand of $M$ is of the form $k^S$ for some subset $S\subseteq T$ (to show this, use the fact that any direct summand of a ring as a module over itself is generated by an idempotent). Thus the only indecomposable direct summands of $M$ are those of the form $k^S$ when $S$ is a singleton. But the direct sum of all of these is just the infinite direct sum $k^{\oplus T}$, which is the proper submodule of $M$ consisting only of elements which are $0$ on all but finitely many coordinates. Thus $M$ is not a direct sum of indecomposable submodules.
-Here's another example, which shows you don't have to be working over some big complicated ring. Take $R=\mathbb{Z}$ and let $M=\mathbb{Z}^T$ for any infinite set $T$. I claim that $M$ is not a direct sum of indecomposable submodules. Since $M$ is not free, it suffices to show that any indecomposable submodule of $M$ is isomorphic to $\mathbb{Z}$. To show this, note that if $N\subseteq M$ is any nonzero submodule, then for some $t\in T$ the projection $p_t:N\to\mathbb{Z}$ onto the $t$th factor of the product is nonzero. Thus $p_t(N)=n\mathbb{Z}$ for some nonzero $n\in\mathbb{Z}$, and since $n\mathbb{Z}$ is free the surjection $p_t:N\to n\mathbb{Z}$ splits and gives a direct summand of $N$ isomorphic to $n\mathbb{Z}\cong \mathbb{Z}$. So if $N$ is indecomposable, it must be isomorphic to $\mathbb{Z}$.<|endoftext|>
-TITLE: Does $1^{\infty}=e$ or $1^{\infty}=1$?
-QUESTION [5 upvotes]: In fact the real question is: Does $\lim\limits_{n\to\infty}1^{n}=e$?
-I know that
-$$
-\lim\limits_{n\to\infty}\left(1+\dfrac{1}{n}\right)^n=e,
-$$
-so, can we say that $1^\infty=e$?
-And, by logic, this product
-$$\underset{\text{infinitely many times}}{1\cdot1\cdot\ldots\cdot1},$$
-gives $1$, not some other value.

-REPLY [2 votes]: Congratulations, you have discovered a false belief.

-False belief. Insertion of limits "rule" $$\lim_{x \rightarrow x_0}f(g(x),x) = \lim_{x \rightarrow x_0}f\left(\lim_{y \rightarrow x_0}g(y),x\right)$$

-Counterexample.
-$$f(a,b) = \left(1+a\right)^b \qquad g(a)=\frac{1}{a}$$
-Then $$\lim_{x \rightarrow \infty}f(g(x),x) = e, \qquad \lim_{x \rightarrow \infty}f\left(\lim_{y \rightarrow \infty}g(y),x\right) = 1$$<|endoftext|>
-TITLE: How to prove $d\omega=(\nabla_\mu\omega)_\nu dx^\mu\wedge dx^\nu$ without using coordinates
-QUESTION [5 upvotes]: This is exercise 7.8 b) of Nakahara's GTaP: Let $\omega\in\Omega^1(M)$ be a 1-form on a Riemannian manifold with Levi-Civita connection $\nabla$. Prove that
-$$
-\mathrm{d}\omega=(\nabla_\mu\omega)_\nu\, \mathrm dx^\mu\wedge\mathrm dx^\nu
-$$
-I proved it using the fact that $\mathrm dx^\mu\wedge\mathrm dx^\nu=-\mathrm dx^\nu\wedge\mathrm dx^\mu$, so:
-\begin{align}
-(\nabla_\mu\omega)_\nu\,\mathrm dx^\mu\wedge\mathrm dx^\nu & = (\partial_\mu\omega_\nu-{\Gamma^\lambda}_{\mu\nu}\,\omega_\lambda)\,\mathrm dx^\mu\wedge\mathrm dx^\nu\\
-& = \sum_{\mu<\nu}\left(\partial_\mu\omega_\nu-\partial_\nu\omega_\mu-({\Gamma^\lambda}_{\mu\nu}-{\Gamma^\lambda}_{\nu\mu})\omega_\lambda\right)\,\mathrm dx^\mu\wedge\mathrm dx^\nu\\
-& = \sum_{\mu<\nu}\left(\partial_\mu\omega_\nu-\partial_\nu\omega_\mu\right)\,\mathrm dx^\mu\wedge\mathrm dx^\nu\\
-& = \mathrm d\omega
-\end{align}
-I hope this is correct and makes sense. I don't like my solution because starting at the second line, it "quits" Einstein summation convention and needs an explicit summation symbol.

-Is there a way to prove this without "quitting" Einstein summation convention?
-Is there maybe even a way to prove it in a coordinate-free way?

-REPLY [6 votes]: Here's an outline of a coordinate-free proof.
-(1) For any $1$-form $\omega$ and any vector fields $X,Y$, there is the formula
-$$d\omega(X,Y) = X(\omega(Y)) - Y(\omega(X)) - \omega([X,Y]).$$
-(2) For any affine connection $\nabla$, there is the formula
-$$X (\omega(Y)) = (\nabla_X\omega)(Y) + \omega(\nabla_XY).$$
-By switching the roles of $X, Y$ and subtracting, this gives
-$$X \omega(Y) - Y \omega(X) = (\nabla_X \omega)(Y) - (\nabla_Y \omega)(X) + \omega(\nabla_XY - \nabla_YX).$$
-But $\nabla$ is torsion-free, so.... (left to you)
-(3) By plugging in $X = \frac{\partial}{\partial x^\mu}$ and $Y = \frac{\partial}{\partial x^\nu}$, we get..... (left to you)
-Remarks: Note that the proof works for any torsion-free affine connection $\nabla$, not just the Levi-Civita connection. This formula is an instance of "Cartan's First Structure Equation." Generalizations exist to $k$-forms for any $k \geq 1$.<|endoftext|>
-TITLE: Which groups act freely on $S^n$?
-QUESTION [20 upvotes]: When $n$ is even, it is easy to classify groups which act freely on $S^n$ using degree theory: if $G$ acts on $S^n$, then associating to each element $g \in G$ the degree of the map obtained from multiplication by $g$, one gets a map $d : G \to \{\pm 1\}$. It is easy to verify this is a homomorphism. If $G$ acts freely, multiplication by any nontrivial element $g$ is a fixed point free map, thus $d(g) = (-1)^{n+1} = -1$, making $d$ injective. The only nontrivial group which injects into $\Bbb Z/2$ is $\Bbb Z/2$ itself, so we're done.
-However, a lot of groups act freely on $S^n$ for odd $n$. For example, $\Bbb Z/p$ acts on $S^3$ freely for all primes $p$ (the so-called lens space action). What do we know about such groups? Is it possible to classify them?
-If $G$ is a finite group acting freely on $S^3$, then as free actions of finite groups on Hausdorff spaces are properly discontinuous, the quotient map $S^3 \to S^3/G$ is a covering projection.
Thus $S^3/G$ is a closed 3-manifold with fundamental group $G$, hence $G$ is a closed 3-manifold group. On the other hand, if $G$ is a finite closed 3-manifold group, let $M$ be such a manifold; then $G$ acts freely on $\tilde{M}$. But $\tilde{M}$ is a simply connected closed 3-manifold, hence homeomorphic to $S^3$ by the Poincaré conjecture, so $G$ must act on $S^3$ freely.
-Thus, finite groups acting freely on $S^3$ are precisely the finite closed 3-manifold groups. But what about infinite groups? Given an infinite group, how can we tell if it acts on $S^3$ or not? More generally, what about groups acting freely on $S^n$ for some fixed odd $n > 1$?

-[edit] I am only interested in actions of discrete groups. Also, any sort of general remark (long enough to not fit as a comment) or partial answers (like the answers below) are welcome to me; you can post them as answers.

-REPLY [5 votes]: This has almost but not quite been stated a few times, so to clear the air: the answer is known for finite groups; it is due to Madsen, Thomas, and Wall, and it says that a finite group $G$ acts freely on some sphere if and only if

-all of the abelian subgroups of $G$ are cyclic; equivalently, the cohomology is periodic; equivalently, $\mathbb{Z}_p \times \mathbb{Z}_p$ does not occur as a subgroup for any prime $p$; and
-every element of order $2$ is central.

-The necessity of the first condition is due to Smith and the necessity of the second condition is due to Milnor. This is taken from the introduction to Alejandro Adem's Constructing and Deconstructing Group Actions.<|endoftext|>
-TITLE: 20 balloons are distributed amongst 6 children: Probability that one child gets no balloon?
-QUESTION [7 upvotes]: 20 balloons are randomly distributed amongst 6 children. What is the probability that at least one child gets no balloon?
-What's the mistake in the following reasoning? (I know there has to be a mistake; by simulation I know that the actual probability is approximately 0.15, which is not what the following formula gives.)
-I started by thinking about the opposite case: what is the probability that every child gets at least one balloon? There are altogether ${20+6-1\choose 20} = {25\choose 20}$ ways to distribute the balloons amongst the children. The number of desired ways (i.e., ways to distribute the balloons so that every child gets at least one balloon) is ${14+6-1\choose 14} = {19\choose 14}$.
-So, the probability that every child gets at least one balloon, when the balloons are randomly distributed amongst the children, should be $$\frac{\binom{19}{14}}{\binom{25}{20}}$$
-For the opposite case, i.e. the probability that at least one child gets no balloon:
-$$1 - \frac{\binom{19}{14}}{\binom{25}{20}} = 0.78114\ldots$$
-At which point did I go wrong??
-BTW: I used the following R code to simulate:
- v <- vector()
- for (i in 1:100000){
- t <- table(sample(1:6, 20, replace=T))
- v[i] <- length(t)<6
- }
- print(mean(v))

-One Remark:
-The answer from mlu is in my opinion correct; thank you very much for it! However: my question was, where is the mistake in the above reasoning?
-The number of different ways to distribute $k$ indistinguishable balls (=balloons) into $n$ distinguishable boxes (=children) is ${n+k-1\choose k}$. So: where did I actually go wrong? The denominator as specified above is correct, right? So what's wrong with the numerator?
-Solution
-Thank you very much, again, mlu, for the answer given as a commentary below.
Now I got it: I counted the number of partitions and tried to calculate the probability with the Laplace technique (the denominator counting the total number of cases, and the numerator the number of cases we are interested in), but I missed that not every partition is equally probable. For instance, the partition where one child gets all the balloons is much less probable than the partition where children 1 to 4 get 3 balloons each and children 5 and 6 get 4 balloons each, which is clear even by intuition: in the first case, there is always just one possibility to put each balloon, whereas in the second case there are (at least at the beginning) many possibilities to put balloons.

-REPLY [4 votes]: Let's assume both children and balloons are distinguishable (labeled). Then the number of distributions corresponds to selecting a 20-digit sequence of numbers 1 to 6, giving $6^{20}$ possibilities. Let $E_k$ be the event that child $k$ does not receive a balloon; this event corresponds to selecting a 20-digit sequence not containing the number $k$, giving $5^{20}$ possibilities.
-$$P(\cup_k E_k) = \sum_k P(E_k) - \sum_{k<l} P(E_k \cap E_l) + \sum_{k<l<m} P(E_k \cap E_l \cap E_m) - \dots$$
-$$ P(\cup_k E_k) = \sum_{n=1}^5 (-1)^{n+1}\frac{\left(\begin{matrix} 6 \\ n \end{matrix}\right)(6-n)^{20}}{6^{20}} = $$
-$$ 6 \left(\frac{5}{6} \right)^{20} - 15 \left(\frac{4}{6}\right)^{20} + 20 \left(\frac{3}{6} \right)^{20} - 15 \left( \frac {2}{6} \right) ^{20} + 6 \left( \frac{1}{6} \right)^{20} $$<|endoftext|>
-TITLE: Contour integral of $\sqrt{z^{2}+a^{2}}$
-QUESTION [6 upvotes]: Suppose $a$ is real and nonnegative. Say we wanted to compute the above function (for whatever reason, be it to solve an improper real integral, or something else) along the curve $C$, as in the picture. I have chosen the contour so as to avoid the branch cut connecting the three branch points. Supposing $\arg\left ( z \right ) \in \left [ 0, 2\pi \right )$ I also made parametrisations for each part of the contour. However, I wasn't able to do so for the parts $C_{i}$, $i=1,2,3$.
-In several integrals like this and this one Ron talks about assigning a phase to the segments. To me it seems like he is assigning the phase as if the branch point were now the origin of the plane, and the phase he added was relative to that point. Am I right on this one? With that being said I would say that
-$$C2: z=iye^{i\pi}$$
-$$C3: z=iye^{-i\pi}$$
-$$C1: z=iye^{i0}$$
-However this doesn't look right, as the argument wasn't defined for $\left [ -\pi,\pi \right )$. How do we deal with these branch cuts? And how do we know what phase to add? Note that I have asked a similar question here for a different function, but I didn't receive satisfactory answers (due to my poor wording, I guess).

-REPLY [3 votes]: Let $\Omega=\mathbb{C}\setminus[-ai,ai]$. If $\gamma$ is any simple closed path in $\Omega$, then
-$$
- \int_{\gamma}\frac{w}{w^2+a^2}dw
-$$
-has value equal to $0,\pm 2\pi i,\pm 4\pi i,\pm 6\pi i,\cdots$. For any $z \in \Omega$, define $\gamma_{z}$ to be a path starting at $0$ (approached from the right of $0$) with termination point $z\in\Omega$, and define
-$$
- G(z) = a\exp\left\{\int_{\gamma_z}\frac{w}{w^2+a^2}dw\right\}.
-$$
-The function $G$ does not depend on the specific such path $\gamma_z$, and $G(0)=a$. Also,
-$$
- G'(z)=G(z)\frac{z}{z^2+a^2},\;\;\; z\in\Omega.
-$$
-Therefore $G(z)^2$ satisfies the following for $z\in\Omega$:
-$$
- \frac{\frac{d}{dz}G(z)^2}{G(z)^2}=\frac{2z}{z^2+a^2}=\frac{\frac{d}{dz}(z^2+a^2)}{z^2+a^2}, \\
- \frac{d}{dz}\frac{z^2+a^2}{G(z)^2}=0,\\
- G(z)^2=C(z^2+a^2).
-$$
-Evaluating at $z=0+$ gives $C=1$. Hence, $G(z)^2=z^2+a^2$, or $G(z)$ is a square root of $z^2+a^2$.
-The argument of $G$ is $0$ on the positive real axis because $\gamma$ may be chosen to be a straight line segment from $0+$ to $z$ on the positive real axis. Then the argument along $C_1$ is found by integrating
-$$
- \int_{0}^{r}\frac{\epsilon+is}{(\epsilon+is)^2+a^2}ds
-$$<|endoftext|>
-TITLE: Does this series of angles converge?
-QUESTION [6 upvotes]: Consider the sequence of squares and angles as in this figure:

-Since $\tan \alpha_n=\frac{1}{n}$, we can show that $\alpha_1+\alpha_2+\alpha_3=\frac{\pi}{2}$
-(see: Determine the angle of 3 drawn lines from each corner of 3 congruent squares)
-For $n>3$ the sum of the angles
-$$
-\beta_n= \sum_{i=1}^n \alpha_i
-$$
-becomes more difficult to find using the trigonometric formulas for the tangent of sum angles. So, the first question is whether there is some other method to find this sum.
-The second question is whether the series
-$$
-\sum_{i=1}^\infty \alpha_i
-$$
-converges or not and, if it converges, what the sum is.
-I've attempted a numerical experiment that gives: $\beta_{1000}=\pi \cdot2.294981074\ldots$ and $\beta_{2000}=\pi \cdot2.515537077\ldots$
-But, obviously, this is not conclusive.

-REPLY [2 votes]: $$
-\tan \alpha_n = \frac 1 n.
-$$
-$$
-\frac x {\tan x} \to 1 \text{ as }x\to0.
-$$
-Hence
-$$
-\frac{\alpha_n}{1/n} = \frac{\alpha_n}{\tan\alpha_n} \to 1 \text{ as }n\to\infty.
-$$
-If $a_n,b_n>0$ for all $n$ and $\lim\limits_{n\to\infty}\dfrac{a_n}{b_n}$ is a strictly positive number, then the two series $\sum\limits_{n=1}^\infty a_n$ and $\sum\limits_{n=1}^\infty b_n$ either both converge or both diverge. We know that $\sum\limits_{n=1}^\infty \frac 1 n$ diverges. Therefore the other series also diverges.<|endoftext|>
-TITLE: Finding Lyapunov function for a given system of differential equations
-QUESTION [11 upvotes]: I am being introduced to Lyapunov functions in order to determine the stability of a given system. I know that finding a Lyapunov function is not easy, so I would like to ask for any trick or hint in order to find a Lyapunov function for
-$$ \left\{\begin{array}{l}x'=-4y+x^2,\\y'=4x+y^2\end{array}\right. $$
-at $(0,0)$. I have tried combinations of $x^{2n}$ and $y^{2m}$ and also products of $x$ and $y$ but got nothing clear. Also, I've looked at the phase plot for the system and it is clear that $(0,0)$ is a stable point (not asymptotically stable). Thanks in advance.

-REPLY [5 votes]: It is not quite clear why you put a bounty on this question, since @Evgeny answered it in the best possible way.
However, if you are looking for a Lyapunov function, here it is (up to an additive constant): -$$ -L(x,y)=\frac{2^{2/3} \left(1-\frac{x (x+4) \left(x^2-12 x+8 y+48\right)}{\sqrt[3]{x^3 (x+4)^3} \left(x^2-4 y\right)}\right) \left(\frac{x (x+4) \left(x^2-12 x+8 y+48\right)}{2 \sqrt[3]{x^3 (x+4)^3} \left(x^2-4 y\right)}+1\right) \left(\left(1-\frac{x (x+4) \left(x^2-12 x+8 y+48\right)}{\sqrt[3]{x^3 (x+4)^3} \left(x^2-4 y\right)}\right) \log \left(2^{2/3} \left(1-\frac{x (x+4) \left(x^2-12 x+8 y+48\right)}{\sqrt[3]{x^3 (x+4)^3} \left(x^2-4 y\right)}\right)\right)+\left(\frac{x (x+4) \left(x^2-12 x+8 y+48\right)}{\sqrt[3]{x^3 (x+4)^3} \left(x^2-4 y\right)}-1\right) \log \left(2\ 2^{2/3} \left(\frac{x (x+4) \left(x^2-12 x+8 y+48\right)}{2 \sqrt[3]{x^3 (x+4)^3} \left(x^2-4 y\right)}+1\right)\right)-3\right)}{9 \left(-\frac{\left(x^2-12 x+8 y+48\right)^3}{2 \left(x^2-4 y\right)^3}+\frac{3 x (x+4) \left(x^2-12 x+8 y+48\right)}{2 \sqrt[3]{x^3 (x+4)^3} \left(x^2-4 y\right)}-1\right)}-\frac{x (x+4) \left(4 \log \left(x^2-4 x+16\right)+x\right)}{18 \sqrt[3]{2} \sqrt[3]{x^3 (x+4)^3}} -$$<|endoftext|> -TITLE: Would like a hint for proving $(\forall x P(x)) \to A \Rightarrow \exists x( P(x) \to A)$ in graphical proof exercise on The Incredible Proof Machine -QUESTION [7 upvotes]: Update: Updated the title now that I've observed that we can use math in the title. I've also gone thru and removed dots. The tool expresses quantification using dots like this $\forall x.P(x)$ rather than $\forall x P(x)$. I originally used these dots in my post as well. -I'm going through all of the proofs in The Incredible Proof Machine and need a hint for one of the proofs. (The Incredible Proof Machine is an online graphical proof tool.) -Given: $(\forall x P(x)) \to A $ -Prove: $\exists x (P(x) \to A )$ -It seems like a trivial proof and here's my hand-waving attempt: There are two cases to consider: - -$\forall(x) P(x)$: In this case its trivial to prove the conclusion since we can prove $A$. -$\neg \forall(x) P(x) $ In this case it must be that there exists some $c$ such that $ \neg P(c)$. Therefore trivially, $P(c) \to A$ and therefore $\exists x (P(x) \to A)$. - -However, I get stuck trying to prove it using the actual logic connectors available in the tool. I'm not able to use the same approach in the second case of my case analysis (or at least I'm not sure how). This is my attempt: - -If you look right in the middle of the diagram you'll see there are no connections (and a small red dot indicating an error). This approach doesn't seem to lead anywhere. -I'm using two instances of TND to do case analysis. The first case is as described above. But I don't know how to handle the second case, so I used a TND in the second case to generate two sub-cases: $P(y_{10}) \vee (P(y_{10}) \to \bot)$. The second case of this TND is again trivial, but the first case doesn't lead anywhere. -In the middle of the proof I have two facts $P(y_{10})$ and $(\forall x P(X)) \to \bot$. These two facts don't seem like they can lead to the conclusion. -I'm looking for a hint of an approach to try to solve this proof. - -REPLY [3 votes]: For other interested parties, I did get some hints from the tool community. If you wish to review these they are on this issue at GitHub -UPDATE: I finally finished the proof. The key to success was figuring out (with help) how to prove the identity $\neg \forall x P(x) \Rightarrow \exists x (\neg P(x))$. That identity was not built into the tool and can be a little tricky/cumbersome to prove from axioms. 
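-For reference, the identity can also be checked in a proof assistant. Here is a minimal sketch in Lean 4 (my own illustration, assuming classical logic; the tool's TND blocks play the role of `Classical.byContradiction` below):
- -- From ¬∀x P(x), derive ∃x ¬P(x); classical reasoning is essential here.
- example {α : Type} (P : α → Prop) (h : ¬ ∀ x, P x) : ∃ x, ¬ P x :=
-   Classical.byContradiction fun hne =>
-     h fun x => Classical.byContradiction fun hnp => hne ⟨x, hnp⟩
-The outer contradiction assumes no witness exists; the inner one then shows each $x$ would have to satisfy $P(x)$, contradicting the hypothesis. This mirrors the nested case analyses in the graphical proof.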
If you attempt the proof using the tool, learn how to create custom blocks like "proof by contradiction" and "case analysis"; otherwise the proof gets very messy. Here's the finished proof (still a little messy).
-The blocks with the snowmen on them are "proof by contradiction". The block with the baseball is case analysis.<|endoftext|>
-TITLE: How do I evaluate this sum $\sum\limits_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^2n!}$?
-QUESTION [10 upvotes]: How do I evaluate this sum:
-$$\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^2n!}$$
-Note: The series converges by the ratio test. I have tried to use this sum: $$ \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}= \ln (2) $$ but I didn't succeed. Might there be other techniques which I don't know?
-Thank you for any help.

-REPLY [4 votes]: We have that
-$$ \frac{1}{n^2}=\int_{0}^{1}(-\log x)\,x^{n-1}\,dx \tag{1}$$
-hence:
-$$ \sum_{n\geq 1}\frac{(-1)^{n+1}}{n^2 n!}=\int_{0}^{1}(-\log x)\sum_{n\geq 1}\frac{(-1)^{n-1} x^{n-1}}{n!}\,dx =\int_{0}^{1}\frac{1-e^{-x}}{x}(-\log x)\,dx\tag{2}$$
-and
-$$ \sum_{n\geq 1}\frac{(-1)^{n+1}}{n^2 n!}=\frac{d}{d\alpha}\left.\int_{0}^{1}\frac{1-e^{-x}}{x^\alpha}\,dx\,\right|_{\alpha=1}\tag{3}$$
-is a value of the derivative of a sum of an exponential integral function and a $\Gamma$ function.
-The series definition directly implies that
-$$ \sum_{n\geq 1}\frac{(-1)^{n+1}}{n^2 n!}= \phantom{}_3 F_3\left(1,1,1;2,2,2;-1\right).\tag{4}$$<|endoftext|>
-TITLE: Is there an obvious reason why the number of binary Lyndon words is equal to the number of irreducible polynomials over GF(2)?
-QUESTION [6 upvotes]: The title of Sloane's A001037 is: Number of degree-$n$ irreducible polynomials over $GF(2)$; number of $n$-bead necklaces with beads of 2 colors when turning over is not allowed and with primitive period $n$; number of binary Lyndon words of length $n$.
-The first few terms of the sequence are (for $n=1,2,\ldots$) $2,1,2,3,6,9,\ldots$
-The formula for the sequence is $\frac{1}{n}\sum_{d|n}\mu(\frac{n}{d})\cdot 2^d$.
-I am familiar with the derivation given by Wilf in generatingfunctionology on page 62. This derivation explains why the formula enumerates binary Lyndon words and, equivalently, the "$n$-bead necklaces" statement in the title.
-I know the 2 irreducible polynomials of degree 1 are $x$ and $x+1$.
-The degree 2 polynomial is $x^2+x+1$.
-The degree 3 polynomials are $x^3+x^2+1$ and $x^3+x+1$.
-The degree 4 polynomials are $x^4+x+1$, $x^4+x^3+x^2+x+1$ and $x^4+x^3+1$.
-The binary Lyndon words are:
-$a(1)=2=\#\{"0","1"\}$,
-$a(2)=1=\#\{"01"\}$,
-$a(3)=2=\#\{"001","011"\}$,
-$a(4)=3=\#\{"0001","0011","0111"\}$
-I would like to know if there is an easy correspondence between these objects or if there is some explanation as to why the formula counts the irreducible polynomials over $GF(2)$.

-REPLY [6 votes]: Necklaces and Lyndon words (of the same size) count the same objects. Each necklace represents an equivalence class with respect to rotation, and the Lyndon word is a way to choose a unique representative of each class. So the more canonical equivalence may be from necklaces to irreducible polynomials.
-Monic irreducible polynomials of degree $n$ are the same as Galois orbits of size $n$. Galois groups of finite fields being cyclic, those are cyclic length $n$ Galois orbits, which (by the degree of the irreducible polynomial) lie in the unique degree $n$ extension of the finite field.
-What does such an orbit look like? The degree $n$ extension of a finite field $F$ is an $n$-dimensional vector space over $F$.
It has a basis on which the Galois group acts by permutations, in other words by a cyclic action of order $n$. Given the basis we have the correspondence: elements of $F^n$ are necklaces, elements of $F$ are the colors of the beads in the necklace, the Galois group rotates the necklaces, Galois orbits are the sets of roots of monic irreducible polynomials. -Nothing requires $|F|=2$ in this argument.<|endoftext|> -TITLE: Image of unit sphere being hyper ellipse proof (SVD) -QUESTION [7 upvotes]: When I check proofs of the singular value decomposition, they all assume the following is true: - -The image of the unit sphere under any $m \times n$ matrix is a hyperellipse. - -However I could not find a decent proof for this, even though I googled for hours. I keep seeing notes like: "This geometric fact is not obvious. We shall restate it in the language of linear algebra and prove it later. For the moment, assume it is true." -Maybe I am using the wrong keywords. Could you please give me a link, textbook name, etc. (a reference) for this proof? - -REPLY [6 votes]: Suppose $T$ is a linear map on a finite-dimensional inner product space $V$. The Polar Decomposition states that there is an isometry $S$ on $V$ such that -$$ -T = S \sqrt{T^* T}. -$$ -Because $\sqrt{T^*T}$ is a positive operator, the Finite-Dimensional Spectral Theorem states that there is an orthonormal basis $e_1, \dots, e_n$ of $V$ and nonnegative numbers $s_1, \dots, s_n$ such that -$$ -\sqrt{T^*T}e_j = s_j e_j -$$ -for $j = 1, \dots, n$. Thus $\sqrt{T^*T}$ maps the unit sphere of $V$ to a hyper-ellipse, and because $S$ is an isometry, $T$ also maps the unit sphere of $V$ to a hyper-ellipse. -The Singular Value Decomposition follows easily from the Polar Decomposition without mentioning hyper-ellipses (see, for example, Chapter 7 in my book Linear Algebra Done Right).<|endoftext|> -TITLE: A limit about $\int_0^\infty x^{-x}e^{tx}dx$ -QUESTION [9 upvotes]: How to prove $$\lim_{t\to +\infty}\frac{\int_0^{+\infty} x^{-x}e^{tx}dx}{e^{\frac12(t-1)+e^{t-1}}}=\sqrt{2\pi}.$$ -Someone asked this difficult question; I have tried the Taylor formula of $e^x$ but failed. Could you show me the method? - -REPLY [5 votes]: Writing $x^x = \exp(x \ln x)$, your integral is -$$ \int_0^\infty \exp((t - \ln x)x) \; dx$$ -The maximum of $(t - \ln x) x$ with respect to $x > 0$ occurs at -$x_0 = \exp(t-1)$, and taking a Taylor series around that point -$$ (t - \ln x)\,x = x_0 - \dfrac{(x - x_0)^2}{2 x_0} + O((x-x_0)^3)$$ -Thus (after taking care of some details) your integral is asymptotic as $t \to \infty$ to -$$ \int_{-\infty}^\infty \exp\left(x_0 - \dfrac{(x - x_0)^2}{2 x_0}\right)\; dx = \sqrt{2\pi} \exp\left(e^{t-1} + t/2 - 1/2\right) $$<|endoftext|> -TITLE: Compact Metric Spaces and Separability of $C(X,\mathbb{R})$ -QUESTION [21 upvotes]: Let $(X,d)$ be a compact metric space. Show that $C(X,\mathbb{R})$ is a separable metric space (space of continuous functions from $X$ to $\mathbb{R}$). - -I first showed that if $(X,d)$ is compact, then it must be separable, so we have a countable dense subset $\{x_{1},x_{2},...\}$ of $X$. Then, I'm not so sure on how to move forward. I was thinking of using the Stone-Weierstrass Theorem for the set of functions: -$F=\{1,f_{1},f_{2},...\}$ -where $f_{n}(x)=d(x,x_{n})$ for $x \in X$. Then, this implies that the above set is dense in $C(X,\mathbb{R})$ and countable, so $C(X,\mathbb{R})$ is separable if $F$ is a unital separating subalgebra.
-Clearly $F$ is unital, but I'm not sure on how to show it is separating and a subalgebra of $C(X,\mathbb{R})$ (it is a subset of the former set since the distance function is continuous). How would one proceed with this step? -Thank you for your help. - -REPLY [16 votes]: Let $F$ be as you said. -Let $\mathbb R[F]$ be the $\mathbb R$-subalgebra generated by $F$. -We want to use the Stone-Weierstrass theorem on the latter (rather than $F$) and show that it is dense. This will suffice for a proof that $C(X,\mathbb R)$ is separable, since $\mathbb Q[F]$ is countable and dense in $\mathbb R[F]$. -$\mathbb R[F]$ contains $1$ and it is obviously an algebra. Let's show that it separates points. -Let $x\ne y\in X$. Since $\{x_n\}_{n\in\mathbb N}$ is dense, there must exist $x_m$ such that $d(x,x_m)\le \frac13 d(x,y)$. It cannot hold that $d(y,x_m)=d(x,x_m)$: if it held, then $$d(x,y)\le d(x,x_m)+d(y,x_m)\le \frac23 d(x,y)$$ which is absurd. -So the function $f_m$ separates $x$ and $y$. -Stone-Weierstrass can therefore be used on $\mathbb R[F]$, completing the proof. -Clarifications: - -How is $\mathbb R[F]$ defined? Either the intersection of all the $\mathbb R$-subalgebras of $C(X,\mathbb R)$ which contain $F$ or, equivalently, as the $\mathbb R$-vector subspace of $C(X,\mathbb R)$ generated by the products of finitely many elements of $F$. -How is $\mathbb Q[F]$ defined? Either the intersection of all the $\mathbb Q$-subalgebras of $C(X,\mathbb R)$ that contain $F$ or, as above, the $\mathbb Q$-vector subspace of $C(X,\mathbb R)$ generated by the products of finitely many elements of $F$. -Why is $\mathbb Q[F]$ dense in $\mathbb R[F]$? It is rather easy, actually, but the notation is a bit tedious. -If $g\in\mathbb R[F]$, then there exist $k\in\mathbb N,\ g_1, \cdots, g_k\in F$ and a finite set $S\subseteq \mathbb N^k$ such that $$g=\sum_{(n_1,\cdots,n_k)\in S} \lambda_{n_1,\cdots,n_k}g_1^{n_1}\cdots g_k^{n_k}$$ for some $\lambda_{n_1,\cdots,n_k}\in\mathbb R$. -Now, if you approximate each $\lambda_{n_1,\cdots,n_k}$ with rationals $$\alpha_{n_1,\cdots,n_k}^{(t)}\stackrel{t\to\infty}{\longrightarrow}\lambda_{n_1,\cdots,n_k}$$ -and call $$g^{(t)}=\sum_{(n_1,\cdots,n_k)\in S} \alpha^{(t)}_{n_1,\cdots,n_k}g_1^{n_1}\cdots g_k^{n_k}\in\mathbb Q[F]$$ -you get $$\Vert g-g^{(t)}\Vert_\infty=\left\Vert \sum_{(n_1,\cdots,n_k)\in S} (\lambda_{n_1,\cdots,n_k}-\alpha_{n_1,\cdots,n_k}^{(t)})g_1^{n_1}\cdots g_k^{n_k}\right\Vert_\infty\le\\\le \left(\sum_{(n_1,\cdots,n_k)\in S}\Vert g_1^{n_1}\cdots g_k^{n_k}\Vert_\infty\right)\cdot\max_{(n_1,\cdots,n_k)\in S}\left\vert\lambda_{n_1,\cdots,n_k}-\alpha^{(t)}_{n_1,\cdots,n_k}\right\vert\stackrel{t\to\infty}{\longrightarrow}0$$<|endoftext|> -TITLE: Show that $Hom_R(R^n,M) \cong M^n$ for R-modules -QUESTION [5 upvotes]: We want to show that $Hom_R(R^n,M) \cong M^n$ for $n\in\Bbb Z_{\ge0}$ -I have already shown that $Hom_R(R,M) \cong M$ by letting $f:Hom_R(R,M)\rightarrow M$ be given by $f(\phi) = \phi(1)$. -I showed that $f$ is bijective and is a group homomorphism, thus $Hom_R(R,M) \cong M$. -It seems too easy to define the same function for $Hom_R(R^n,M) \cong M^n$; I must be doing something wrong. Injectivity and surjectivity were both straightforward, and showing it's a homomorphism seemed to go okay... Can you just use the same $f$? How can you show these are isomorphic?
- -REPLY [2 votes]: In general, the following theorem holds: -$$\operatorname{Hom}_R\left(\bigoplus_{i\in I}A_i, B \right)\cong \prod_{i\in I}\operatorname{Hom}_R(A_i,B)$$ -Define a map $\phi$ from the former to the latter by -$$\phi(f) := \langle f\circ \iota_i\rangle_{i\in I}$$ -(where $\iota_i:A_i\to \bigoplus_{i\in I} A_i$ is the canonical inclusion) and a map $\psi$ from the latter to the former by -$$\psi(\langle g_i\rangle_{i\in I})(a) = \sum_{i\in I}g_i(a_i).$$ -The sum above ranges over all $i\in I$. Each $g_i$ is a morphism from $A_i$ to $B$ and $a = \langle a_i\rangle_{i\in I} \in \bigoplus_{i \in I} A_i$. We have $a_i=0$ for all but finitely many $i$, so $\psi(\langle g_i\rangle_{i\in I})$ is well-defined. You can check that $\phi$ and $\psi$ are homomorphisms and inverses of each other. -You can find the finite version of the above theorem, and its proof is essentially the same.<|endoftext|> -TITLE: Is there a direct proof that a compact unit ball implies automatic continuity? -QUESTION [16 upvotes]: One of the fundamental theorems in functional analysis is that if $X$ is a Banach space (say over $\Bbb C$) with a compact closed unit ball, then $X$ is finite-dimensional. -The usual proof is by assuming $X$ is infinite dimensional, and constructing by induction a sequence of vectors on the unit sphere which are not only linearly independent, but also have distances $>\frac12$ from one another. -But you can also prove this from automatic continuity. Namely, if every linear functional $f\colon X\to\Bbb C$ is continuous then $X$ has finite dimension. If $\{v_n\mid n\in\Bbb N\}$ are linearly independent and lie on the unit sphere, the function $f(v_n)=n$ can be extended to a linear functional on $X$. It is unbounded and therefore not continuous. - -Can you prove directly from the assumption that the closed unit ball (equiv. the unit sphere) is compact that every linear functional is continuous? - -REPLY [2 votes]: This is definitely not an answer to the question. But it's a possibly amusing proof of the result - I was thinking about how to prove that automatic continuity statement and found I'd proved that $X$ is finite-dimensional. -Say $B(x,r)$ is the closed ball in $X$ about $x$ of radius $r$. Suppose $B(0,1)$ is compact. Then there exists a finite set $F\subset B(0,1)$ such that $$B(0,1)\subset\bigcup_{y\in F}B(y,1/2).$$ -Hence, by the normed-vector-spaceness of $X$, for every $x\in F$ we have $B(x,1/2)\subset\bigcup_{y\in F}(x+B(y/2,1/4))$, so that $$B(0,1)\subset\bigcup_{y_0,y_1\in F}B(y_0+ y_1/2,1/4).$$Etc. It follows that for every $x\in B(0,1)$ there exist $y_0,y_1,\dots\in F$ with $$x=\sum_{n=0}^\infty2^{-n}y_n.$$ -And now if you regroup the terms in that sum it follows that $x$ is a linear combination of the elements of $F$, qed. (In fact $x/2$ is in the convex hull of $F$.)<|endoftext|> -TITLE: Elliptic functions as inverses of Elliptic integrals -QUESTION [15 upvotes]: Let us begin with some (standard, I think) definitions. -Def: An elliptic function is a doubly periodic meromorphic function on $\mathbb{C}$. -Def: An elliptic integral is an integral of the form -$$f(x) = \int_{a}^x R\left(t,\sqrt{P(t)}\right)\ dt,$$ -where $R$ is a rational function of its arguments and where $P(t)$ is a third or fourth degree polynomial with simple roots. -I have often heard the claim that an elliptic function is (or can be) defined as the inverse of an elliptic integral. However, I have never seen a proof of this statement.
As someone who is largely unfamiliar with the subject, I find that most of the references I could dig up refer to the special case of the Jacobi elliptic functions, which appear as inverse functions of the elliptic integrals of the first kind. Maybe the claim I'm referring to is simply talking about the special case of Jacobi elliptic functions, but I believe the statement holds in generality (I could be wrong). -So, can anyone provide a proof or reference (or counter-example) to something akin to the following? -Claim: The elliptic functions are precisely the inverses of the elliptic integrals, as I've defined them above. That is, every elliptic function arises as the inverse of some elliptic integral, and conversely every elliptic integral arises as the inverse of some elliptic function. - -REPLY [4 votes]: The claim as stated is not true. (E.g., if $R$ has only even powers of the second variable, the resulting function $f$ is the integral of a rational function.) What is true is that every general elliptic integral of this form can be expressed as a linear combination of integrals of rational functions and the three Legendre canonical forms (elliptic integrals of the first, second, and third kind). This is a classical result, and there are several different algorithms to reduce a general elliptic integral to this form, some of them implemented in common computer algebra systems. -A modern (freely available) reference with a list of classical references is here: B.C. Carlson, Toward Symbolic Integration of Elliptic Integrals, Journal of Symbolic Computation, 28 (6), 1999, 739–753<|endoftext|> -TITLE: Equivalence between smooth and topological fiber bundles -QUESTION [7 upvotes]: All manifolds in this post are Hausdorff and second-countable. -Is it true that two smooth fiber bundles with the same fiber, base manifold and structure group (that is, a Lie group $G$ of diffeomorphisms of the fiber) are equivalent if and only if they are equivalent as continuous fiber bundles (so the equivalence need only be continuous)? If it is true, can you give me a reference? -Thank you. - -REPLY [8 votes]: As long as the Lie groups are finite dimensional, yes, this is true. The key is that you can make finite-dimensional approximations to the classifying space $BG$: that is, there are finite dimensional smooth manifolds $B_iG$ with inclusion maps $B_iG \hookrightarrow B_{i+1}G$ such that, as topological spaces, $BG = \lim B_iG$. I don't have a good reference for this or a sketch of the proof. For your favorite groups, it's obvious: $BO(n) = \text{Gr}(n,\infty) = \lim_k \text{Gr}(n,n+k)$, for instance, or $BU(1) = \lim_k \Bbb{CP}^k$. The general case, then, is the same idea: you find a sequence of spaces $E_nG$ that $G$ acts freely on whose limit is contractible. If I remember a reference or a proof for this I'll edit it in. -Now that we have this: let $M$ be a finite dimensional smooth manifold. -Two $G$-bundles being smoothly equivalent is the same as saying that the smooth maps $M \to BG$ are smoothly homotopic. (To make sense of this, one either makes a Hilbert manifold out of $BG$, in which case the first paragraph wasn't necessary, as we'll see, or you assume $M$ is compact so the image lies in some $B_iG$.) The proof of this is essentially the same as the proof that the same is true continuously of continuous maps. -Now we're in a setting in which we can apply the various smooth map approximation theorems.
This is the reason I tried to make things finite dimensional; I don't know how to prove the same approximation theorems in the infinite-dimensional setting, though they're almost certainly true. Anyway, let's get going. -First, any continuous map is homotopic to a smooth map. So any continuous vector bundle on $M$ must be isomorphic to a smooth vector bundle. Next, suppose I have smooth vector bundles $f_i: M \to BG$. Suppose they are continuously homotopic: that is, there is a continuous map $f_t: M \times I \to BG$. Now (again, either because we can restrict the codomain to a finite-dimensional manifold or because you're willing to use infinite-dimensional approximation theorems) we know that we can approximate this by a smooth map without changing the values on the boundary, since they're already smooth. So this means that if vector bundles $f_0$, $f_1$ are continuously isomorphic, they are smoothly isomorphic. -This becomes false if you allow the Lie group to become infinite-dimensional (say, $\text{Diff}(M)$.) The simplest examples I know are 4-manifold bundles over the circle which are topologically trivial but not smoothly trivial. But because $M$-bundles over the circle are in bijection with $\pi_0 \text{Homeo}(M)$ topologically or $\pi_0 \text{Diff}(M)$ smoothly, you just need a diffeomorphism that's continuously isotopic to the identity but not smoothly. One is provided, e.g., in Ruberman, "An obstruction to smooth isotopy in dimension 4".<|endoftext|> -TITLE: Prove without using graphing calculators that $f: \mathbb R\to \mathbb R,\,f(x)=x+\sin x$ is both a one-to-one and onto (bijective) function. -QUESTION [6 upvotes]: Prove that the function $f:\mathbb R\to \mathbb R$ defined by $f(x)=x+\sin x$ for $x\in \mathbb R$ is a bijective function. - -The codomain of $f(x)=x+\sin x$ is $\mathbb R$ and the range is also $\mathbb R$. So this function is an onto function. -But I am confused in proving this function is one-to-one. -I know about its graph and I know that if a function passes the horizontal line test (i.e. horizontal lines should not cut the function at more than one point), then it is a one-to-one function. The graph of this function looks like the graph of $y=x$ with sinusoids going along the $y=x$ line. -If I use a graphing calculator at hand, then I can tell that it is a one-to-one function and that the functions $f(x)=\frac{x}{2}+\sin x$ or $\frac{x}{3}+\sin x$ are not, but in the examination I need to prove this function is one-to-one theoretically, without graphing calculators. -I tried the method which we generally use to prove a function is one-to-one but with no success. -Let $f(x_1)=f(x_2)$ and we have to prove that $x_1=x_2$ in order for the function to be one-to-one. -Let $x_1+\sin x_1=x_2+\sin x_2$ -But I am stuck here and could not proceed further. - -REPLY [2 votes]: Assume $f$ is Many-One. -So there do exist some $x_{1}$ and $x_{2}$ where $x_{1}\neq x_{2}$ such that $$f(x_{1})=f(x_{2})$$ -$$x_{1}+\sin x_{1}=x_{2}+\sin x_{2}$$$$x_{1}-x_{2}=\sin x_{2}-\sin x_{1}=-2\cos\left(\frac{x_{1}+x_{2}}{2}\right)\sin\left(\frac{x_{1}-x_{2}}{2}\right)$$ -$$\left|x_{1}-x_{2}\right|=2\left|\cos\left(\frac{x_{1}+x_{2}}{2}\right)\sin\left(\frac{x_{1}-x_{2}}{2}\right)\right|\leq2\left|\sin\left(\frac{x_{1}-x_{2}}{2}\right)\right|$$ -$$\implies\left|\frac{\sin\left(\frac{x_{1}-x_{2}}{2}\right)}{\frac{x_{1}-x_{2}}{2}}\right|\geq 1$$ -which is obviously a contradiction (since $\left|\sin u\right|<\left|u\right|$ for $u\neq 0$). -So our assumption that $f(x)$ is Many-One is false.
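-(Alternatively, a standard shortcut not used above: $f'(x)=1+\cos x\ge 0$, and $f'$ vanishes only at the isolated points $x=(2k+1)\pi$, so $f$ is strictly increasing and hence one-to-one.)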
-$f(x)$ being onto follows since $f$ is continuous and $f(x)\to\pm\infty$ as $x\to\pm\infty$ (intermediate value theorem).<|endoftext|> -TITLE: Is a factorial-primorial mesh divisible by the factorial infinitely often? -QUESTION [5 upvotes]: Question -Suppose the factorial and primorial functions conceived a baby through the act of addition and called it $n!\#$, and it looked like this: -$$n!\# = \prod_{i=1}^n (p_i + i) = (2 + 1)(3 + 2)(5 + 3)(7 + 4) \dots (p_n + n)$$ -Does it happen infinitely often that the factorial divides this function, i.e. are there infinitely many $n$ for which -$$\dfrac{n!\#}{n!}$$ -is an integer? -Background -The factorial function $n!$ is well known and is given by: -$$\prod_{i=1}^n i = 1 \times 2 \times 3 \times \dots \times n$$ -The marginally less well known primorial function $p_n\#$ is given by: -$$\prod_{i=1}^n p_i = 2 \times 3 \times 5 \times \dots \times p_n$$ -where $p_i$ is the $i^{th}$ prime. -The function defined above is catalogued in the OEIS under the following link: OEIS reference. However there is no reference to divisibility by $n!$ there. -What I know so far -Here are the first few values I've computed for $n!\#$: -$\begin{array}{c|ccccc} -n & n! & n!\# & \textrm{divides?} \\ -\hline -1 & 1 & 3 & \textrm{Yes} \\ -2 & 2 & 15 & \textrm{No} \\ -3 & 6 & 120 & \textrm{Yes}\\ -4 & 24 & 1320 & \textrm{Yes} \\ -5 & 120 & 21120 & \textrm{Yes} \\ -6 & 720 & 401280 & \textrm{No} \\ -7 & 5040 & 9630720 & \textrm{No} \\ -\end{array}$ -My Intuition -I'm of two minds as to the truth of this statement, and I don't have enough experience with numbers to judge either way, so I'm on the fence for now. Here are my two basic arguments, which are far from rigorous. -Argument for: The primes are randomly scattered, whereas the sequence $1, 2, 3, \dots, n$ isn't. So adding $p_n + n$ should preserve the randomness that was there in the primes in the first place. By randomness, I mean there should be no preference shown for particular prime factors over others. Now, since $n!\#$ grows much faster than $n!$, it should eventually start sweeping up all the primes in $n!$. Perhaps there is even some $N$ such that the divisibility of $n!\#$ by $n!$ is true for all $n > N$, but this is quite strong and I'm not so sure. -Argument against: On the other hand, what throws doubt on the conjecture is that because $n!\#$ grows so fast, perhaps it grows too fast, and misses lots of little primes that are bundling up in $n!$. In other words, perhaps there is a point $N$ such that the divisibility of $n!\#$ by $n!$ is false for all $n > N$. - -REPLY [3 votes]: This is not a proof, but here is some relevant numerical evidence for the conjecture that $n=1,3,4,5$ are the only times (not counting $n=0$) this fraction is an integer. -Firstly, these are the only solutions for $n$ up to $15000$. (All Mathematica code below.) -More importantly, the number of prime factors in the denominator that are not cancelled out (counted with multiplicity so that $20=2^2\cdot 5$ has $3$ prime factors) appears to grow roughly linearly with $n$, yet it would have to drop to $0$ for the fraction to be an integer.
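-A rough Python equivalent of the basic divisibility search in the Mathematica code below (my sketch; it assumes the sympy package for the $i$-th prime): - -from math import factorial -from sympy import prime - -hits, nfp = [], 1 -for n in range(1, 1000): -    nfp *= prime(n) + n  # running product n!# = (2+1)(3+2)...(p_n + n) -    if nfp % factorial(n) == 0: -        hits.append(n) -print(hits)  # prints [1, 3, 4, 5] in this range, matching the table above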
-Here is an unconvincing plot for $0\le n<100$: - -Here is a plot for $0\le n\le7000$ with the line $y=(0.0868166)x+27$ drawn on top: - -(*Mathematica Code*) - -f[0]=1; -f[n_]:=f[n]=f[n-1]*(Prime[n]+n); - -Print[Table[ If[IntegerQ[f[n]/(n!)], n, Nothing], {n, 0, 15000}]]; - -countprimes[n_] := countprimes[n] = -Total[Transpose[Select[FactorInteger[f[n]/(n!)], Function[x, 0 > x[[2]] ] ]][[2]]]; - -Print[ListPlot[Table[{n, If[n==0||n==1||n==3||n==4||n==5,0,countprimes[n]]}, {n, 0, 99}]]]; - -m=Table[{n, If[n==0||n==1||n==3||n==4||n==5,0,countprimes[n]]}, {n, 0, 7000}]; - -Print[LinearModelFit[m, x, x]]; - -Show[ListPlot[m], Plot[27 + 0.0868166 x, {x, 0, 7000}, PlotStyle -> Orange]]<|endoftext|> -TITLE: Given a polynomial find the minimum value of the variable. -QUESTION [5 upvotes]: If $x^5 - x^3 + x = a$, -then we have to find the minimum value of $x^6$ in terms of $a$. -The answer given is $2a - 1$ if that gives any idea. -I have no idea how to approach this problem. -A hint would do fine. - -REPLY [4 votes]: I reinterpret the problem a bit differently (based upon the proposed answer) to show that -$$\tag1x^5-x^3+x=a\implies x^6\ge 2a-1. $$ -Note that $(1)$ is trivially true for $a\le \frac12$. Hence we may assume that $a$ is positive. -We have $$(x^2+1)a=(x^2+1)(x^5-x^3+x)=x^7+x $$ -so that $x$ must be positive. -Divide by $x$ and subtract $1$ to arrive at -$$x^6=\frac{(x^2+1)a}{x}-1\ge\frac{2xa}{x}-1=2a-1, $$ -where we used $x>0$ and $x^2+1\ge 2x$ (from $x^2+1-2x=(x-1)^2\ge 0$).<|endoftext|> -TITLE: A function of two cumulative probability distributions with same first 2 moments -QUESTION [7 upvotes]: Let $\Phi_1$ and $\Phi_2$ be cumulative probability distribution functions with domain $[L, \infty)$, $L\geq 0$, both distributions having the same expectation $\mu$ and the same second moment (hence finite second moment, $\textbf{a modification and added constraint to the earlier post}$), and the kurtosis of the distribution behind $\Phi_2$ is higher than that of $\Phi_1$ (new constraint). -Looking for whether $G'-G \geq 0$, with -$$G=1-\frac{1}{\mu}\int_L^\infty \left(1- \Phi_1(x)\right)^2 \, \mathrm{d} x$$ -$$G'=1-\frac{1}{\mu}\int_L^\infty \left(1- \frac{1}{2}\left(\Phi_1(x)+\Phi_2(x)\right)\right)^2 \, \mathrm{d} x$$ -$\textbf{Approach: }$ -Consider a square integrable function $s(x):[L,\infty) \rightarrow (-1,1)$, as a difference, with -$\Phi_2(x)=\Phi_1(x) + s(x)$. -$$G'-G=\frac{1}{\mu}\left(\int_L^{\infty } s(x) \, dx- \int_L^{\infty } s(x) \Phi (x) \, dx-\frac{1}{4}\int_L^{\infty } s(x)^2 \, dx\right) $$ -It looks like we have $\int_L^\infty s(x) \, dx=0$ and $\int_L^\infty x \, s(x) \, dx=0$, since both distributions have the same first two moments and are in the positive domain, and integrating by parts we get $ \int_L^\infty \left(1-\Phi(x)\right) \, dx= \int_L^\infty \left(1-\Phi(x)-s(x)\right) \, dx$, and $ \int_L^\infty x \left(1-\Phi(x)\right) \, dx= \int_L^\infty x \left(1-\Phi(x)-s(x)\right) \, dx.$ -What are the bounds on $G'-G$? Are there calculation mistakes in the above? -We also have $s(L)=s(\infty)=0, s(x)\leq 1-\Phi_1(x)$. By Cauchy-Schwarz, we also get $\left(\int s(x) \Phi (x)\right)^2 \leq \int s(x)^2 \int \Phi (x)^2$, but I can't see where this can be useful. - -REPLY [3 votes]: I don't know if that particularly helps (since it does not directly relate to the moments of distributions characterized by $F_1,F_2$, and relies on elementary calculations), but maybe it will foster further discussion.
Let $f=\overline{F}_1$, $g=\overline{F}_2$ and $||f||^2=\int f^2(x)\,dx$ and assume that $F_1,F_2$ have the same mean and $||f||,||g||<\infty$. Then $G'-G\ge 0$ if and only if -$$ -2\int (f^2-g^2)+\int(f-g)^2\ge 0. -$$ -The above shows immediately that if $||f||>||g||$ then $G'-G\ge 0$. Assume $||f||<||g||$ and denote $z=||g||/||f|| >1$. Further transformations give another useful(?) iff condition: -$$ -||f||\, ||g|| \left( \frac{3}{z}-z-2\,\frac{\langle f,g\rangle}{||f||\,||g||}\right)\ge 0. -$$ -The term $\frac{\langle f,g\rangle}{||f||\,||g||}$ is the 'angle' or 'correlation' between $f$ and $g$ in $L^2$ (not to be confused with correlation between random variables with cdfs $F_1$ and $F_2$), hence takes values in $[-1,1]$. As a result, $G'-G<0$ if $z>3$, and if $z\in(1,3)$ then it can go either way (depending on the assumed 'correlation').<|endoftext|> -TITLE: Are there more than 2 digits that occur infinitely often in the decimal expansion of $\sqrt{2}$? -QUESTION [14 upvotes]: The other day I got to thinking about the decimal expansion of $\sqrt{2}$, and I stumbled upon a somewhat embarrassing problem. -There cannot be only one digit that occurs infinitely often in the decimal expansion of $\sqrt{2}$, because otherwise it would be rational (e.g. $\sqrt{2} = 1.41421356237\ldots 11111111\ldots$ is not possible). -So there must be at least two digits that occur infinitely often, but are there more? Is it possible that e.g. $\sqrt{2} = 1.41421356237\ldots 12112111211112\ldots$? - -REPLY [8 votes]: This problem is wide open. It is conjectured that every irrational algebraic number is absolutely normal (i.e. in every base, digits appear asymptotically with the same density). However, it is not even known whether there is any algebraic irrational with some three digits appearing infinitely many times in any base! Hence, to the best of our knowledge, every irrational algebraic number could eventually have only zeroes and ones in every base.<|endoftext|> -TITLE: Sum of $n$ terms of the series $\frac{1}{1 \cdot 3}+\frac{2}{1 \cdot 3 \cdot5}+\frac{3}{1 \cdot 3 \cdot 5 \cdot 7}+\cdots$ -QUESTION [10 upvotes]: I need to find the sum of $n$ terms of the series -$$\frac{1}{1\cdot3}+\frac{2}{1\cdot 3\cdot 5}+\frac{3}{1\cdot 3\cdot 5\cdot 7}+\cdots$$ -And I've no idea how to move on. It doesn't look like an arithmetic progression or a geometric progression. As far as I can tell it's not telescoping. What do I do?
- -REPLY [6 votes]: Clearly $$U_{r+1}=\frac{r}{1 \cdot 3\cdot 5\cdot 7 \cdots(2r-3)\cdot(2r-1)\cdot(2r+1)}$$ -$$2U_{r+1}=\frac{2r}{1 \cdot 3\cdot 5\cdot 7 \cdots(2r-3)\cdot(2r-1)\cdot(2r+1)}$$ -$$2U_{r+1}=\frac{(2r+1)-1}{1 \cdot 3\cdot 5\cdot 7 \cdots(2r-3)\cdot(2r-1)\cdot(2r+1)}$$ -$$2U_{r+1}=\frac{(2r+1)}{1 \cdot 3\cdot 5\cdot 7 \cdots(2r-3)\cdot(2r-1)\cdot(2r+1)}-\frac{1}{1 \cdot 3\cdot 5\cdot 7 \cdots(2r-3)\cdot(2r-1)\cdot(2r+1)}$$ -$$2U_{r+1}=\frac{1}{1 \cdot 3\cdot 5\cdot 7 \cdots(2r-3)\cdot(2r-1)}-\frac{1}{1 \cdot 3\cdot 5\cdot 7 \cdots(2r-3)\cdot(2r-1)\cdot(2r+1)}$$ -Now let $$V_r=\frac{1}{1 \cdot 3\cdot 5\cdot 7 \cdots(2r-3)\cdot(2r-1)}$$ -Then $$V_{r+1}=\frac{1}{1 \cdot 3\cdot 5\cdot 7 \cdots(2r-3)\cdot(2r-1)\cdot(2r+1)}$$ -Thus $$2U_{r+1}=V_r-V_{r+1}$$ -$$\displaystyle 2\sum_{r=1}^{n} U_{r+1}=\sum_{r=1}^{n} \{V_r-V_{r+1}\}=V_1-V_{n+1}$$ -$$\displaystyle 2\sum_{r=1}^{n} U_{r+1}=V_1-V_{n+1}$$ -$$\displaystyle 2\sum_{r=1}^{n} U_{r+1}=\frac{1}{1}-\frac{1}{1 \cdot 3\cdot 5\cdot 7 \cdots(2n-3)\cdot(2n-1)\cdot(2n+1)}$$<|endoftext|> -TITLE: Is there a special value for $\frac{\zeta'(2)}{\zeta(2)} $? -QUESTION [13 upvotes]: The answer to an integral involved $\frac{\zeta'(2)}{\zeta(2)}$, but I am stuck trying to find this number - either to a couple of decimal places or an exact value. -In general the logarithmic derivative of the zeta function is the Dirichlet series of the von Mangoldt function: -$$-\frac{\zeta'(s)}{\zeta(s)} = \sum_{n \geq 1} \Lambda(n) n^{-s} $$ -Let's cheat: Wolfram Alpha evaluates this formula as: -$$ \frac{\zeta'(2)}{\zeta(2)} = - 12 \log A + \gamma + \log 2 + \log \pi \tag{$\ast$}$$ -This formula features some interesting constants: - -$A$ is the Glaisher–Kinkelin constant 1.2824271291006226368753425688697... -$\gamma$ is the Euler–Mascheroni constant 0.577215664901532860606512090082... -$\pi$ is of course 3.14... - -Wikipedia even says that $A$ and $\pi$ are defined in similar ways... which is an interesting philosophical point. - -Do we have a chance of deriving $(\ast)$? - -REPLY [12 votes]: By differentiating both sides of the functional equation$$ \zeta(s) = \frac{1}{\pi}(2 \pi)^{s} \sin \left( \frac{\pi s}{2} \right) \Gamma(1-s) \zeta(1-s),$$ we can evaluate $\zeta'(2)$ in terms of $\zeta'(-1)$ and then use the fact that a common way to define the Glaisher-Kinkelin constant is $\log A = \frac{1}{12} - \zeta'(-1)$. -Differentiating both sides of the functional equation, we get -$$\begin{align} \zeta'(s) &= \frac{1}{\pi} \log(2 \pi)(2 \pi)^{s} \sin \left( \frac{\pi s}{2} \right) \Gamma(1-s) \zeta(1-s) + \frac{1}{2} (2 \pi)^{s} \cos \left(\frac{\pi s}{2} \right) \Gamma(1-s) \zeta(1-s)\\ &- \frac{1}{\pi}(2 \pi)^{s} \sin \left(\frac{\pi s}{2} \right)\Gamma^{'}(1-s) \zeta(1-s) - \frac{1}{\pi}(2 \pi)^{s} \sin \left(\frac{\pi s}{2} \right)\Gamma(1-s) \zeta'(1-s). \end{align}$$ -Then letting $s =-1$, we get $$\zeta'(-1) = -\frac{1}{2\pi^{2}}\log(2 \pi)\zeta(2) + 0 + \frac{1}{2 \pi^{2}}(1- \gamma)\ \zeta(2) + \frac{1}{2 \pi^{2}}\zeta'(2)$$ since $\Gamma'(2) = \Gamma(2) \psi(2) = \psi(2) = \psi(1) + 1 = -\gamma +1. \tag{1}$ -Solving for $\zeta'(2)$, -$$ \begin{align} \zeta'(2) &= 2 \pi^{2} \zeta'(-1) + \zeta(2)\left(\log(2 \pi)+ \gamma -1\right) \\ &= 2 \pi^{2} \left(\frac{1}{12} - \log (A) \right) + \zeta(2)\left(\log(2 \pi)+ \gamma -1\right) \\ &= \zeta(2) - 12 \zeta(2) \log(A)+ \zeta(2) \left(\log(2 \pi)+ \gamma -1\right) \tag{2} \\ &= \zeta(2) \left(-12 \log(A) + \gamma + \log(2 \pi) \right).
\end{align}$$ -$(1)$ https://en.wikipedia.org/wiki/Digamma_function -$(2)$ Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$ -EDIT: -If you want to show that indeed $$\zeta'(-1)= \frac{1}{12}- \lim_{m \to \infty} \left( \sum_{k=1}^{m} k \log k - \left(\frac{m^{2}}{2}+\frac{m}{2} + \frac{1}{12} \right) \log m + \frac{m^{2}}{4} \right) = \frac{1}{12}- \log(A),$$ you could differentiate the representation $$\zeta(s) = \lim_{m \to \infty} \left( \sum_{k=1}^{m} k^{-s} - \frac{m^{1-s}}{1-s} - \frac{m^{-s}}{2} + \frac{sm^{-s-1}}{12} \right) \ , \ \text{Re}(s) >-3. $$ -This representation can be derived by applying the Euler-Maclaurin formula to $\sum_{k=n}^{\infty} {k^{-s}}$.<|endoftext|> -TITLE: Pairwise sums are perfect squares. -QUESTION [11 upvotes]: I thought of this problem as a simplification of the Euler brick problem. - -$1)$ For which $n$ is it possible to find $n$ distinct positive integers $a_1,a_2,\ldots,a_n$ such that all their pairwise sums, namely $a_i+a_j$ for $i \not = j$, are perfect squares. - -For example when $n=3$ we can choose: $4$, $21$ and $60$ (with the sums $25$, $64$ and $81$). Another example is $1$, $24$ and $120$. -If it's not possible for every $n$ then let $f(n)$ be the largest number of sums which can be perfect squares simultaneously. -So $f(3)=3$. -I can also achieve $f(4) \geq 4$ with $1,3,35,46$. -My other question is: - -$2)$ What bounds can we achieve for $f(n)$? (Even particular values are good.) What I want is a better than 'linear' bound. - -The motivation to ask this came from the Euler brick problem which asks to find three positive integers $x,y,z$ such that $x^2+y^2$, $y^2+z^2$, $x^2+z^2$ are all perfect squares. -There are parametric solutions for this problem but for the $4$D brick problem not even a solution is known (see here https://en.wikipedia.org/wiki/Euler_brick) -This hard $4$D version made me ask this simplified version. -I am (almost) sure that I'm not the first considering this problem, so there might be known results in the research literature, but I haven't found them yet. -Thanks to everyone who can help me with this problem. I appreciate all your efforts. -EDIT: I added the distinct condition, else the problem is easy. Here are the details: -Choose some $x=2a^2$ and then choose all the numbers to be $x$. All the sums are now perfect squares so $f(n)=\binom{n}{2}$ for every $n$. - -REPLY [6 votes]: My Latest Answer -I have discovered a less trivial approximation for the growth rate of $f(n)$. But first I want to be rigorous about exactly what $f(n)$ is counting. I'm doing this in case I misinterpreted the question. My understanding is that $f(n)$ is the maximum number of distinct pairs $(i, j)$ with $1 \leq i < j \leq n$ such that there exists a set of $n$ positive integers $\{a_1, a_2, \dots, a_n\}$ with $a_i + a_j = s^2$ for some $s$. Clearly, $f(n) \leq {n \choose 2}$, since we can pick at most ${n \choose 2}$ pairs from $n$ objects. -Now, to find a lower bound for $f(n)$, we just need to find a set of pairs and a corresponding set of values, such that all values indexed by the pairs sum to perfect squares. Let's consider the set $\{1, 2, 3, \dots, n\}$ and define $h(n)$ as the number of distinct pairs of elements in this set that add to perfect squares. Clearly $h(n) \leq f(n)$. I will now put forth a very simple demonstration that: -$$n^{\frac32} \sim h(n)$$ -Consider adding $n+1$ to the set $\{1, 2, 3, \dots, n\}$.
We now form all the pairs $(1, n+1), (2, n+1), \dots, (n, n+1)$ whose corresponding sums are $n+2, n+3, \dots, 2n+1$. Now, let $s(a, b)$ denote the number of perfect squares in the interval $[a, b]$. Then we have: -$$h(n+1) = h(n) + s(n+2, 2n+1)$$ -Now, how many perfect squares are in the interval $[n+2, 2n+1]$? Well, roughly there are: -$$\sqrt{2n+1} - \sqrt{n + 2} \approx \sqrt{2n} - \sqrt{n} = \sqrt{n}(\sqrt{2} - 1)$$ -So, roughly speaking, we have: -$$h(n+1) \approx h(n) + \sqrt{n}(\sqrt{2} - 1)$$ -Expanding out the recursion and factoring out the $(\sqrt{2} - 1)$, we get: -$$h(n) \sim \sqrt{1} + \sqrt{2} + \dots + \sqrt{n} \sim \int_1^n \sqrt{x}\,dx \sim n^{\frac32}$$ -My maths here is not up to my usual standard of rigour, but I think it's sound. This would imply that the growth rate of $f(n)$ is somewhere between $n^{\frac32}$ and $n^2$. - -Update: We can arrive at a lower bound for $h(n)$ in a different way, avoiding the use of integration. Consider the set $\{1, 2, 3, \dots, n\}$. There are $\lfloor\sqrt{n}\rfloor$ perfect squares in this set. For each perfect square $k^2$, we can form it with $\lfloor\dfrac{k^2 - 1}{2}\rfloor \geq \dfrac{k^2}{2} - 1$ unique pairs: -$$(1, k^2 - 1), (2, k^2 - 2), \dots, (\lfloor\dfrac{k^2 - 1}{2}\rfloor, \lceil\dfrac{k^2 + 1}{2}\rceil)$$ -These aren't all the squares that can be formed, but they are all possible, so they are valid for a lower-bound argument. Let $R=\lfloor\sqrt{n}\rfloor$ for ease of notation. A lower bound is then: -$$\sum_{i=1}^{R}\left(\dfrac{i^2}{2} - 1\right) = \frac12\sum_{i=1}^{R}i^2 - R = \dfrac{R(R+1)(2R+1)}{12} - R$$ -Clearly this is of the order $n^{\frac32}$. -My Boring Original Answer -I will give a very trivial answer to $(2)$. We have the bound: -$$n - 1 \leq f(n)$$ -To see why, choose $a_1 = 1$ and for all $i \neq 1$ choose $a_i = i^2 - 1$. Then we have that for all $i \neq 1$, $a_i + a_1 = i^2$. Since there are $n - 1$ values of $i$ with $1 < i \leq n$, we obtain the lower bound. - -Update: Paying tribute to the commenter Crostul and the OP ComplexPhi, who have done more work than I, we can improve this lower bound to: -$$f(n) \geq f(k) + n - k$$ -for any $k \leq n$. -Noting the discovery of the commenter Zander above, we have that $f(4) = 6$, and so we have the following bound: -$$f(n) \geq n + 2 \;\;\; \mathrm{where} \;\;\; (n \geq 4)$$<|endoftext|> -TITLE: Is $\int_{M_{n}(\mathbb{R})} e^{-A^{2}}d\mu$ a convergent integral? -QUESTION [9 upvotes]: Is the following integral a convergent integral? Can we compute it precisely? -$$\int_{M_{n}(\mathbb{R})} e^{-A^{2}}d\mu $$ -Here $\mu$ is the usual measure on $M_{n}(\mathbb{R})\simeq \mathbb{R}^{n^{2}}$, -so $\mu$ can be taken to be $\mu=\prod_{i,j} da_{ij}$ -Note: if this integral were convergent, either in the Lebesgue or in the Riemann sense, then it would be equal to a scalar matrix. Because for every invertible matrix $P$ we have: -$P^{-1}(\int_{M_{n}(\mathbb{R})} e^{-A^{2}}d\mu) P= \int_{M_{n}(\mathbb{R})} e^{-(P^{-1}AP)^{2}}d\mu=\int_{M_{n}(\mathbb{R})} e^{-A^{2}}d\mu$ since the mapping $A\mapsto P^{-1}AP$ is a measure-preserving and volume-preserving linear map. Now we apply the change of variables formula for the integral. - -REPLY [4 votes]: With $n=2$, we can look at matrices $A$ such that $A^2$ has eigenvalues with negative real part. When $T(A)^2-4\Delta(A)<0$, where $T$ denotes the trace and $\Delta$ the determinant, the real part of the eigenvalues of $A^2$ is $\frac{T^2}{4}-\frac{4\Delta-T^2}{4}=\frac{2T^2-4\Delta}{4}$.
So $A^2$ has eigenvalues with negative real part provided $T^2<2\Delta$ (a stronger condition than $T^2<4\Delta$). One of the simplest matrices with this property is -$$Q=\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}.$$ -Now consider a small perturbation of $Q$: -$$\begin{bmatrix} \delta_1 & -1+\delta_2 \\ 1+\delta_3 & \delta_4 \end{bmatrix}$$ -The inequality now reads -$$\delta_1^2 + 2 \delta_1 \delta_4 + \delta_4^2 < 2 \delta_1 \delta_4 - 2(-1+\delta_2)(1+\delta_3).$$ -Equivalently: -$$\delta_1^2+\delta_4^2<2+2\delta_3-2\delta_2-2\delta_2 \delta_3.$$ -In view of this inequality we can consider the hypercube $H$ with "radius" $1/12$ around $Q$. If $A \in H$ then $T^2-2\Delta<-1$ and so $\frac{2T^2-4\Delta}{4}<-1/2$. Thus the eigenvalues of $A^2$ will have real part less than $-1/2$. Also the volume of $H$ is $(1/6)^4>0$. -Now the integral of $e^{-A^2}$ over the set of all scalar multiples of elements of $H$ cannot converge absolutely.<|endoftext|> -TITLE: the $\partial\bar{\partial}$-lemma dilemma -QUESTION [6 upvotes]: In the question here Simplifying the Kahler form, user290605 asked a question about how it is that when we take the differential of the Kahler form:$$\mathcal{K}=\frac{\sqrt{-1}}{2\pi}g_{i\bar{j}}dz^i\wedge d\bar{z}^{\bar{j}},$$ we get$$\partial_ig_{j\bar{k}}=\partial_jg_{i\bar{k}} \hspace{1cm} \text{and} \hspace{1cm} \partial_{\bar{i}}g_{j\bar{k}}=\partial_{\bar{k}}g_{j\bar{i}}.\hspace{1cm} (2)$$ Then my question was: how, starting from (2) -(for example here p. 45), do we get $$g_{i\bar{k}}=\partial_i\partial_{\bar{k}}K(z,\bar{z})$$ where $K(z,\bar{z})$ is the Kahler potential? -As you can see, John answered me there by saying that - -That follows from the fact that $\mathcal{K}$ is closed and of type -$(1,1)$. This is called the $\partial\bar{\partial}$-lemma and the -proof can be found in (e.g.) p.14 of this note. Note that -$\bar{\partial}$-Poincare lemma is needed. - -However, as a physicist I am not familiar with this lemma and would really appreciate it if anyone could elaborate on this more. - -REPLY [7 votes]: The $\partial\overline{\partial}$-lemma says that a closed (1,1)-form $\omega$ locally arises as $\partial\overline{\partial}f$ for some smooth function $f$. This is in the same spirit as the regular Poincare lemma (which says closed differential forms are locally exact) but instead of exactness meaning "equal to d(something)", as in the Poincare lemma, it here means $\partial\overline{\partial}$(something). It is worth remembering that the Poincare lemma is true on all smooth manifolds, the $\overline{\partial}$-lemma (in which 'exactness' means $\overline{\partial}$(something)) is true on all complex manifolds, but we only have a general proof of the $\partial\overline{\partial}$-lemma on Kahler manifolds, which is one of the reasons they are distinctly important.<|endoftext|> -TITLE: Is the point promised by Borsuk-Ulam stable under perturbation of the map? -QUESTION [12 upvotes]: The Borsuk-Ulam theorem says that, given a continuous map $f: S^n \to \Bbb R^n$, there is some point $x \in S^n$ with $f(x)=f(-x)$. There may, of course, be many such points (maybe $f$ is constant!) But I'd still like to show that some "equivariant" point is stable in the following sense. -We start with a map $\varphi: X \times S^n \to \Bbb R^n$, $X$ some compact manifold. We think of this as a continuously varying family of maps $\varphi_x: S^n \to \Bbb R^n$, given by $\varphi_x(p) = \varphi(x,p)$. The Borsuk-Ulam theorem guarantees there is some point $p_x$ with $\varphi_x(p_x) = \varphi_x(-p_x)$.
-What I'd like to know is: is there always a continuous function $\psi: X \to S^n$, such that $\varphi_x(\psi(x)) = \varphi_x(-\psi(x))$? (This should be called stability of the equivariant point because given a map $\varphi_x$, under a mild perturbation of $\varphi_x$, we may still find two equivariant points that are close to one another.) Said another way, given a continuous family of maps $S^n \to \Bbb R^n$, can I continuously choose an equivariant point? -It would be good to prove this at first for $X = I$. If one can do that, and prove a relative version of the theorem for $D^n$ relative its boundary (including $n=1$), one can prove it for all finite CW-complexes. I tend to think that it will be true provided it's true for $X = I$. But I can't seem to make any progress on this case. - -REPLY [7 votes]: As stated, the answer is no. Consider the case of $n=1$ so that maps $f: S^n \rightarrow \mathbb{R}^n$ are simply maps $f: [0, 2\pi] \rightarrow \mathbb{R}$ for which $f(0) = f(2\pi)$, and the Borsuk-Ulam theorem states for every such continuous map $f$ there exists $\theta \in [0, 2\pi)$ such that $f(\theta) = f(\theta+\pi)$. -Let $X = [0,1]$ and consider $\varphi: [0,1] \times [0, 2\pi] \rightarrow \mathbb{R}$ given by -$$\varphi(t,\theta) = t(1-t) \phi(\theta),$$ -where $\phi: [0, 2\pi] \rightarrow \mathbb{R}$ is supported in $[0, \pi]$ and defined on its support by -$$\phi(\theta) = |\theta-\pi/2|-\pi/2 \text{ (for }0 \le \theta \le \pi).$$ -The graph of $\phi$ is given by - -Notice that $\varphi : [0,1] \times [0, 2\pi] \rightarrow \mathbb{R}$ is continuous and in fact defines a homotopy from the constant $0$ function to itself. Moreover, -$$\{ (t, \theta) \in [0,1] \times [0,2\pi] : \varphi(t,\theta) = \varphi(t,\theta + \pi) \} =\big(\{0,1\}\times[0, 2\pi]\big) \cup\big([0,1] \times \{0, \pi, 2\pi\}\big).$$ -Now, of course, we can continuously choose an equivariant point for $\varphi$. Namely, take $\psi :[0,1] \rightarrow [0,2\pi]$ to be the constant $0$ (or $\pi$ or $2\pi$) function. -However, consider what happens if we translate $\phi$ in $\theta$ and concatenate the resulting homotopy $\hat{\varphi}$ with the homotopy $\varphi$ above. Explicitly, consider $\Phi : [0,1] \times [0, 2\pi] \rightarrow \mathbb{R}$ defined by -$$\Phi(t,\theta) = \varphi(2t, \theta), \text{ if }0 \le t \le 1/2 \text{, and} $$ -$$\Phi(t,\theta) = \varphi(2t-1, \theta-\pi/2), \text{ if }1/2 \le t \le 1.$$ -Now, the equivariant points for this family are -$$\{ (t, \theta) \in [0,1] \times [0,2\pi] : \Phi(t,\theta) = \Phi(t,\theta + \pi) \} =\big(\{0,1/2,1\}\times[0, 2\pi]\big) \cup\big([0,1/2] \times \{0, \pi, 2\pi\}\big) \cup \big( [1/2,1] \times\{\pi/2, 3\pi/2\}\big), \text{ which is pictured as}$$ - -It follows that there does not exist any function $\Psi :[0,1] \rightarrow [0,2\pi]$ such that $\Psi(t)$ is an equivariant point for all $t$ and $\Psi$ is continuous at $t=1/2$. -I'll additionally remark that, while this example shows that it's impossible to continuously choose equivariant points in a neighborhood of $t=1/2 \in [0,1]$, it is still possible to continuously choose equivariant points in a "one-sided neighborhood" of $t=1/2$. I expect, however, that even this is impossible in general and that a counter-example might be obtained by concatenating infinitely many $\Phi(t, \theta)$ which are suitably rescaled.<|endoftext|> -TITLE: What are the prerequisites for studying mathematical logic? 
-QUESTION [16 upvotes]: I am looking to study mathematical logic; however, I find that introductory books are very daunting, which kind of disheartens me. You see, slowly but surely, I started to realize that the maths which I have learned did not just pop out of thin air, but is a collection of systems, which must have been developed via some other system, i.e., maths did not develop itself. -So I began to look into the origins of mathematics, and read that it was developed via a type of logic, which exists sort of by 'default', via a set of axioms, and then of course I looked up the definition of axioms. -So given that I'd be studying a type of logic whose origins are self-evident axioms, naturally I believed there would be no prerequisites. However, in looking up mathematical logic, I have come across things such as Boolean algebra, sets, first-order logic, some other type of logic, called 'traditional logic', as well as references to a sort of calculus, though not in a mathematical sense, I think. -So all in all, I am trying to develop a type of mental spider web, and I am trying to find out the strands which lie at the absolute bounds so that I may learn this mystical logic. Though I have no idea where to start. -Side note: This is the book I have started reading: http://www.dainf.cefetpr.br/~kaestner/Logica/MaterialAdicional/announceRautemberg.pdf -Credit goes to Wolfgang Rautenberg. - -REPLY [5 votes]: This is an old question, but let me plug my favorite logic textbook: "Computability and Logic" by Boolos, Burgess, and Jeffrey. As the name implies, it has a strong computability-theoretic focus which you may not be interested in; however, it also has a self-contained treatment of first-order logic (chapters 9-10 and 12-14) which I found the clearest by far of the books I had access to when I was first learning this stuff. Its presentation of Godel's theorems (chapters 11 and 15-18, building on chapters 1-4 and 6-7) is also excellent, in my opinion. (And besides, computability theory is really cool.) -It ends with a collection of further topics; some of this material is usually only covered in more advanced and specialized courses, but it's actually quite accessible, so it's nice to have it in one place in a more introductory text. I'm not sure I would have chosen those exact topics to include rather than others, but it's certainly a reasonable selection.<|endoftext|> -TITLE: Ring with Unique Simple Module -QUESTION [5 upvotes]: Let $A$ be a not necessarily commutative unital ring with a unique simple module (up to isomorphism). Let $\mathfrak m$ be the annihilator of this simple module, which is a two-sided ideal. We claim that $\mathfrak m$ is a maximal two-sided ideal. If $I$ is a maximal left ideal, then $A/I$ is a simple module and its annihilator is contained in $I$, since any annihilating element must kill $1+I$. If $J$ is a two-sided ideal contained in $I$, then $J$ must annihilate $A/I$, since if $x\in J, y\in A$, then $x(y+I)=xy+xI\subseteq I$, since $xy\in J\subseteq I$. Now, if $M$ is a maximal two-sided ideal (which exists by Zorn's Lemma), then there's a maximal left ideal $I$ containing $M$ (again by Zorn). Then, $A/I$ is simple and its annihilator is a two-sided ideal containing $M$ and thus equal to $M$, which also equals $\mathfrak m$ because there's a unique simple module. Hence, $\mathfrak m$ is the unique maximal two-sided ideal.
-If $A$ is an Artinian ring, then $A/\mathfrak m$ is also an Artinian ring (since any infinite descending chain of left ideals in the quotient lifts to an infinite descending chain in $A$). Furthermore, $A/\mathfrak m$ is a simple ring since $\mathfrak m$ is a maximal two-sided ideal, so by Artin-Wedderburn, $A/\mathfrak m$ is isomorphic to a matrix algebra over a division ring. Is this true if we don't assume $A$ is Artinian? - -REPLY [4 votes]: Let $m$ be the annihilator of a simple right $A$-module called $S$. -Then $S$ becomes a simple and faithful $A/m$-module, so that $A/m$ is a right primitive ring. These may or may not be Artinian, and the Artinian ones are precisely the simple Artinian rings (square matrix rings over division rings). -one isotype of simple module -Now additionally require $A$ to have one isotype of simple right module. -You're right that every maximal right ideal must contain one particular two-sided ideal, and it is the unique maximal ideal of $A$. Furthermore, it is the Jacobson radical of $A$. -Additionally, every maximal right ideal is essential in $A$, and the unique simple module is singular and nonprojective.<|endoftext|> -TITLE: Finding a space with $X \cong X+2$ and $X \not\cong X+1$. -QUESTION [6 upvotes]: Question. Is there a topological space $X$ with $X \cong X+2$ and $X \not\cong X+1$? -Here, $X+n$ denotes the disjoint union (i.e. coproduct) of $X$ with $n$ isolated points. -This question is similar to MO/218113 and MO/225896. I am pretty sure that it is easier, though. Perhaps it already works with a nasty topology on $\mathbb{N}$? - -REPLY [6 votes]: A reference to such a space and a brief description can be found in this answer; there is a more thorough description in this answer. Briefly, the space is obtained by taking two copies of $\beta\Bbb N$, the Čech-Stone compactification of $\Bbb N$, and identifying the remainders in the obvious way.<|endoftext|> -TITLE: Residually Finite Braid Group -QUESTION [5 upvotes]: In Braid Groups by Kassel and Turaev, it is mentioned that $\mathcal{B}_n$ is a residually finite group. The definition that they give of a residually finite group is a group $G$ such that for each $g\in G-\{e_G\}$ ($e_G$ the identity of $G$), there exists a homomorphism $f$ to a finite group $H$ such that $f(g)\neq e_H$. My question is: -How can I obtain the group $H$ for a given element $g\in \mathcal{B}_n$ and the homomorphism that fulfills this? -I hope you can help me. Nice Holidays. - -REPLY [2 votes]: Another possibility is to embed $\mathcal{B}_n$ into the automorphism group $\mathrm{Aut}(\mathbb{F}_n)$ of the free group $\mathbb{F}_n$. Now, Baumslag gave a very short proof of the fact that, for any finitely generated residually finite group $G$, $\mathrm{Aut}(G)$ is also residually finite. The conclusion follows from the residual finiteness of finitely generated free groups (see for example the beautiful proof of Stallings in Topology of finite graphs), since a subgroup of a residually finite group is clearly residually finite itself. -For more details, see Basic results on braid groups and the references therein.<|endoftext|> -TITLE: Center of Noetherian rings -QUESTION [5 upvotes]: Is it true that the center of a right Noetherian ring (with identity) is always a Noetherian ring? - -REPLY [3 votes]: No, this is not true, even under some rather restrictive conditions on the ring.
A number of counterexamples, showing that if $R$ is a prime Noetherian PI ring then the center of $R$ need not be Noetherian, can be found in Examples 5.1.16 through 5.1.18 of L. Rowen's book Polynomial identities in ring theory. Additional examples are sketched in Exercises 1 and 2 of §5.1 in the same book.<|endoftext|> -TITLE: Formal proof of Lyapunov stability -QUESTION [14 upvotes]: I was trying to solve the question of AeT. on the (local) Lyapunov stability of the origin (non-hyperbolic equilibrium) for the dynamical system -$$\dot{x}=-4y+x^2,\\\dot{y}=4x+y^2.\tag{1}$$ -The streamplot below indicates that this actually is true. - -Performing the change of variables to polar coordinates $x=r\cos\phi$, $y=r\sin\phi$ and after some trigonometric manipulations we arrive at -$$\dot{r}=r^2(\cos^3\phi+\sin^3\phi)\\ \dot{\phi}=4+r\cos \phi \sin\phi(\sin \phi -\cos \phi )$$ -From this set of equations I want to prove that if we start with sufficiently small $r$ then $r$ will remain bounded with very small variations over time. -My intuitive approach: For very small $r$ -$$\dot{\phi}\approx 4$$ which yields $$\phi(t)\approx 4t +\phi_0$$ -Substituting this into the $r$ dynamics we obtain -$$\dot{r}\approx r^2\left[\cos^3(4t+\phi_0)+\sin^3(4t+\phi_0)\right]$$ -Integrating over $[0,t]$ we obtain -$$\frac{1}{r_0}-\frac{1}{r(t)}\approx \int_0^t{\left[\cos^3(4s+\phi_0)+\sin^3(4s+\phi_0)\right]ds}$$ -The right hand side is a bounded function of time with absolute value bounded by $4\pi$ since -$$\int_{t_0}^{t_0+2\pi}{\left[\cos^3(4s+\phi_0)+\sin^3(4s+\phi_0)\right]ds}=0 \quad \forall t_0$$ -Thus for very small $r_0$ it holds true that $r(t)\approx r_0$. -I understand that the above analysis is at least incomplete (if not erroneous) and I would be glad if someone could provide a rigorous treatment of the problem. -I think that a "singular-perturbation-like" approach may be the solution (bounding $r$ by $\epsilon$) and considering the comparison system to prove the global boundedness result but I haven't progressed much up to now. - -REPLY [3 votes]: OP's streamplot suggests that the line $y=x-4$ is a flow trajectory. If we insert the line $y=x-4$ in OP's eq. (1) we easily confirm that this is indeed the case. -From now on we will assume that $y\neq x-4$. It is straightforward to check that the function -$$H(x,y)~:=~\frac{xy+16}{x-y-4}-4 \ln |x-y-4| $$ -is a first integral/an integral of motion: $\dot{H}=0$. -In fact, if we introduce the (non-canonical) Poisson bracket -$$B~:=~\{x,y\}~:=~ (x-y-4)^2 ,$$ -then OP's eq. (1) becomes Hamilton's equations -$$ \dot{x}~=~\{x,H\}, \qquad \dot{y}~=~\{y,H\}. $$ -The above result was found by following the playbook laid out in my Phys.SE answer here: $B$ is an integrating factor for the existence of the Hamiltonian $H$.<|endoftext|> -TITLE: Formulae of the Year $2016$ -QUESTION [11 upvotes]: Soon it's the year $2016$. Time to ponder how we can arrange the digits in 2016 to form a valid equation. Use any symbols you like (please explain the less obvious ones). Keep digits in the same order (should this be relaxed?). -Examples: -$$\lfloor e^2\rfloor + 0 - 1! = 6$$ -$$\left\lfloor\sqrt{\sqrt{201}}\right\rfloor = \lceil\sqrt{6}\rceil$$ -where $\lfloor x\rfloor$ denotes the floor function and $\lceil x\rceil$ the ceiling. -Don't overuse constants (i.e. avoid adding up several $\pi$ and $e$ just to get to some arbitrary value). -EDIT: clarification: use each of the digits $2$, $0$, $1$, $6$ in this order only once.
Combine digits giving $20$, $201$, $16$, etc. as you like (I won't argue whether in a fraction the numerator or denominator comes first :-). Please don't criticize answers that violate this rule, as this clarification came late. - -REPLY [10 votes]: Another easy one) $$\Large(\color{red}{ 2}!)^{\color{blue}{\Large {2}}}+(\Large \color{red}{0}!)^{\color{blue}{\Large{0}}}+(\color{red}{1}!)^{\color{blue}{\Large{1}}}=\color{red }{\Large 6}$$<|endoftext|> -TITLE: Minimal polynomial of integral elements -QUESTION [5 upvotes]: Let $R$ be an integrally closed domain and let $K$ be its fraction field. Let - $L\supseteq K$ be a field. If $\alpha\in L$ is integral over $R$ (i.e. - if it satisfies a monic polynomial in $R[x]$), does its minimal - polynomial over $K$ lie in $R[x]$? - -CONTEXT: -I'm trying to prove that the trace $t_{L/K}$ of integral elements lies in $R$ (provided that the extension $L/K$ is finite). I'm trying to use the fact that the trace of $\alpha$ is an integer multiple of a certain coefficient of its minimal polynomial, so this trace lies in $R$ if such coefficient does. -Since $\alpha$ satisfies an integral relation $p(\alpha)=0$ over $R$, its minimal polynomial $q$ over $K$ exists and divides $p$; does this imply that $q\in R[x]$? If so, then I'd be done. -It's clear that the result is true if $R$ is a UFD. In such a case, it's only a matter of looking at the unique factorization of the polynomial in $R[x]$ and applying Gauss' lemma. However, I don't see a straightforward proof nor a counterexample in the general case. - -REPLY [2 votes]: To prove it in the case where the ring is integrally closed, show the following: -Assume $A$ is commutative with $1$ and that in $A[X]$ we have the equality between monic polynomials $f= g\cdot h$, where $f = X^m + a_1 X^{m-1} + \cdots + a_m$, $g = X^p + b_1 X^{p-1} + \cdots + b_p$, $h = X^q + c_1 X^{q-1} + \cdots + c_q$. Then all the coefficients of $g$ and $h$ are integral over the subring $\mathbb{Z}[a_i] $ of $A$ (that is, the above equality $f= g\cdot h$ implies a series of integral dependences for all the $b_j$, $c_k$). -Let's state the following simple but important lemma: -Let $A$ be a ring and $P$ a monic polynomial in $A[X]$. There exists an extension of rings $A \subset S$ such that the polynomial $P$ splits completely in $S[X]$. The proof is similar to the analogous fact for fields. -Consider now $S$ an extension of $A$ in which $g$ splits completely. Write -$g(X) = (X-\beta_1)\cdot \ldots \cdot (X-\beta_p)$. Since the equality $f= g h$ also holds in $S[X]$ we have $f(\beta_j) = g(\beta_j) \cdot h(\beta_j) = 0$. Therefore, all the $\beta_j$ are integral over $\mathbb{Z}[a_i] $ and therefore, so are the symmetric functions in $\beta_j$'s and so the coefficients of $g$. Since the equality of integral dependence holds in $S$, it will also hold in $A$, since $A\subset S$. -$\bf{Added:}$ The $A$ in this proof is a general commutative ring with $1$. For the purposes of the proof of the OP statement, one should take $A = K$, $f$ a monic polynomial in $R[X] \subset K[X]$ and $f = g h$ in $K[X]$.<|endoftext|> -TITLE: Elliptic curve over algebraically closed field of characteristic $0$ has a non-torsion point -QUESTION [14 upvotes]: Let $E/k$ be an elliptic curve over an algebraically closed field $k$ of characteristic $0$. Can one prove that the abelian group $E(k)$ is non-torsion? Better yet, can one prove that $E(k) \otimes_\mathbb Z \mathbb Q$ is an infinite-dimensional $\mathbb Q$-vector space?
-It is very tempting here to use the Lefschetz principle to reduce the situation to $k= \mathbb C$, where both statements are obvious. However I am not sure that one can actually apply the Lefschetz principle, as it would require formulating the statements in the first-order theory of fields, and I am unfortunately not much of a logician.
-At least one can say that if the field $k$ is uncountable then $E(k)$ is uncountable whereas $E(k)^{\text{tors}}$ is countable (this much is true over any field), so there is always a non-torsion point.
-However when the field $k$ is countable, there seems to me to be no "trivial" reason why $E(k)$ should have an element of infinite order. The fact that $k$ has characteristic $0$ has to intervene somehow, as the statement is false in finite characteristic...
-
-REPLY [7 votes]: After some searching, I found the following article:
-
-G. Frey and M. Jarden, Approximation theory and the rank of abelian varieties over large algebraic fields. Proc. London Math. Soc. 28 (1974), 112-128.
-
-A link is provided on the second author's website; see here for the actual article.
-In it, they prove the following:
-Theorem 10.1. If $A$ is an abelian variety of positive dimension defined over an algebraically closed field $K$ which is not the algebraic closure of a finite field, then the rank of $A(K)$ is equal to the cardinality of $K$.
-The proof is a bit cumbersome, but in the second remark following the theorem, they provide an alternative and more direct method that does not depend on the main results of their paper. This alternative method seems useful for the weaker question you asked.
-In the introduction, they also say:
-
-Another proof was indicated by J.-P. Serre in a letter.
-
-Since they do not include a reference for the letter, it might have been private communication. (I didn't search in Serre's Œuvres for the letter.) Or maybe Serre's proof is the alternative method they give.
-Remark. Note that the theorem is trivial for $K$ uncountable:
-
-The cardinality of $A(K)$ is at most that of $K$, with equality if $K$ is algebraically closed;
-The torsion is countable, since the $n$-torsion has size $\leq n^{2g}$ (with equality if $\operatorname{char} K \nmid n$);
-Thus, $A(K) \otimes_{\mathbb Z} \mathbb Q$ has the same cardinality as $K$.
-Thus, if $K$ is uncountable, then $A(K) \otimes \mathbb Q$ cannot be finite-dimensional.
-Now use that for $I$ infinite, the cardinality of $I$ equals the cardinality of $\mathbb Q^{(I)}$ (but of course not that of $\mathbb Q^I$; cf. Cantor's diagonal argument).
-
-Similarly, this argument proves for any infinite field (not necessarily algebraically closed, nor uncountable) that the dimension of $A(K) \otimes \mathbb Q$ is at most the cardinality of $K$. Thus, the only content of the theorem is exactly the question you asked: if $K$ is algebraically closed and not the algebraic closure of a finite field, does $A(K) \otimes \mathbb Q$ have infinite dimension?
- -REPLY [26 votes]: Solving an exact differential equation can be interpreted as finding the integral curves of a one-dimensional distribution defined by an exact form. Let me describe the relation: -Let $\omega$ be a differential one-form defined on an open subset $\Omega \subseteq \mathbb{R}^2$ and assume that $\omega$ doesn't vanish at any point of $\Omega$. Then $\ker(\omega)$ defines a one-dimensional distribution on $\Omega$. For each $p \in \Omega$, the subspace $\ker(\omega_p)$ is a one-dimensional subspace of $T_p(\mathbb{R}^2) \cong \mathbb{R}^2$ so we can think of $\omega$ as defining a field of lines on $\Omega$. A curve $\alpha \colon I \rightarrow \Omega$ is called an integral curve of $\omega$ if $\omega_{\alpha(t)}(\dot{\alpha}(t)) = 0$ for all $t \in I$. -If we write $\omega$ explicitly as $\omega = g(x,y)dx + h(x,y)dy$ then $\alpha(t) = (x(t),y(t))$ is an integral curve of $\omega$ if -$$ g(x(t),y(t)) \dot{x}(t) + h(x(t),y(t)) \dot{y}(t) \equiv 0. $$ -If $\omega$ is exact, then $\omega = df = \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy$ for some $f \colon \Omega \rightarrow \mathbb{R}$. The function $f$ is called a potential for $\omega$ and can be used to find integral curves of $\omega$ as follows. Let $(x_0, y_0) \in \Omega$ and set $C = f(x_0,y_0)$. Since $df = \omega$ has constant rank one, the level set $f^{-1}(C)$ is a one dimensional submanifold of $\Omega$. If $\alpha(t)$ is any parametrization of (a portion of) $f^{-1}(C)$ near $(x_0,y_0)$ then $f(\alpha(t)) = C$ and so, by differentiating, we get -$$ df_{\alpha(t)}(\dot{\alpha}(t)) = \omega_{\alpha(t)}(\dot{\alpha}(t)) \equiv 0. $$ -This means that to find integral curves of $\omega$, we can find the level sets of $f$ and they will, implicitly, give us integral curves. -Now, let us assume that we are given a first order differential equation of the form $h(x,y(x))y'(x) = g(x,y(x))$. By performing formal manipulation, we have -$$ h(x,y) \frac{dy}{dx} = g(x,y) \implies g(x,y)dx - h(x,y) dy = 0. $$ -This can be interpreted rigorously as saying that if we define a one form $\omega$ by -$$\omega = g(x,y)dx - h(x,y) dy$$ -then the graph $(x,y(x))$ of a solution of the first order differential equation will be an integral curve of $\omega$. Conversely, any integral curve of $\omega$ which can be expressed as $(x,y(x))$ will be a solution of the first order differential equation. Thus, instead of solving the original equation, we can instead find the integral curves of $\omega$. If $\omega$ is exact, we can find a potential $f$ for $\omega$ (determined uniquely up to a constant on connected domains) and then the level sets of $f$ will given an implicit description for the solutions of the original equation. -If $\omega$ is not exact, it might still be closed and so locally exact. Then, by finding local potentials we can find local solutions of our equation. If $\omega$ is not closed, we can try and find a non-zero function $\mu$ such that $\mu \omega$ is closed. Since the distribution defined by $\omega$ and $\mu \omega$ is the same, integral curves of $\mu \omega$ will also be integral curves of $\omega$ and they will allow us to find solutions. Such $\mu$ is called an integrating factor and can always be found locally. 
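-For a concrete instance of the exact case, consider a small worked example: take $\omega = 2xy\,dx + (x^2+2y)\,dy$ on $\Omega = \mathbb{R}^2 \setminus \{(0,0)\}$ (the origin is removed because $\omega$ vanishes there). One checks that $f(x,y) = x^2y + y^2$ satisfies $df = \omega$ (integrate $2xy$ with respect to $x$, then adjust by a function of $y$), so $\omega$ is exact. The solutions of the corresponding equation $2xy + (x^2+2y)\,y' = 0$ are therefore given implicitly by the level sets $x^2y + y^2 = C$.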
Note that almost all the methods one usually learns for solving first-order differential equations are particular cases of the discussion above (this works for linear equations, linear non-homogeneous equations, separable equations, exact equations, etc.).<|endoftext|>
-TITLE: How do I see that $\mathbb{RP}^4$ and $\mathbb{RP}^6$ do not admit fields of tangent $2$-planes?
-QUESTION [6 upvotes]: A manifold $M$ is said to admit a field of tangent $k$-planes if its tangent bundle admits a subbundle of dimension $k$. How do I see that $\mathbb{RP}^4$ and $\mathbb{RP}^6$ do not admit fields of tangent $2$-planes?
-
-REPLY [7 votes]: The tangent bundle $T\mathbb{RP}^{2n}$ has no nontrivial proper subbundles at all. If it did, pulling back along the projection map $p: S^{2n} \to \Bbb{RP}^{2n}$ would get you a nontrivial subbundle of $TS^{2n}$, which would get you a splitting $\xi \oplus \eta = TS^{2n}$. Now note that because $H^1(S^{2n};\Bbb Z/2) = 0$, we have $w_1(\xi) = 0$, so $\xi$ is orientable (and the same for $\eta$). Now looking at Euler classes, $$0=e(\xi)e(\eta) = e(\xi \oplus \eta) = e(TS^{2n})=2.$$ The first equality is because the cohomology of $S^{2n}$ is zero in degrees less than $2n$.<|endoftext|>
-TITLE: does there exist a prime such that...
-QUESTION [6 upvotes]: Let $n>1$ be a non-square positive integer (you can have it prime, if you wish), does there exist a prime $p>2$ such that $n$ generates the multiplicative group of $\mathbb F_p$? It sounds true, but I could not find an immediate proof for that... maybe using some reciprocity law? Not sure.
-
-REPLY [4 votes]: The general question in a strong form is the content of Artin's Conjecture:
-
-Every integer which is neither a perfect square nor equal to $−1$ is a primitive root modulo infinitely many primes.
-
-This remains unproved.
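-(For small cases one can test the conjecture numerically; a minimal sketch, assuming Python with sympy, whose n_order computes the multiplicative order mod $p$, here with $n=2$:)
-
-from sympy import isprime
-from sympy.ntheory import n_order
-
-n = 2
-print([p for p in range(3, 200) if isprime(p) and n_order(n, p) == p - 1])
-# [3, 5, 11, 13, 19, 29, 37, 53, 59, 61, ...] -- primes below 200 for which 2 is a primitive root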
<|endoftext|>
-TITLE: find the least natural number n such that if the set $\{1,2,...,n\}$ is arbitrarily divided into two nonintersecting subsets
-QUESTION [7 upvotes]: Find the least natural number $n$ such that if the set $\{1,2,\dots,n\}$ is arbitrarily divided into two non-intersecting subsets then one of the subsets contains three distinct numbers such that the product of two of them equals the third.
-
-Let's say I have two sets, $A$ and $B$.
-So $1,2,3$ have to be in one set, let's put them in $A$. This forces $6$ to be in $B$. If we put $4$ in $A$, then $8,12$ must be in $B$. If we put $5$ in $A$ then $10,15$ must be in $B$. If we put $7$ in $A$ then $14,21$ must be in $B$.
-So right now I have:
-$$ A=\{1,2,3,4,5,7\} $$
-$$ B=\{6,8,12,10,14,15,21\} $$
-I don't see a particular pattern so I am assuming there is a different approach to this problem because this could go on forever. Any ideas?
-EDIT: I misunderstood the question. Here are my new sets:
-$$ A=\{1,2,3,6,9,12,24,36,72,18\} $$
-$$ B=\{4,5,7,8,32,10,40,56,28,35,20\} $$
-Here is my set so far
-
-REPLY [4 votes]: The answer is $n=96$.
-To prove this, take any $n\ge96$ and assume we partition the set $\{1,2,\cdots,n\}$ into the disjoint union of $A$ and $B$. Let $P(A)$ denote the set of products of distinct (and different from $1$) elements of $A$, and similarly let $P(B)$ denote the set of products of distinct (and different from $1$) elements of $B$.
-Assume without loss of generality that $48\in B$ and consider cases as follows. Four cases are formed based on whether $2$ is in $A$ or in $B$, and whether $3$ is in $A$ or in $B$. In each case we assume that $A\cap P(A)=\emptyset$, $B\cap P(B)=\emptyset$, and derive a contradiction.
-Case when $\{2,3\}\subseteq A$. Then $6\in B$ (since $6=2\cdot3$ and $\{2,3\}\subseteq A$). Then $8\in A$ (since $8=48/6$ and $\{6,48\}\subseteq B$), so $4=8/2\in B$.
-Then $24=3\cdot8=4\cdot6\in P(A)\cap P(B)$, which is enough to derive a contradiction. (Indeed, $24$ must be in either $A$ or $B$. In the former case the contradiction is that $24=3\cdot8\in A\cap P(A)$, in the latter case $24=4\cdot6\in B\cap P(B)$.)
-It is perhaps easier to visualize the above argument as in the following table, where numbers further to the right are added to $A$ or $B$ as a consequence of numbers (at the left) that were added earlier.
-$$
-\begin{array}{l|c|c|c|c|r}
-\hline
-A & 2,\ {\color{red}3} & & {\color{red}8} & & {\color{red}{24}} &\\
-\hline B & 48 & {\color{blue}6} & & {\color{blue}4} & {\color{blue}{24}} &\\
-\hline
-\end{array}
-$$
-Case when $2\in A$, $3\in B$. Then $16=48/3\in A$, $8=16/2\in B$, $\{6=48/8,\ 24=3\cdot8\}\subset A$, $\{4=24/6,\ 12=24/2,\ 48\}\subset B$, and $4\cdot12=48$. This case is visualized as follows:
-$$
-\begin{array}{l|c|c|c|c|r}
-\hline
-A & 2 & 16 & & 6,\ 24 & &\\
-\hline B & 3,\ {\color{blue}{48}} & & 8 & & {\color{blue}{4,\ 12}} &\\
-\hline
-\end{array}
-$$
-Case when $3\in A$, $2\in B$. Then $24=48/2\in A$, $8=24/3\in B$, $\{4=8/2,\ 6=48/8\}\subset A$, hence $4\cdot6=24$ with $\{4,6,24\}\subset A$.
-This case is visualized as follows:
-$$
-\begin{array}{l|c|c|c|c|r}
-\hline
-A & 3 & {\color{red}{24}} & & {\color{red}{4,\ 6}} &\\
-\hline B & 2,\ 48 & & 8 & & \\
-\hline
-\end{array}
-$$
-Finally, case $\{2,3\}\subseteq B$. Then $\{6=2\cdot3,\ 96=2\cdot48,\ 16=48/3\}\subset A$, a contradiction as $6\cdot16=96$.
-This last case is visualized as follows:
-$$
-\begin{array}{l|c|c|c|c|r}
-\hline
-A & & {\color{red}{6,\ 96,\ 16}} & \\
-\hline B & 2,\ 3,\ 48 & & \\
-\hline
-\end{array}
-$$
-It remains to show that we can partition the set $\{1,2,\cdots,95\}$ into a disjoint union $A\cup B$ such that no two distinct (and different from $1$) numbers in $A$ have a product in $A$, and no two distinct (and different from $1$) numbers in $B$ have a product in $B$.
-That is, $A\cap P(A)=\emptyset$ and $B\cap P(B)=\emptyset$.
-Using considerations as above, we start with $\{6,8,12,16,24,36,18\}\subset A$ and $\{2,3,4,48,72\}\subset B$, and add the remaining numbers up to $95$ one after the other into either $A$ or $B$, trying to avoid a conflict.
-This was done by hand (and after that checked with a computer; a sketch of such a check is given below).
-The following partition works:
-$A=\{6,8,10,12,14,15,16,18,20,21,22,24,26,27,28,30,32,33,34,35,36,38,39,40,42,44,45,46,50,51,52,55,57,58,62,63,65,68,69,74,75,76,77,78,82,85,86,87,91,92,93,94,95\}$
-and
-$B=\{1,2,3,4,5,7,9,11,13,17,19,23,25,29,31,37,41,43,47,48,49,53,54,56,59,60,61,64,66,67,70,71,72,73,79,80,81,83,84,88,89,90\}.$
-To make the verification easier we also list $P(A)\cap\{1,2,\cdots,96\}$ and
-$P(B)\cap\{1,2,\cdots,96\}$.
-$P(A)\cap\{1,2,\cdots,96\}=\{48,60,72,80,84,90,96\}.$
-$P(B)\cap\{1,2,\cdots,96\}=\{6,8,10,12,14,15,18,20,21,22,26,27,28,33,34,35,36,38,39,44,45,46,50,51,52,55,57,58,62,63,65,68,69,74,75,76,77,82,85,86,87,91,92,93,94,95,96\}.$
-Somewhat arbitrarily, $1$ and all primes ended up in $B$. This partition is not unique, as $1$, $11$, and all primes $p\ge17$ (or any subset of these) could be moved from $B$ to $A$ without harm. (But $13$ could not be moved from $B$ to $A$, as this would create the conflict $6\cdot13=78$.) Many other variations are likely possible too.
-One last edit: below, after a short verification script, $P(A)$, $A$, $B$, $P(B)$ are put all together for easier visual inspection.
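-The computer check mentioned above takes only a few lines; a minimal sketch (assuming Python, with $A$ as listed; its complement in $\{1,\dots,95\}$ is exactly the listed $B$):
-
-A = {6,8,10,12,14,15,16,18,20,21,22,24,26,27,28,30,32,33,34,35,36,
-     38,39,40,42,44,45,46,50,51,52,55,57,58,62,63,65,68,69,74,75,
-     76,77,78,82,85,86,87,91,92,93,94,95}
-B = set(range(1, 96)) - A
-
-def no_conflict(S):
-    # no two distinct elements > 1 of S may have their product in S
-    return all(a*b not in S for a in S for b in S if 1 < a < b)
-
-print(no_conflict(A) and no_conflict(B))  # True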
-$$ -\begin{array}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|r} -\hline -P(A) & & & & & & & & & & & & & & & & & \\ -\hline -A & & & & & &6 & & 8& &10 & &12 & &14 &15 & 16& \\ -\hline -B &1 &2 &3 &4 &5 & &7 & &9 & &11 & &13 & & & & \\ -\hline -P(B) & & & & & &6 & &8 & &10 & &12 & &14 &15 & & \\ -\hline -\end{array} -$$ -$$ -\begin{array}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|r} -\hline -P(A) & & & & & & & & & & & & & & & & & \\ -\hline -A & &18 & &20 &21 &22 & &24 & &26 &27 &28 & &30 & &32 & \\ -\hline -B &17 & &19 & & & &23 & &25 & & & &29 & &31 & & \\ -\hline -P(B) & &18 & &20 &21 &22 & & & &26 &27 &28 & & & & & \\ -\hline -\end{array} -$$ -$$ -\begin{array}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|r} -\hline -P(A) & & & & & & & & & & & & & & & &48 & \\ -\hline -A &33 &34 &35 &36 & &38 &39 &40 & &42 & &44 &45 &46 & & & \\ -\hline -B & & & & &37 & & & &41 & &43 & & & &47 &48 & \\ -\hline -P(B) &33 &34 &35 &36 & &38 &39 & & & & &44 &45 &46 & & & \\ -\hline -\end{array} -$$ -$$ -\begin{array}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|r} -\hline -P(A) & & & & & & & & & & & &60 & & & & & \\ -\hline -A & &50 &51 &52 & & &55 & &57 &58 & & & &62 &63 & & \\ -\hline -B &49 & & & &53 &54 & &56 & & &59 &60 &61 & & &64 & \\ -\hline -P(B) & &50 &51 &52 & & &55 & &57 &58 & & & &62 &63 & & \\ -\hline -\end{array} -$$ -$$ -\begin{array}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|r} -\hline -P(A) & & & & & & & &72 & & & & & & & &80 & \\ -\hline -A &65 & & &68 &69 & & & & &74 &75 &76 &77 &78 & & & \\ -\hline -B & &66 &67 & & &70 &71 &72 &73 & & & & & &79 &80 & \\ -\hline -P(B) &65 & & &68 &69 & & & & &74 &75 &76 &77 & & & & \\ -\hline -\end{array} -$$ -$$ -\begin{array}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|r} -\hline -P(A) & & & &84 & & & & & &90 & & & & & &96 & \\ -\hline -A & &82 & & &85 &86 &87 & & & &91 &92 &93 &94 &95 & & \\ -\hline -B &81 & &83 &84 & & & &88 &89 &90 & & & & & & & \\ -\hline -P(B) & &82 & & &85 &86 &87 & & & &91 &92 &93 &94 &95 &96 & \\ -\hline -\end{array} -$$<|endoftext|> -TITLE: Continuity of outer measure induced by measure from below -QUESTION [6 upvotes]: This question comes from Bass, Ex 4.15. Given a finite measure space $(X, \mathcal{A}, \mu)$, we can define an outer measure $\mu^*$ as -$$ - \mu^*(A) := \inf\{ \mu(B) : A \subset B , B \in \mathcal{A} \} \:. -$$ -(One can check that this is an outer measure and that $\mu^*$ agrees with $\mu$ on $\mathcal{A}$). -I want to show now that if $\{A_n\}_{n \geq 1}$ is a sequence of increasing sets (not necessarily in $\mathcal{A}$) such that $A_n \nearrow A$, then $\mu^*(A_n) \nearrow \mu^*(A)$. -I have this so far. By monotonicity of outer measure, $\mu^*(A_n) \leq \mu^*(A)$ for all $n$, hence $\lim_{n \rightarrow \infty} \mu^*(A_n) \leq \mu^*(A)$. It remains to show that -$\mu^*(A) \leq \lim_{n \rightarrow \infty} \mu^*(A_n)$. -My rough idea is as follows. Fix any $\epsilon > 0$. We can pick a sequence of sets $\{G_i\}_{i \geq 1}$ in $\mathcal{A}$ such that $\mu^*(A_i) \geq \mu(G_i) - \epsilon/2^{i}$. If the $\{G_i\}$ were an increasing set then we'd be done by using the continuity of measure from below. However, they are not. But, we can define $H_n := \bigcup_{i=1}^{n} G_i$ which is an increasing sequence by definition. Since $A \subset \bigcup_{n \geq 1} H_n$, we have that -$$ -\mu^*(A) \leq \mu( \bigcup_{n \geq 1} H_n ) = \lim_{n \rightarrow \infty} \mu(H_n) \:. -$$ -Now I am not quite sure how to finish off. I want to say something to the effect of $\mu(H_n) \leq \mu^*(A_n) + \epsilon$. However, since the $A_n$'s are not measurable, I am having trouble making such a comparison. 
-
-REPLY [11 votes]: As you wrote, you have already proved $\lim_{n \rightarrow \infty} \mu^*(A_n) \leq \mu^*(A)$. It remains to show that
-$\mu^*(A) \leq \lim_{n \rightarrow \infty} \mu^*(A_n)$.
-Your idea to complete the proof is essentially correct; all it needs is a small adjustment.
-Let $\epsilon > 0$ be given. We can pick a sequence of sets $\{G_i\}_{i \geq 1}$ in $\mathcal{A}$ such that $A_i\subseteq G_i$ and $\mu^*(A_i) \leq \mu(G_i) \leq \mu^*(A_i) + \epsilon/2^{i}$.
-Define $H_n=\bigcap_{i=n}^\infty G_i$. Then $\{H_n\}_{n\geq 1}$ is an increasing sequence of sets in $\mathcal{A}$.
-For any $n \geq 1$, if $i \geq n$ then $A_n\subseteq A_i \subseteq G_i$. So, for any $n \geq 1$, $A_n \subseteq \bigcap_{i=n}^\infty G_i=H_n$. So we have $A_n \subseteq H_n\subseteq G_n$ and we get
-$$ \mu^*(A_n) \leq \mu(H_n) \leq \mu(G_n) \leq \mu^*(A_n) + \epsilon/2^{n}$$
-So we have $\lim_{n \to +\infty} \mu^*(A_n) = \lim_{n \to +\infty} \mu(H_n)$.
-Since $A_n \nearrow A$, it is easy to see that $A\subseteq \bigcup_{n\geq 1} H_n$. But $H_n \nearrow \bigcup_{n\geq 1} H_n$, so using the continuity of measure from below, we have
-$$\mu^*(A)\leq \mu\left(\bigcup_{n\geq 1} H_n\right)=\lim_{n \to +\infty} \mu(H_n)= \lim_{n \to +\infty} \mu^*(A_n)$$<|endoftext|>
-TITLE: What is the value of $142,857 \times 7^2$?
-QUESTION [12 upvotes]: What is the value of $142,857 \times 7^2$?
-
-Obviously you could solve this with a calculator and be done. But is there a more clever way to calculate this?
-
-REPLY [13 votes]: Do you recognize that $1/7=0.\overline{142857}?$ If so, you will recognize that $142,857 \cdot 7 = 999,999,$ so $142,857 \cdot 7^2=(1,000,000-1)\cdot 7=6,999,993$
-
-REPLY [5 votes]: Multiplying by $7^2=49$ is the same as multiplying by $50$ then subtracting one copy.
-So $$142,857*7^2=142,857*50-142857.$$
-Now, multiplying by 50 is the same as multiplying by 100 then dividing by 2,
-so we have $$(142,857*100)/2-142857.$$
-Multiplying by 100 is just appending two zeros:
-$$14285700/2-142857$$
-Dividing by 2 is easy to do by hand: $$7142850-142857$$
-And finally, subtracting two numbers is easy to do by hand: $$6999993$$
-Hopefully that's actually right, because I'm not checking :)
-
-REPLY [3 votes]: If you recognize that those are the decimal digits of $1/7 = .142857142857142857...$, then you realize that $1000000/7$ must be 142857+0.142857, so 142857 must be $(1,000,000 - 1)/7$, and you're on your way.<|endoftext|>
-TITLE: Are the addition and multiplication of real numbers, as we know them, unique?
-QUESTION [15 upvotes]: After recently concluding my Real Analysis course this semester, I have the following question bugging me:
-Is the canonical operation of addition on real numbers unique?
-Otherwise: Can we define another operation on the reals in such a way that it has the same properties as the usual addition and behaves exactly like it?
-Or even: How can I reliably know that there are no two different ways of summing real numbers?
-Naturally these dense questions led me to further investigations, like:
-Are the following properties sufficient to fully characterize the canonical addition on the reals?
-
-Closure
-Associativity
-Commutativity
-Identity being 0
-Unique inverse
-Multiplication distributes over it
-
-If so, property 6 raises the question: Is the canonical multiplication on the reals unique?
-But then, if they are not unique, are different additions related differently to different multiplications?
-And so on...
-The motivation comes from the construction of real numbers.
-From Peano's Axioms and the set-theoretic definition of the natural numbers to Dedekind's and Cauchy's constructions of the real numbers, we haven't talked about uniqueness of operations on these classes, nor could I find relevant discussion of this topic on the internet or in the ubiquitous Real Analysis reference books by authors such as:
-
-Walter Rudin
-Robert G. Bartle
-Stephen Abbott
-William F. Trench
-
-Not talking about the uniqueness of the operations, as we know them, in a first Real Analysis course seems rather common, and the matter seems not to be elementary.
-Thus, having introduced the subject and its context: would someone care to expand on it, possibly revealing the formal name of this field of study?
-
-REPLY [10 votes]: The short answer is no: the operation defined by $a+_3 b=(a^3+b^3)^{1/3}$ also has all the properties 1 through 6 over the reals. This distributes over canonical multiplication: for any $a,b,z\in \mathbb{R}$,
-$z(a+_3b)=z(a^3+b^3)^{1/3}=(z^3)^{1/3}(a^3+b^3)^{1/3}=((za)^3+(zb)^3)^{1/3}=(za)+_3(zb).$
-It might, however, be the case (and this is entirely speculation, not necessarily true) that only operations of the form $a+_f b = f^{-1}(f(a)+f(b))$ (where $f:\mathbb{R}\to \mathbb{R}$ is bijective and fixes the origin; or stated differently, $f$ is a permutation of the real numbers and $f(0)=0$) have all the properties 1 through 6. That would mean that addition is unique up to automorphism on the real numbers.
-I would agree that this is not a trivial question.<|endoftext|>
-TITLE: Rubik's Revenge Cube in GAP
-QUESTION [8 upvotes]: I'm trying to create the Rubik's Revenge (4x4x4 cube) group in GAP.
-Take the following net of the 4x4x4 cube with each sticker labelled with a number. The front, left, upper, right, down, and back faces are labelled with their respective initials.
-                  U
-  [64][65][66][67]
-  [68][69][70][71]
-  [72][73][74][75]
-  [76][77][78][79] R
-L F
-[48][49][50][51] [ 0][ 1][ 2][ 3] [16][17][18][19]
-[52][53][54][55] [ 4][ 5][ 6][ 7] [20][21][22][23]
-[56][57][58][59] [ 8][ 9][10][11] [24][25][26][27]
-[60][61][62][63] [12][13][14][15] [28][29][30][31]
-
- D[80][81][82][83]
-  [84][85][86][87]
-  [88][89][90][91]
-  [92][93][94][95]
-
- B[32][33][34][35]
-  [36][37][38][39]
-  [40][41][42][43]
-  [44][45][46][47]
-
-We will consider 12 basic moves. These are the standard definitions of moves for the 4x4x4 cube. Note that each is a quarter turn.
-
-F: a clockwise turn of the front-most face.
-f: a clockwise turn of the front-inner segment.
-B: a counterclockwise turn of the back-most face.
-b: a counterclockwise turn of the back-inner segment.
-U: a clockwise turn of the top-most face (viewed from above).
-u: a clockwise turn of the inner-top segment (viewed from above).
-D: a counterclockwise turn of the bottom-most face (viewed from above).
-d: a counterclockwise turn of the inner-bottom segment (viewed from above).
-R: a clockwise turn of the right-most face (viewed from the right).
-r: a clockwise turn of the inner-right segment (viewed from the right).
-L: a counterclockwise turn of the left-most face (viewed from the right).
-l: a counterclockwise turn of the inner-left segment (viewed from the right).
-
-Each of these moves permutes the numbers in the above net. I write out these permutations in GAP. Note I have substituted 96 for 0 because GAP does not permit 0 in a cycle.
-F:=(96,3,15,12)(1,7,14,8)(2,11,13,4)(5,6,10,9)(16,83,63,76)(20,82,59,77)(24,81,55,78)(28,80,51,79); - -f:=(72,17,87,62)(73,21,86,58)(74,25,85,54)(75,29,84,50); - -B:=(32,35,47,44)(33,39,46,40)(34,43,45,36)(37,38,42,41)(64,60,95,19)(65,56,94,23)(66,52,93,27)(67,48,92,31); - -b:=(68,61,91,18)(69,57,90,22)(70,53,89,26)(71,49,88,30); - -U:=(64,67,79,76)(65,71,78,72)(66,75,77,68)(69,70,74,73)(96,48,47,16)(1,49,46,17)(2,50,45,18)(3,51,44,19); - -u:=(4,52,43,20)(5,53,42,21)(6,54,41,22)(7,55,40,23); - -D:=(80,83,95,92)(81,87,94,88)(82,91,93,84)(85,86,90,89)(12,28,35,60)(13,29,34,61)(14,30,33,62)(15,31,32,63); - -d:=(8,24,39,56)(9,25,38,57)(10,26,37,58)(11,27,36,59); - -L:=(48,51,63,60)(49,55,62,56)(50,59,61,52)(53,54,58,57)(96,80,32,64)(4,84,36,68)(8,88,40,72)(12,92,44,76); - -l:=(1,81,33,65)(5,85,37,69)(9,89,41,73)(13,93,45,77); - -R:=(16,19,31,28)(17,23,30,24)(18,27,29,20)(21,22,26,25)(3,67,35,83)(7,71,39,87)(11,75,43,91)(15,79,47,95); - -r:=(2,66,34,82)(6,70,38,86)(10,74,42,90)(14,78,46,94); - -I then look at the permutation group generated by these moves. -G:=Group(F,f,B,b,U,u,D,d,L,l,R,r); - -Now to determine if this group G is the correct group, I look at its size divided by 24 (to account for the rotational symmetries). -gap> Size(G)/24; -707195371192426622240452051915172831683411968000000000 - -I compare this number to the size provided on the Wikipedia article (linked above): -7401196841564901869874093974498574336000000000 -My Problem: -What is causing this discrepancy of size? -The first thing I thought was that I had written the permutations wrong, but I rewrote them and got the same thing, so I don't think this is the case. This leads me to believe that I have some theory wrong. -Note that the ratio of my calculated to wikipedia's given size is 95551488 which is $2^{17} * 3^6$. -Also, the formula given on Wikipedia is $$\frac{8!\cdot 3^7 \cdot 24!^2}{4!^6\cdot 24}$$ - -REPLY [6 votes]: It depends on the definition of the puzzle: If the faces are colored uniformly, for example a swap of 5 and 6 is not seen in the puzzle. You can see this discrepancy in the group. We construct the subgroup that fixes the corners in place, but on edges and middle pieces allows a permutation that is not seen in the puzzle: -cor:=[3,12,15,32,35,44,47]; -s:=Stabilizer(G,cor,OnTuples); -edge:=[ [ 1, 2 ], [ 4, 8 ], [ 7, 11 ], [ 13, 14 ], [ 17, 18 ], [ 20, 24 ], -[ 23, 27 ], [ 29, 30 ], [ 49, 50 ], [ 52, 56 ], [ 55, 59 ], [ 61, 62 ], -[ 81, 82 ], [ 84, 88 ], [ 87, 91 ], [ 93, 94 ], [ 33, 34 ], [ 36, 40 ], -[ 39, 43 ], [ 45, 46 ], [ 65, 66 ], [ 68, 72 ], [ 71, 75 ], [ 77, 78 ] ]; -s:=Stabilizer(s,edge,OnTuplesSets); -cen:=[ [ 5, 6, 9, 10 ], [ 21, 22, 25, 26 ], [ 53, 54, 57, 58 ], -[ 85, 86, 89, 90 ], [ 37, 38, 41, 42 ], [ 69, 70, 73, 74 ] ]; -s:=Stabilizer(s,cen,OnTuplesSets); - -gives you a subgroup of order $2^{17}3^6$ that does no recognizable action on the puzzle. This is the discrepancy you observe. You could (modulo rotations in space) consider cosets of this subgroup to describe the puzzle. -In the case of the $3\times 3\times 3$ cube this phenomenon is often hidden by fixing the middle pieces in space which deals with space rotations as well as rotations of the middle pieces. However there are similarly operations that would rotate middle pieces (and which turn up in reality if you take a Rubik's cube with pictures on the faces). 
If you also account for rotations of the middle pieces you get a larger group (and a more difficult puzzle).<|endoftext|>
-TITLE: In $S_5$, we have $aba^{-1}=b^2$, $b=(12345)$, find $a$.
-QUESTION [5 upvotes]: In $S_5$, we have $aba^{-1}=b^2, b=(12345)$, find $a$.
-I have tried different ways to substitute/rearrange, but none of them worked.
-
-REPLY [4 votes]: Use the fact that for any cycle $(abc...)$ and permutation $\sigma$, $\sigma(abc...)\sigma^{-1}=(\sigma(a)\sigma(b)...)$. In your case, $b^2=(13524)$, so $a(1)=1$, $a(2)=3$, $a(3)=5$, $a(4)=2$, $a(5)=4$, and putting this all together in cycle notation: $a=(2354)$.<|endoftext|>
-TITLE: How to show that $y^T x - \frac{1}{2}x^T Q x$ is bounded above?
-QUESTION [5 upvotes]: Strictly convex quadratic function. Consider $f(x)=\frac{1}{2}x^TQx$, with $Q\in S_{++}^n$. The function $y^T x - \frac{1}{2}x^T Q x$ is bounded above as a function of $x$ for all $y$. It attains its maximum at $x=Q^{-1}y$.
-This is an example from my book, but I don't understand it well. I don't see how $y^T x - \frac{1}{2}x^T Q x$ is bounded and how to find its maximum.
-
-REPLY [3 votes]: We can complete the square. We want to write $F(x) = \frac12 x^T Q x - y^T x$ in the form
-\begin{align}
-\frac12 (x - x_0)^T Q (x - x_0) + c &= \frac12 x^T Q x - x^T Q x_0 + \frac12 x_0^T Q x_0 + c.
-\end{align}
-To make things match up, we should pick $x_0$ such that $Q x_0 = y \iff x_0 = Q^{-1} y$, and we should pick
-\begin{align}
-c &= - \frac12 x_0^T Q x_0 \\
-&= -\frac12 y^T Q^{-1} y.
-\end{align}
-We have discovered that
-\begin{align}
-\frac12 x^T Q x - y^T x &= \frac12\underbrace{(x - Q^{-1} y)^T Q (x - Q^{-1}y)}_{\text{nonnegative}} -\frac12 y^T Q^{-1} y.
-\end{align}
-This shows that $F$ is bounded below, and that it attains a minimum at
-$x = Q^{-1}y$.
-(Note that minimizing $F$ is equivalent to solving $Qx = y$. That's a very useful fact.)<|endoftext|>
-TITLE: Determine isomorphic graphs in a set
-QUESTION [5 upvotes]: I am having trouble understanding the following question. Which of the following graphs are isomorphic?
-
-The answer provided was that the graphs $i$ and $iii$ are isomorphic and there are no other isomorphisms.
-This is my answer:
-Graphs $i,ii,iii$ are all isomorphic as they all have an equal number of vertices and for each vertex there is a corresponding vertex (unique) in another graph with the same degree.
-Graph $i$
-$|V| = 6 \\deg(V)=3$
-
-Graph $ii$
-$|V| = 6 \\deg(V)=3$
-
-Graph $iii$
-$|V| = 6 \\deg(V)=3$
-What am I doing wrong? It feels like I'm missing something simple but I still can't see it. Thanks in advance!
-
-REPLY [5 votes]: Isomorphism is a means of telling if two graphs are the same in some sense. A good way to think of this "sameness" is as a "relabeling", or a permutation of the vertices. This is equivalent to the formal definition of an isomorphism between graphs:
-Definition: Isomorphism of graphs. An isomorphism between graphs $G_1(\boldsymbol {V_1},E)$ and $G_2(\boldsymbol {V_2}, E')$ is a bijective (meaning it is both one-to-one and onto) function $f$ from $V_1 \rightarrow V_2$, where if $v_\alpha,v_\beta \in V_1$ are adjacent then $f(v_\alpha),f(v_\beta )\in V_2$ are also adjacent.
-One can think of this as being a relabeling of the vertex set, because if any two vertices are adjacent, relabeling them does not change that, and relabeling is obviously bijective.
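-In particular, equal vertex counts and equal degrees alone are not enough to force such a relabeling to exist. A quick illustration (assuming Python with the networkx package; $K_{3,3}$ and the triangular prism are two standard non-isomorphic $3$-regular graphs on $6$ vertices):
-
-import networkx as nx
-
-G1 = nx.complete_bipartite_graph(3, 3)    # K_{3,3}: 3-regular on 6 vertices
-G2 = nx.circular_ladder_graph(3)          # triangular prism: also 3-regular on 6 vertices
-print(sorted(d for _, d in G1.degree()))  # [3, 3, 3, 3, 3, 3]
-print(sorted(d for _, d in G2.degree()))  # [3, 3, 3, 3, 3, 3]
-print(nx.is_isomorphic(G1, G2))           # False: the prism has triangles, K_{3,3} has none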
Here is a simple example:
-
-If I move (map) each vertex as I showed (with $F$ mapping to itself), then this would be equivalent to simply moving the labels in a similar manner. However, not all graphs have the nice symmetry of this one, and so we can ask the harder question: is one graph isomorphic to another:
-
-The simple response here is to look for an explicit isomorphism (Hint: $F \mapsto L$), but a more enlightening one is to look at how we might move the vertices of the second graph to resemble the first one, so that we might see if they are indeed the same graph just "relabeled". Indeed we simply move one vertex and see they are very similar, and that a natural isomorphism can be drawn:
-
-What underpins this notion of relabeling is put more explicitly in terms of adjacency matrices. Namely, what we are doing is saying: if we permute the adjacency matrix of one graph (meaning we re-order the vertices and their corresponding columns and rows) to be the same as the adjacency matrix of another graph, then they are the same graph.<|endoftext|>
-TITLE: Smooth manifold $M$ is completely determined by the ring $F$.
-QUESTION [7 upvotes]: For any smooth manifold $M$, the collection $F = C^\infty(M, \mathbb{R})$ of smooth real valued functions on $M$ can be made into a ring, and every point $x \in M$ determines a ring homomorphism $F \to \mathbb{R}$, and hence a maximal ideal in $F$. If $M$ is compact, every maximal ideal in $F$ arises in this way from a point of $M$.
-My question is, if there is a countable basis for the topology of $M$, how do I see that every ring homomorphism $F \to \mathbb{R}$ is obtained in this way?
-Progress. We probably want to make use of an element $f \ge 0$ in $F$ such that each $f^{-1}[0, c]$ is compact? But it is not clear to me what to do from there.
-
-REPLY [3 votes]: If $M$ is second-countable, let $\varphi: F \to \mathbb{R}$ be a ring homomorphism. Construct an $f$ as follows: take a countable partition of unity (with compact supports, by second countability) $p_i$. Then $$f = p_1 + 2p_2 + 3p_3 + \dots$$ is well-defined since at any given point all but finitely many terms are zero. Moreover, it is clear that the preimage of $[0, c]$ is always compact, since when $k > c$, points outside the union of the compact supports of $p_1$ through $p_k$ will necessarily evaluate to at least $k$. Let $\varphi(f) = t$.
-Notice that $\varphi$ is actually an $\mathbb{R}$-algebra homomorphism. Obviously, it is a $\mathbb{Q}$-algebra homomorphism, and the fact that squares must be nonnegative preserves the ordering of scalars, so continuity does the rest.
-Suppose for the sake of contradiction that for each $x$, we had some $f_x$ so that $\varphi(f_x) \neq f_x(x)$. By translating, squaring, and scaling, we can find a nonnegative $f_x$ in the kernel of $\varphi$ with $f_x(x) > t$. In fact, this open condition is true on an open neighborhood of $x$. The compact set $f^{-1}([0, t])$ can be covered by finitely many of these open neighborhoods, so we can build a summed function $f'$ which is everywhere nonnegative and at least $t$ on that compact set. Then $\varphi(f + f' - t) = 0$, but $f + f' - t$ is positive everywhere and hence a unit, contradiction.<|endoftext|>
-TITLE: Algebraic surface with infinitely many exceptional curves
-QUESTION [6 upvotes]: I am learning about the classification of Projective Algebraic Surfaces (in fact, Compact Complex Surfaces) and I am troubled with the following point.
-If I understood correctly, every surface $X$ admits a (not necessarily unique) minimal model $X_{min}$, which is a surface without exceptional curves (rational curves with self-intersection $-1$). Furthermore, $X$ is obtained from $X_{min}$ after a finite number of blow-ups.
-On the other hand I read that there are examples of surfaces with infinitely many exceptional curves. My question is: how can we obtain $X$ from $X_{min}$ after a finite number of blow-ups? Can a single blow-up (or a finite number of them) add an infinite number of exceptional curves?
-Another way of phrasing this is the following: given $X$ with infinitely many exceptional curves, how can we obtain $X_{min}$ by performing only a finite number of contractions/blow-downs?
-Thanks in advance for your answers!
-
-REPLY [7 votes]: The solution of the paradox is that you may simultaneously blow up $k$ points and obtain more than $k$ exceptional curves.
-The simplest example is obtained by simply blowing up two points $P_1,P_2$ in the plane $\mathbb P^2$.
-The blown up surface $X=\tilde {\mathbb P^2}$ contains as exceptional curves not only the inverse images $E_1,E_2$ of $P_1,P_2$ but also the strict transform $\tilde L$ of the line $L=\overline {P_1P_2}$ joining $P_1$ to $P_2$:
-Indeed the self-intersection of $L$ is $+1$ and that self-intersection diminishes by $1$ at each $P_i$ after the blow-up, so that $\tilde L$ has self-intersection $1-1-1=-1$.
-And since $\tilde L$ is isomorphic to $\mathbb P^1$ it is an exceptional curve.
-Conclusion:
- $X$ has $3$ exceptional curves $E_1,E_2,\tilde L$, although it is obtained by blowing up only $2$ points in $\mathbb P^2$ (while $\mathbb P^2$ itself has no exceptional curves at all).
-If now you simultaneously blow up at least $9$ points of $\mathbb P^2$ in suitably general position you will obtain, rather surprisingly I concede, a surface with infinitely many exceptional curves: cf. Hartshorne, Remark 5.8.1, page 418.<|endoftext|>
-TITLE: Which spaces can be used as "test spaces" for the Stone-Čech compactification?
-QUESTION [10 upvotes]: The Stone-Čech compactification $\beta X$ of a completely regular space $X$ is defined by the following property: Let $X$ be a completely regular space. Let $i \colon X \hookrightarrow \beta X$ be an embedding into a compact Hausdorff space $\beta X$. Then for every continuous map $f\colon X \to K$, where $K$ is a compact Hausdorff space, there exists a unique continuous map $\widehat f \colon \beta X \to K$ such that $\widehat f \circ i =f$. (In other words, every continuous map $X\to K$ has a continuous extension $\beta X\to K$.)
-
-http://presheaf.com/?d=d4l86n4i40s4n18675m3rw6cye1p
-It is known that if we require the above property to be true not for all compact Hausdorff spaces $K$ but only for $K=[0,1]$, i.e. for the unit interval, then we get an equivalent definition. (One possible argument to show this is based on the fact that every compact Hausdorff space embeds into a power of the unit interval.)
-My question is:
-
-Which (compact Hausdorff) spaces $K$ have a similar property to the unit interval, i.e., the property that if we require the universal property from the definition of the Stone-Čech compactification to hold only for this space $K$, then we obtain an equivalent definition? Is a complete characterization known?
-Are these spaces precisely the generators of the reflective subcategory $\mathbf{CHaus}$ of the category $\mathbf{Top}$?
Here $\mathbf{CHaus}$ denotes the category of compact Hausdorff spaces and $\mathbf{Top}$ is the category of topological spaces. By a generator of a reflective subcategory I mean a space such that its reflective hull is precisely this subcategory.
-
-For example, this is true if we take the unit circle $S$, simply because $[0,1]$ is a closed subspace of $S$.
-If we take $K=\{0,1\}$ with the discrete topology, then we cannot repeat the same argument as for the unit interval. (Not every compact space is a subspace of some power $\{0,1\}^a$.) So the above is probably not true for $K=\{0,1\}$.
-
-REPLY [8 votes]: Such "test spaces" are exactly the compact Hausdorff spaces that contain a subspace homeomorphic to $[0,1]$. Clearly any such space is a test space; conversely, suppose $[0,1]$ does not embed in $K$. Then in fact every map $[0,1]\to K$ is constant, since any path-connected Hausdorff space is arc-connected. So taking $X$ to be any path-connected completely regular space, every map $X\to K$ is constant. It follows that any compactification of $X$ satisfies your universal property for $K$. Thus $K$ is not a test space.
-As for your second question, yes, these are the generators of CHaus as a reflective subcategory. For any test space $K$ and any compact Hausdorff space $X$, there is an embedding $i:X\to K^S$ for some set $S$ (this follows from the fact that $[0,1]$ embeds in $K$). Moreover, this embedding can be realized as the equalizer of a pair of maps $K^S\rightrightarrows K^T$ for some $T$: it is the equalizer of its cokernel pair $K^S\rightrightarrows Y$, and $Y$ is again compact Hausdorff, so $Y$ embeds in $K^T$ for some $T$. Composing the cokernel pair with the inclusion $Y\to K^T$, we get a pair of maps $K^S\rightrightarrows K^T$ whose equalizer is $i$. Thus $X$ is generated by $K$ using limits. Conversely, if $K$ is not a test space, then it is totally path-disconnected, and then it is easy to see that the reflective subcategory generated by $K$ consists entirely of totally path-disconnected spaces.<|endoftext|>
-TITLE: Programming and ZFC
-QUESTION [5 upvotes]: Suppose I have a simple program that implements an algorithm (say depth-first search), written in a simple imperative programming language with the standard for loops, recursions, conditional statements and so on. It takes in a well-specified input and has a well-specified output.
-Suppose I want to verify that it always produces the correct output for each input. I can treat the program as a mathematical object subject to certain rules of the programming language. I write a formal mathematical proof, assuming the usual axioms of ZFC and first-order logic, to show that this is true.
-How would I know that my program definitely works (i.e. for all inputs, it outputs correctly) due to this proof? I do know that in the ZFC axiom system, my program works because it is defined to work (it is deducible formally). However, it seems that the correctness of my program does not require the full machinery of ZFC; its only axioms are the rules of the programming language. Commonly used proof techniques like mathematical induction are founded on axioms in ZFC, but we do not yet know that these hold due only to the rules of the programming language.
-Is it possible that my program does not actually work, but it is proven to work in ZFC? If that's the case, why are all algorithms proven in the usual framework of ZFC (assuming all axioms of set theory, and first-order logic)?
-EDIT: Can formal verification of programs (with 100% certainty according to the specifications of the programming language) be done in ZFC? I am assuming it can be done under weaker axiom systems (e.g. Hoare logic)?
-
-REPLY [4 votes]: Let's look at this from a far more mathematical point of view.
-An algorithm is like a theory $T$ in some first-order logic. And you can ask if the theory proves $\varphi$. And this can be checked syntactically or in other ways. And that's fine.
-An implementation is like a model of $T$. So now instead of asking if $T$ proves $\varphi$ you are asking if $\varphi$ is true in the model $M$. This turns out to be equivalent to asking if the theory of $M$ proves $\varphi$, and that's important. Because it means we can jump between the two questions about truth and provability.
-If your implementation (including compiler and processor and whatnot) is faithful and does not deviate from the algorithm (and this means that you have to ignore, completely, all physical constraints or push them into the algorithm somehow), then the fact that you proved that the algorithm works means that your implementation works.
-If your implementation is not faithful then you are asking if the algorithm which was faithfully implemented is equivalent to the algorithm that you wanted to implement. That's a whole other question, and it depends also on your ability to extract the "true algorithm" from your implementation.<|endoftext|>
-TITLE: how to prove that ${S_N\over E[S_N]}$ converges to an exponential distribution
-QUESTION [5 upvotes]: Suppose that $\{X_1,X_2,\ldots\}$ is a sequence of iid $L^1$-random variables such that $E[X_1]\neq 0$. Define for every $n$,
-$$S_n=X_1+\cdots+X_n.$$
-Let $N$ be a geometric random variable such that
-$$P(N=k) = q^{k-1}p,\quad k=1,2,\ldots,$$
-where $q=1-p$ and $p\in(0,1)$. Assume that $N$ and $\{X_1,X_2,\ldots\}$ are all independent. Show that as $p\to0$,
-$${S_N\over E[S_N]}$$
-converges in distribution to an exponential distribution with some rate $\lambda$, and identify $\lambda$.
-
-REPLY [4 votes]: You are already on the right track and I will complete the latter part for you.
-Let $\mu = E[X_1]$ be the common mean and
-$ \varphi(t) = E[e^{itX_1}]$ be the common characteristic function.
-As stated above,
-$$ E[S_N] = E[E[S_N|N]] = E[N\mu] = \frac {\mu} {p} $$
-Now consider the characteristic function of $\displaystyle Z = \frac {S_N} {E[S_N]} = \frac {pS_N} {\mu}$:
-$$ \begin{align*} \varphi_Z(t) & = E\left[e^{it\frac {pS_N} {\mu}}\right] \\
-& = E\left[E\left[e^{it\frac {pS_N} {\mu}}|N\right]\right] \\
-& = E\left[E\left[e^{i\frac {tp} {\mu} X_1}\right]^N\right] \\
-& = E\left[\varphi\left(\frac {tp} {\mu} \right)^N\right] \\
-& = \sum_{k=1}^{+\infty} \varphi\left(\frac {tp} {\mu} \right)^k (1 - p)^{k-1}p \\
-& = p\varphi\left(\frac {tp} {\mu} \right) \sum_{k=1}^{+\infty}
-\left[\varphi\left(\frac {tp} {\mu} \right) (1 - p)\right]^{k-1} \\
-& = \frac {\displaystyle p\varphi\left(\frac {tp} {\mu}\right)}
-{\displaystyle 1 - \varphi\left(\frac {tp} {\mu} \right) (1 - p) }
-\end{align*}$$
-Note that the infinite geometric series converges because its common ratio satisfies $\left|\varphi\left(\frac {tp} {\mu}\right)(1-p)\right| \le 1-p < 1$, the characteristic function being bounded by $1$.
Since $X_1 \in \mathcal{L}^1$, $\varphi$ is differentiable and thus we can evaluate the limit via l'Hôpital's rule:
-$$ \begin{align*}
-\lim_{p\to 0} \varphi_Z(t) &= \lim_{p\to 0}
-\frac {\displaystyle p\varphi\left(\frac {tp} {\mu}\right)}
-{\displaystyle 1 - \varphi\left(\frac {tp} {\mu} \right) (1 - p) } \\
-& = \lim_{p\to 0} \frac {\displaystyle \varphi\left(\frac {tp} {\mu}\right) + p\varphi'\left(\frac {tp} {\mu}\right)\frac {t} {\mu}}
-{\displaystyle \varphi\left(\frac {tp} {\mu} \right) - (1 - p)\varphi'\left(\frac {tp} {\mu}\right)\frac {t} {\mu} } \\
-& = \frac {\varphi(0) + 0} {\displaystyle \varphi(0) - \varphi'(0)\frac {t} {\mu}} \\
-& = \frac {1} {\displaystyle 1 - i\mu \frac {t} {\mu}} \\
-& = \frac {1} {1 - it}
-\end{align*}$$
-which is the characteristic function of $\text{Exp}(\lambda = 1)$
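-(A quick simulation corroborates the limit; a minimal sketch, assuming Python with numpy, and taking $X_i \sim \text{Exp}(1)$ so that $\mu = 1$ and $S_N$ given $N$ is $\text{Gamma}(N,1)$:)
-
-import numpy as np
-
-rng = np.random.default_rng(0)
-p = 1e-3
-N = rng.geometric(p, size=200_000)  # P(N = k) = (1-p)^(k-1) p for k = 1, 2, ...
-S = rng.gamma(shape=N)              # S_N for X_i ~ Exp(1): Gamma(N, 1) samples
-Z = S * p                           # Z = S_N / E[S_N], since E[S_N] = mu/p = 1/p
-print(Z.mean())                     # close to 1, the mean of Exp(1)
-print((Z > 1.0).mean())             # close to exp(-1) = 0.3678...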
<|endoftext|>
-TITLE: Haar functions form a complete orthonormal system
-QUESTION [6 upvotes]: I want to show that the Haar functions in $L^2([0,1])$ form an orthonormal basis:
-Let $$f = 1_{[0, 1/2)} - 1_{[1/2,1)} \ \ \mbox{,} \ \ f_{j,k}(t) = 2^{j/2}f(2^jt - k).$$
-Let $\mathscr{A} = \{(j,k) : j \geq 0, k = 0, 1, 2, ..., 2^j -1\}.$ I can prove that $\ A := \{1_{[0,1]}\} \cup \{f_{j,k}: (j,k) \in \mathscr{A}\}$ is an orthonormal system in $L^2([0,1])$
-(using the fact that $f_{j,k}$ is supported on $[2^{-j}k, 2^{-j}(k+1))$, and that any two distinct Haar functions either have disjoint supports or the support of one is contained in that of the other).
-I want to show that $A$ is complete.
-Let $g \in L^2([0,1])$ with $\langle g, 1_{[0,1]}\rangle = 0$ and $\langle g, f_{j,k}\rangle = 0$ for all $(j,k) \in \mathscr{A}.$ I will show that $g = 0$ a.e. Let $$I^l_{j,k} = [2^{-j}k,\ 2^{-j}k + 2^{-j-1}), \quad I^r_{j,k} = [2^{-j}k + 2^{-j-1},\ 2^{-j}(k+1)).$$
-Then $$f_{j,k} = 2^{j/2}(1_{I^l_{j,k}} - 1_{I^r_{j,k}}).$$ So I see that $$\int_{I^l_{j,k}} g = \int _{I^r_{j,k}} g$$ for all $(j,k) \in \mathscr{A} $ and $$\int_{[0,1]} g = 0.$$
-It just "seems" that $g$ should be $0$ a.e., but I cannot think of rigorous reasons for this to happen (how to clearly show that it is true).
-
-REPLY [5 votes]: One approach is to show that every continuous function on $[0,1]$ is a uniform limit of linear combinations of Haar functions. Suppose $\phi$ is continuous, $n$ is a positive integer, and let
-$$\psi = \langle \phi, 1_{[0,1]}\rangle 1_{[0,1]} + \sum_{j\le n, k}\langle \phi, f_{j,k}\rangle f_{j,k}$$
-By construction, $\psi$ is a piecewise constant function; it is constant on subintervals of the form $[2^{-n-1}k, 2^{-n-1}(k+1))$, $k=0,\dots, 2^{n+1}-1$. I claim that on each such subinterval, the value of $\psi$ is equal to the average of $\phi$ on that subinterval: this implies uniform convergence, thanks to the uniform continuity of $\phi$.
-The proof is by induction. The base case is
-$$\int_0^1 \psi = \int_0^1 \langle \phi, 1_{[0,1]}\rangle 1_{[0,1]} = \int_0^1 \phi$$ which holds because all functions except $1_{[0,1]}$ have zero mean on $[0,1]$. Once it has been established that on some dyadic interval $I_{j,k}$ the averages of $\phi$ and $\psi$ are equal, consider its halves $I_-$ and $I_+$: then
-$$
-\int_{I_+}\psi -\int_{I_-}\psi = 2^{-j/2}\int_0^1 f_{j,k}\psi
- = 2^{-j/2} \int_0^1 f_{j,k}\phi = \int_{I_+}\phi -\int_{I_-}\phi
-$$
-which together with
-$$
-\int_{I_+}\psi + \int_{I_-}\psi = \int_I \psi = \int_I \phi = \int_{I_+}\phi + \int_{I_-}\phi
-$$
-imply that $\int_{I_+}\psi = \int_{I_+}\phi$ and $\int_{I_-}\psi = \int_{I_-}\phi$.
-
-This proof is just a rephrasing of the construction of the Haar martingale, a dyadic martingale converging to the given function.<|endoftext|>
-TITLE: Random variables defined on the same probability space with different distributions
-QUESTION [8 upvotes]: Consider the real-valued random variable $X$ and suppose it is defined on the probability space $(\Omega, \mathcal{A}, \mathbb{P})$. Assume that $X \sim N(\mu, \sigma^2)$. This means that
-$$
-(1)\text{ } \mathbb{P}(X\le x)=\mathbb{P}(\{\omega \in \Omega \text{ s.t. } X(\omega)\le x\})=\frac{1}{2}\left(1+\frac{1}{\sqrt{\pi}}\int_{-(\frac{x-\mu}{\sigma \sqrt{2}})}^{(\frac{x-\mu}{\sigma \sqrt{2}})}e^{-t^2}dt\right)
-$$
-In several books I found that we can also say that $X$ is distributed according to $\mathbb{P}$.
-Now suppose that we add another random variable $Y$ on the same probability space and assume $Y \sim U([0,1])$. This means that, for $0\leq a\leq b \leq 1$,
-$$
-(2)\text{ } \mathbb{P}(Y \in [a,b])=\mathbb{P}(\{\omega \in \Omega \text{ s.t. } Y(\omega)\in [a,b]\})=b-a
-$$
-Question: the fact that $X$ and $Y$ are defined on the same probability space but have different probability distributions: is this a contradiction? What is the relation between $\mathbb{P}$, the normal cdf and the uniform cdf? Can we say that both $X$ and $Y$ are distributed according to $\mathbb{P}$ even if they have different distributions?
-
-REPLY [2 votes]: TL;DR I think the source of your confusion is seeing $X$ and $Y$ as being both the identity random variable in $(\Omega, \mathscr{F},\mathbb P)$.
-
-I'm going to give an example of explicit exponential and uniform distributions in the same probability space $(\Omega, \mathscr{F},\mathbb P)$.
-Consider a random variable $X$ in $((0,1), \mathcal{B}(0,1), Leb)$ given by
-$$X(\omega):=\frac{1}{\lambda} \ln \frac{1}{1-\omega}, \lambda > 0$$
-It has cdf $F_X(x) = P(X \le x) = (1-e^{-\lambda x})1_{(0,\infty)}$, which we know to be the cdf of an exponentially distributed random variable. (*)
-Actually,
-$$X(1-\omega):=\frac{1}{\lambda} \ln \frac{1}{\omega}, \lambda > 0$$
-also has cdf $F_X(x) = P(X \le x) = (1-e^{-\lambda x})1_{(0,\infty)}$.
-Are all cdfs in this mysterious probability space exponential? No!
-Now consider the identity random variable $U$ in $((0,1), \mathcal{B}(0,1), Leb)$:
-$$U(\omega):=\omega$$
-It has cdf $F_U(u) = P(U \le u) = u1_{(0,1)}+1_{(1,\infty)}$, which we know to be the cdf of a uniformly distributed random variable.
-The above $X$ and $U$ have different cdfs under the same probability space. The aforementioned explicit representations of the exponential and uniform distributions in this probability space are called Skorokhod representations in $((0,1), \mathcal{B}(0,1), Leb)$.
-Now consider the identity random variable $U$ in $(\mathbb R, \mathscr B(\mathbb R), (1-e^{-\lambda \{u\}})1_{(0,\infty)})$.
-No surprise that $U$ is exponential by definition: $F_U(u) = P(U \le u) = (1-e^{-\lambda \{u\}})1_{(0,\infty)}$.
-Now you're wondering: Aha! So every random variable here is exponential, right? Well no: any distribution you can think of (uniform, Bernoulli, etc.) has a place here, and its Skorokhod representation is given by:
-$$Y(\omega) = \sup\{y \in \mathbb{R}: F(y) < \omega\}$$
-Try to see for yourself that
-$$Y(\omega) = \sup\{y \in \mathbb{R}: y1_{(0,1)}+1_{(1,\infty)} < \omega\}$$
-has uniform distribution in $(\mathbb R, \mathscr B(\mathbb R), (1-e^{-\lambda \{u\}})1_{(0,\infty)})$, i.e.
$$P(Y \le y) := P\left(\sup\{s \in \mathbb{R}: s1_{(0,1)}+1_{(1,\infty)} < \omega\} \le y\right) = y1_{(0,1)}+1_{(1,\infty)}$$
-Also try to see for yourself that $X(\omega)$ above no longer has exponential distribution in this probability space. (**)
-Conclusion: I think the source of your confusion is seeing $X$ and $Y$ as being both the identity random variable in $(\Omega, \mathscr{F},\mathbb P)$. If you were to see them explicitly, you would know that they definitely don't necessarily have the same distribution.
-What $\mathbb P$ does is tell you the probabilities of the $\omega$'s. So you know how likely the sample point $$0.5 \in \Omega = (0,1)$$ is, but not directly how likely it is that the random variable $X$ equals a number in its range, such as the real number $$X(0.5) = \frac{1}{\lambda} \ln \frac{1}{1-0.5} \in \mathbb R.$$ (*) Of course, the probability that $X$ equals the real number $X(0.5)$ is
-
-dependent on the probability of the sample point $0.5$, because $0.5=1-e^{-\lambda X(0.5)}$
-
-not expected to be the same in another probability space (assuming of course that $X$ is defined on the new probability space), because it then depends on the probability of the sample point/s $X^{-1}(X(0.5))$, aka $X \in \{X(0.5)\}$. (**)
-
-
-
-Pf of (*):
-Two steps in computing $P(X \le x)$:
-
-Find all $\omega \in \Omega = (0,1)$ s.t. $X(\omega) \le x$
-
-Compute the probability of all those $\omega$'s.
-
-
-For $x \le 0$, $P(X\leq x) = P(X \in \emptyset^{\mathbb R}) = P(\emptyset^{\Omega}) = 0$.
-For $x > 0$, $X(\omega) \le x$
-Step 1:
-$$ \iff \frac1{\lambda}\ln(\frac{1}{1-\omega}) \le x$$
-$$ \iff \omega \le \frac{e^{\lambda x} - 1}{e^{\lambda x}}$$
-$$ \iff \omega \in (0,1) \cap (-\infty,\frac{e^{\lambda x} - 1}{e^{\lambda x}})$$
-$$ \iff \omega \in (0,\frac{e^{\lambda x} - 1}{e^{\lambda x}})$$
-Step 2:
-$$Leb(\omega | \omega \in (0,\frac{e^{\lambda x} - 1}{e^{\lambda x}}))$$
-$$= Leb((0,\frac{e^{\lambda x} - 1}{e^{\lambda x}}))$$
-$$= \frac{e^{\lambda x} - 1}{e^{\lambda x}}$$
-QED
-
-Pf of (**):
-Actually $X \notin (\mathbb R, \mathscr B(\mathbb R), (1-e^{-\lambda \{u\}})1_{(0,\infty)})$ because we need $\frac{1}{1-\omega} > 0 \iff \omega < 1$.
-QED
-Same for $X(1-\omega)$, where we need $\frac{1}{\omega} > 0 \iff \omega > 0$.
-But we can further try to show $X$ is not exponential in $((-\infty,1), \mathscr B((-\infty,1)), (1-e^{-\lambda \{u\}})1_{(0,\infty)})$.
-Pf:
-For $x \le 0$, $P(X\leq x) = P(X \in \emptyset^{\mathbb R}) = P(\emptyset^{\Omega}) = 0$.
-For $x > 0$, $X(\omega) \le x$
-Step 1:
-$$ \iff \frac1{\lambda}\ln(\frac{1}{1-\omega}) \le x$$
-$$ \iff \omega \le \frac{e^{\lambda x} - 1}{e^{\lambda x}}$$
-$$ \iff \omega \in (-\infty,1) \cap (-\infty,\frac{e^{\lambda x} - 1}{e^{\lambda x}})$$
-$$ \iff \omega \in (-\infty,\min\{1,\frac{e^{\lambda x} - 1}{e^{\lambda x}}\})$$
-Step 2:
-$$P(\omega | X(\omega) \le x)$$
-$$ = P(\omega | \omega \in (-\infty,\min\{1,\frac{e^{\lambda x} - 1}{e^{\lambda x}}\}))$$
-$$= \int_{-\infty}^{\min\{1,\frac{e^{\lambda x} - 1}{e^{\lambda x}}\}} d((1-e^{-\lambda \{u\}})1_{(0,\infty)})$$
-$$ = 1-e^{-\lambda \min\{1,\,1-e^{-\lambda x}\}}$$
-Doesn't look exponential to me.
-QED<|endoftext|>
-TITLE: Prove that $\int_a^b \left( \int_c^d f(x,y)dy\right) dx=\int_c^d \left( \int_a^b f(x,y)dx\right) dy$
-QUESTION [7 upvotes]: Let $f$ be a continuous function on $[a,b]\times [c,d]$.
Prove that $$\int_a^b \left( \int_c^d f(x,y)dy\right) dx=\int_c^d \left( \int_a^b f(x,y)dx\right) dy$$
-
-First of all, note that $ \int_c^d f(x,y)dy$ is continuous in $x$ and $ \int_a^b f(x,y)dx$ is continuous in $y$, so both integrals exist.
-We have
-$$\frac{d\left[\int_a^b \left( \int_c^t f(x,y)dy\right) dx \right]}{dt}=\int_a^b f(x,t) dx$$
-(We used differentiation under the integral sign and the fundamental theorem of calculus.)
-Also
-$$\frac{d\left[\int_c^t \left( \int_a^b f(x,y)dx\right) dy \right]}{dt}=\int_a^b f(x,t) dx$$
-(Here we used the fundamental theorem of calculus.)
-Thus $$\int_a^b \left( \int_c^t f(x,y)dy\right) dx-\int_c^t \left( \int_a^b f(x,y)dx\right) dy$$
-as a function of $t$ is constant on $(c,d)$.
-It remains to show that this constant is $0$. How can I do this?
-
-REPLY [5 votes]: Denote by $g$ the function $g \colon [c,d] \to \mathbf R$,
-$$ g(t) = \int_a^b \left(\int_c^t f(x,y)\, dy\right)\, dx - \int_c^t \left( \int_a^b f(x,y)\, dx\right) \, dy, $$
-that you consider. Then $g$ is continuous on $[c,d]$, differentiable on $(c,d)$ and has $g'(t) = 0$ for $t \in (c,d)$ (you've proven this above). Hence (by the mean value theorem), $g$ is constant on $[c,d]$ (this also follows by continuity from the constancy on $(c,d)$). Therefore, for all $t$: $g(t) = g(c)$. But $g(c) = 0$, as both summands are $0$.<|endoftext|>
-TITLE: Prove that if $7^n-3^n$ is divisible by $n>1$, then $n$ must be even.
-QUESTION [8 upvotes]: I tried using the factorization of $a^n-b^n$ for odd $n$ in an attempt to work through to a situation where the factors are such that they cannot have $n$ as a factor. But I got nowhere. Here's how I proceeded -
-$$a^n-b^n=(a-b)\left(a^{n-1}+a^{n-2}b+a^{n-3}b^2+\dots+a^2b^{n-3}+ab^{n-2}+b^{n-1}\right)$$
-Here $a-b=4$ and can be ignored.
-The latter term is essentially odd and not divisible by any odd number till $9$ (easy to prove without getting into calculations).
-However, for some arbitrary odd number $x=p^k$ where $p \ge 11$ is prime, I cannot say whether the sum of two terms is divisible by $x$ or not when the two terms are individually not divisible by $x$.
-
-REPLY [6 votes]: If $n\mid 7^n-3^n$ and $n>1$, then let $p$ be the least prime divisor of $n$.
-Clearly $\gcd(21,p)=1$, so $\left(7\cdot 3^{-1}\right)^n\equiv 1\pmod{p}$, i.e. $\text{ord}_p\left(7\cdot 3^{-1}\right)\mid n$.
-By Fermat's little theorem $\text{ord}_p\left(7\cdot 3^{-1}\right)\mid p-1$. Therefore $\text{ord}_p\left(7\cdot 3^{-1}\right)\mid \gcd(n,p-1)=1$, so $7\cdot 3^{-1}\equiv 1\pmod{p}$, so $7\equiv 3\pmod{p}$, so $p\mid 7-3=4$, so $p=2$.
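-(A quick numeric sanity check of the statement; a minimal sketch, assuming Python. The printed list contains only even numbers, though not every even number appears:)
-
-print([n for n in range(2, 2000) if (pow(7, n, n) - pow(3, n, n)) % n == 0])
-# starts [2, 4, 8, 10, 16, 20, ...] -- all entries are even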
- -REPLY [10 votes]: You've stepped off a cliff in deducing that the expectation $E(X^+_n)$ is bounded above by the random variable $\sup_n X_n$. -Suggestion: Employ the argument used by Durrett in the proof of his Theorem 5.3.1. Fix a positive real $K$, define the stopping time $T=T_K$ to be the first time $n$ that $X_n$ is larger than $K$, and observe that the stopped process satisfies -$$ -X_{n\wedge T}\le K+\sup_m\xi_m^+, -$$ -so that -$$ -E(X_{n\wedge T})\le K+E(\sup_m\xi_m^+)<\infty,\qquad\forall n. -$$ -Now apply the submartingale convergence theorem to the stopped process. This yields a.s. convergence of $X_n$ on the event $\{T_K=\infty\}=\{\sup_mX_m\le K\}$. Finally, vary $K$.<|endoftext|> -TITLE: $\alpha$-computable bounded subset of $\alpha$ is in $L_\alpha$ -QUESTION [5 upvotes]: I would like to prove proposition 1.12b from Chong, Techniques of Admissible Recursion Theory: -Let $\alpha$ be an admissible ordinal. A subset $K \subseteq \alpha$ is in $L_\alpha$ ($\alpha$-th level of Goedel's Constructible Universe $L$) iff it is bounded and both $K$ and $\alpha - K$ are $\Sigma_1$-definable over $L_\alpha$. -The left to right direction is trivial. But what about the other direction? Suppose that $K \subseteq \gamma < \alpha$ and both $K$ and $\alpha - K$ are $\Sigma_1$-definable over $L_\alpha$. How could I prove that $K$ is in $L_\alpha$? -I was thinking of proving that if $\phi$ is a formula that defines $K$ over $L_\alpha$, then perhaps the same formula defines $K$ over $L_\delta$ for some $\delta < \alpha$. But the problem seems to be to guarantee that the existential witnesses are bounded in some $\delta < \alpha$. - -REPLY [4 votes]: Depending on your definition of an admissible ordinal, this question is either obvious or may require some work. -Usually, $\alpha$ is an admissible ordinal if and only if $L_\alpha$ is an admissible set. An admissible set is a transitive set satisfying some basic things like pairing, union, $\Delta_1$-separation, and $\Sigma_1$-collection. -Sometimes, an admissible set is defined using only $\Delta_0$ comprehension and collection, but you can prove $\Delta_1$ comprehension and $\Sigma_1$ collection from these axioms. -Back to your problem: Since $K$ is bounded, there is some $\delta < \alpha$ such that $K \subseteq \delta$. $\delta \in L_\alpha$. Since $K$ is $\Sigma_1$ and $\Pi_1$ by your assumption, there are $\Sigma_1$ and $\Pi_1$ formulas $\varphi$ and $\psi$ such that $L_\alpha \models (\forall x)(\varphi(x) \Leftrightarrow \psi(x))$ and $K = \{\beta < \alpha : L_\alpha \models \varphi(\beta)\} = \{\beta < \alpha : L_\alpha \models \psi(\beta)\}$. Then by $\Delta_1$-comprehension, the set $\{\beta < \delta : L_\alpha \models \varphi(\beta)\} \in L_\alpha$. But this set is just $K$ since $K$ is bounded by $\delta$.<|endoftext|> -TITLE: How can the standard deviation be interpreted when the range is partially impossible? -QUESTION [13 upvotes]: After measuring the response time of a software system I calculated the standard deviation from the samples. The average time is about 200ms, the standard deviation $$\sigma = 300ms$$ According to the image below this should mean that 68.2% of all response times should be between -100ms and 500ms. - -Image:https://en.wikipedia.org/wiki/Standard_deviation -A negative response time obviously makes no sense. How should the part of the normal distribution be interpreted that is enclosed in the red box?
-Sample data with similar avg ~202 stddev ~337: -100 -100 -200 -150 -70 -90 -110 -80 -150 -70 -190 -110 -130 -100 -100 -1500 - -REPLY [2 votes]: There's another issue that the other answers are not addressing. In applications like this one you're often not interested in the standard deviation, since it is a non-robust statistic with a breakdown point of 0%, which means that for a large sample size changing a negligible fraction of the data can result in an arbitrary change in the statistic's value. Instead, consider using quantile-based statistics, such as the inter-quartile range, which are more robust. Specifically, the $25$-th and $75$-th quantiles both have a 25% breakdown point, because you need to change at least 25% of the data to arbitrarily affect them. -This is particularly important in your situation, because of a number of factors: - -Communication delays are often caused by one-time events that result in a down-time rather than a normal delay, and of course such down-times are very long in comparison. For example think of power outages, server crashes, even sabotage... -Even if there are no down-times in your data, other factors could have a significant impact on your measurements that are completely irrelevant to your application. For example, other processes running in the background might slow down your application, or memory caching might be improving the speed for some but not all runs. There might even be occasional hardware activity that affects the speed of your application only now and then. -Usually people judge a system's responsiveness based on the average case, not the average of all cases. Most will accept that an operation might in a minority of the cases completely fail and never even return a response. An excellent example is the HTTP request. A small but nonzero proportion of packets get totally dropped from the internet and the request would have a theoretically infinite response time. Obviously people don't care and just press "Refresh" after a while.<|endoftext|> -TITLE: Inequality $\log x\le \frac{2}{e} \, \sqrt{x}$ -QUESTION [6 upvotes]: The inequality $$\log x \le \frac{2}{e} \, \sqrt{x},$$ where $\log x$ denotes the natural logarithm, is used in the proof of Theorem 4.7 in Apostol's Analytic Number Theory. -It seems that the inequality is not very difficult to prove using calculus. We could simply find the maximum/minimum of some function like $f(x)= \frac2e \sqrt{x} - \log x$ or $g(x)=\frac{\log x}{\sqrt x}$. -Are there other methods by which this inequality can be proved? Is there a way in which this inequality can be seen more directly, without having to calculate critical points of some auxiliary function? - -REPLY [3 votes]: Letting $x=(eu)^2$, the inequality is equivalent to -$$\log u\le u-1$$<|endoftext|> -TITLE: Intuitive explanation why complex differentiation is not linear over real and imaginary part -QUESTION [7 upvotes]: Suppose we have a complex valued function $f = u + iv \colon U \to \mathbb C$ defined on an open subset $U$ of $\mathbb C$, which is holomorphic. -I was asked why it is wrong to use the linearity of the differentiation operator to write -$$ -\frac{d}{dz} f(z) = \frac{d}{dz} (u(z) + iv(z)) = \frac{d}{dz} u(z) + i \frac{d}{dz} v(z). -$$ -I explained this by proving that a real - valued function that is holomorphic must be constant, and so for $\frac{d}{dz} u$ and $\frac{d}{dz} v$ to exist $u$ and $v$ must be constant in $z$. However $f$ is not necessarily constant so applying the linearity in this way is invalid.
I also showed that the real - and imaginary part functions $\Re()$ and $\Im()$ are not holomorphic. -Intuitively I think that real - valued and complex - valued functions are a different 'thing' and the $\frac{d}{dz}$ operator lives in the 'complex analysis world', so even though it is linear over complex functions it does not make sense to use it in the context of real - valued functions. However I was wondering whether there is a clearer or perhaps deeper explanation of this. - -REPLY [3 votes]: The real and imaginary parts are not analytic. The local Taylor expansion is -$u(z + h) = u(z) + ah + b\bar{h} + $ [sum of higher-degree polynomial functions of $(h,\bar{h})$]. -The degree-1 approximation of $u(z)$ near $z_0$ is thus not of the form $u(z_0) +B (\Delta z)$ but $u(z_0) + B (\Delta z) + C(\overline{\Delta z})$. -In the typical situation where $C \neq 0$, the difference quotient $\frac{u(z + h) - u(z)}{h}$ will not converge to a limit as $h \to 0$, since it equals $(B+C\frac{\overline{h}}{h}+o(1))$ which converges only along (asymptotically) radial paths. -So the perturbation with respect to $z$ makes good sense, $du = u(z + dz) - u(z)$, but its description in terms of $dz$ is no longer consistent with the idea of approximate $\mathbb{C}$-linear dependence on $dz$ alone. Rather, you need to introduce $d\overline{z}$ and the derivative operators dual to it and $dz$.<|endoftext|> -TITLE: Finding the logarithm of a matrix? -QUESTION [7 upvotes]: Find $B$ if $A=e^B$ and -$A=\begin{bmatrix} -2&1&0\\ -0&2&0\\ -0&0&4\\ -\end{bmatrix}$. -Besides, I would be very happy if you could give some general remark (best approach). I have seen the wiki article on log of a matrix but it was too complicated (for me). - -REPLY [8 votes]: You can make use of the block structure of $A$: -$$ -A = -\begin{pmatrix} -C & 0 \\ -0 & 4 -\end{pmatrix} -= e^B -= \sum_{k=0}^\infty \frac{1}{k!} B^k -\Rightarrow -B = -\begin{bmatrix} -D & 0 \\ -0 & x -\end{bmatrix} -$$ -so we need $4 = e^x \Rightarrow x = \ln(4)$. -For the block matrices we get -$$ -C = -\begin{pmatrix} -2 & 1 \\ -0 & 2 -\end{pmatrix} -= -e^D = \sum_{k=0}^\infty \frac{1}{k!} D^k -$$ -and try an upper triangular matrix -$$ -D = -\begin{pmatrix} -y & z \\ -0 & y -\end{pmatrix} -$$ -and get the powers -$$ -D^2 = -\begin{pmatrix} -y & z \\ -0 & y -\end{pmatrix} -\begin{pmatrix} -y & z \\ -0 & y -\end{pmatrix} -= -\begin{pmatrix} -y^2 & 2 y z \\ -0 & y^2 -\end{pmatrix} -\\ -D^3 = -\begin{pmatrix} -y^2 & 2 y z \\ -0 & y^2 -\end{pmatrix} -\begin{pmatrix} -y & z \\ -0 & y -\end{pmatrix} -= -\begin{pmatrix} -y^3 & 3 y^2 z \\ -0 & y^3 -\end{pmatrix} -\\ -\vdots -\\ -D^k = -\begin{pmatrix} -y^k & k y^{k-1} z \\ -0 & y^k -\end{pmatrix} -\quad (k \ge 1) -$$ -which suggests -$$ -C = -\begin{pmatrix} -2 & 1 \\ -0 & 2 -\end{pmatrix} -= -e^D -= -\begin{pmatrix} -e^y & e^y z \\ -0 & e^y -\end{pmatrix} -$$ -so $y = \ln(2)$ and $z = 1/e^y = 1/2$. -This gives -$$ -B = -\begin{pmatrix} -\ln(2) & 1/2 & 0 \\ -0 & \ln(2) & 0 \\ -0 & 0 & \ln(4) -\end{pmatrix} -$$<|endoftext|> -TITLE: On volume forms and norms on exterior powers -QUESTION [5 upvotes]: Let $V$ be an $n$-dimensional vector space. Given an inner product on $V$ one may define an inner product on the simple $k$-vectors of $\Lambda^k(V)$ by -$$\langle v_1 \wedge \cdots \wedge v_k, w_1 \wedge \cdots \wedge w_k\rangle_{\Lambda^k(V)} - := \operatorname{det}\left(\langle v_i, w_j \rangle_V\right)$$ -and extend it bilinearly. As usual this induces a norm on $\Lambda^k(V)$. -Burago/Ivanov claim in -[Lemma 2.4, p.
6] that an oriented volume form $\omega\in \Lambda^2(V^\ast) \cong \left(\Lambda^2(V)\right)^\ast$ on $V$ determines a linear isometry $J: V \to V^\ast$ in "a standard way". -I don't understand the "isometry"-part. Here is what I have so far: -Define the mapping $J:V \to V^\ast$ by $J(u)(v) := \omega(u \wedge v), v\in V$. I can show that this is an isomorphism. I can define a somewhat "dual volume form" -$\omega^\ast \in \Lambda^2(V) \cong \Lambda^2(V^{\ast\ast}) \cong \left(\Lambda^2(V^\ast)\right)^\ast$ by means of -$$\omega^\ast(l\wedge g) := \omega\left(J^{-1}(l)\wedge J^{-1}(g)\right)$$ -Thus, -$$\omega^\ast\left(J(u)\wedge J(u')\right) = \omega(u\wedge u').$$ -This looks quite promising already. -(I am able to generalise this to $\Lambda^n(V)$ via $\widetilde J: \Lambda^{n-1}(V) \to V^\ast, \sigma \mapsto \omega(\sigma \wedge \cdot)$ and Hodge dual) -The way Burago/Ivanov use the "isometry"-part in their paper is $\left|J(v) \wedge J(v')\right| = \left|v \wedge v'\right|$ though (where the norms are on the respective exterior powers). -Is there a relationship between the induced norm on the exterior power and a corresponding volume form? -Maybe by choosing an orthonormal basis for $V$ and taking the standard volume form $\varepsilon^1 \wedge \cdots \wedge \varepsilon^n$ determined by the dual basis $\{\varepsilon^i\}$ ? - -REPLY [3 votes]: I will answer my own question in some more generality. -Let $V$ be an $n$-dimensional real inner product space. -On $\Lambda^k(V)$ we define the inner product -\begin{align} - \left\langle v_1 \wedge \cdots \wedge v_k, w_1 \wedge \cdots \wedge w_k\right\rangle_{\Lambda^k(V)} - := \operatorname{det}\left(\langle v_i, w_j \rangle_V\right) -\end{align} -and extend it bilinearly. -This induces a norm -$ - \left\|\sigma\right\|_{\Lambda^k(V)} := \sqrt{\left\langle \sigma, \sigma\right\rangle_{\Lambda^k(V)}}. -$ -Given an orthonormal basis $\{e_1,e_2,\ldots,e_n\}$ for $V$ one can easily calculate that $e_1\wedge e_2\wedge \cdots\wedge e_n$ is a unit $n$-vector in the one-dimensional real vector space $\Lambda^n(V)$. -Introduce the basis $\{\varepsilon^1,\varepsilon^2,\ldots,\varepsilon^n\}$ for $V^\ast$ dual to $\{e_1,e_2,\ldots,e_n\}$, that is, $\varepsilon^i(e_j) = \delta^i_j$. -The Riesz representation theorem gives an inner product on $V^\ast$ and the dual basis is orthonormal with respect to this inner product. Then analoguously, $\varepsilon^1\wedge \varepsilon^2\wedge \ldots\wedge \varepsilon^n$ is a unit $n$-form in $\Lambda^n(V^\ast)$. -Now let $\omega\in \Lambda^n(V^\ast)$ be the standard volume form given by $\omega = \varepsilon^1\wedge \varepsilon^2\wedge \ldots\wedge \varepsilon^n$. Then $\omega(e_1\wedge e_2\wedge \cdots\wedge e_n)=1$. -Any $n$-vector $v_1\wedge v_2\wedge \cdots\wedge v_n\Lambda^n(V)$ is a multiple $c$ of the basis $n$-vector $e_1\wedge e_2\wedge \cdots\wedge e_n$. - -Lemma 1 -If $v_1\wedge v_2\wedge \cdots\wedge v_n = c e_1\wedge e_2\wedge \cdots\wedge e_n$ then - \begin{align} - \left\|v_1\wedge v_2\wedge \cdots\wedge v_n\right\|_{\Lambda^n(V)} - = \left|\omega(v_1\wedge v_2\wedge \cdots\wedge v_n)\right|. 
-\end{align} - -Proof: -\begin{align} - \left\|v_1\wedge v_2\wedge \cdots\wedge v_n\right\|_{\Lambda^n(V)} - &= \frac{\left\|v_1\wedge v_2\wedge \cdots\wedge v_n\right\|_{\Lambda^n(V)}} - {\left\|e_1\wedge e_2\wedge \cdots\wedge e_n\right\|_{\Lambda^n(V)}}\\ - &= |c| \\ - &= \left|\frac{\omega(v_1\wedge v_2\wedge \cdots\wedge v_n)} - {\omega(e_1\wedge e_2\wedge \cdots\wedge e_n)}\right| \\ - &= \left|\omega(v_1\wedge v_2\wedge \cdots\wedge v_n)\right| -\end{align} -Let us define the map $\widetilde J: \Lambda^{n-1}(V) \to V^\ast$ by $\widetilde J(\sigma)(v) := \omega(\sigma \wedge v)$. -This map is an isomorphism which can be seen by a straight forward calculation and I won't do it here. -The Hodge dual gives an isomorphism $\star: \Lambda^k(V) \to \Lambda^{n-k}(V)$ which is characterised by -\begin{align} - (\star \lambda) \wedge \theta = \left\langle\lambda,\theta\right\rangle_{\Lambda^{k}(V)} e_1\wedge e_2\wedge \cdots\wedge e_n -\end{align} -where $\lambda, \theta \in \Lambda^k(V)$ are arbitrary. -The composite map $J := \widetilde J\circ\star: V \to V^\ast$ is therefore also an isomorphism. -Define an $n$-vector $\omega^\ast := J_\ast \omega \in \Lambda^n(V) \cong \left(\Lambda^n(V^\ast)\right)^\ast$ as the pushforward of the volume form $\omega$ by $J$, that is, -\begin{align} - \omega^\ast(\phi^1\wedge \phi^2\wedge\cdots\wedge \phi^n) - := \omega\left(J^{-1}(\phi^1)\wedge J^{-1}(\phi^2)\wedge\cdots\wedge J^{-1}(\phi^n)\right). -\end{align} -for $\phi^1\wedge \phi^2\wedge\cdots\wedge \phi^n\in \Lambda^n(V^\ast)$. -This is well-defined because $J$ is an isomorphism and nonzero because $\omega$ is. Therefore, $\omega^\ast$ is a volume form on $V^\ast$ (what I called dual volume form in the question) - -Lemma 2 -For all $i=1,2,\ldots,n$ it holds that $J(e_i) = \epsilon^i$ and thus - \begin{align*} - \omega^\ast(\epsilon^1\wedge\epsilon^2\wedge\cdots\wedge\epsilon^n) &= 1. -\end{align*} - -Proof: -By the equation for the Hodge dual we have -\begin{align*} - J(e_i)(e_j) - &= (\widetilde J \circ \star)(e_i)(e_j) - = \widetilde J (\star e_i)(e_j) \\ - &= \omega(\star e_i \wedge e_j) \\ - &= \omega\left(\left\langle e_i,e_j\right\rangle_{V} e_1\wedge e_2\wedge\cdots\wedge e_n\right) \\ - &= \delta^i_j \omega\left(e_1\wedge e_2\wedge\cdots\wedge e_n\right) - = \delta^i_j -\end{align*} -The second assertation follows from the definition of $\omega^\ast$. -Finally, we can prove the following result. - -Proposition -Let $\{e_1,e_2,\ldots,e_n\}$ be an orthonormal basis for $V$ and $\omega$ the standard volume form. The isomorphism $J$ as above is isometric in the following sense - \begin{align*} - \left\|J(v_1)\wedge J(v_2)\wedge\cdots\wedge J(v_n)\right\|_{\Lambda^n(V^\ast)} - = \left\|v_1\wedge v_2\wedge\cdots\wedge v_n\right\|_{\Lambda^n(V)} -\end{align*} - for all $v_1\wedge v_2\wedge\cdots\wedge v_n\in \Lambda^n(V)$. 
- -Proof: -Lemma 1 and 2 yield -\begin{align*} - \left\|J(v_1)\wedge J(v_2)\wedge\cdots\wedge J(v_n)\right\|_{\Lambda^n(V^\ast)} - &= \frac{\left\|J(v_1)\wedge J(v_2)\wedge\cdots\wedge J(v_n)\right\|_{\Lambda^n(V^\ast)}} - {\left\|\epsilon^1\wedge\epsilon^2\wedge\cdots\wedge \epsilon^n\right\|_{\Lambda^n(V^\ast)}} \\ - &= \left|c\right| \\ - &= \left|\frac{\omega^\ast\left(J(v_1)\wedge J(v_2)\wedge\cdots\wedge J(v_n)\right)} - {\omega^\ast\left(\epsilon^1\wedge\epsilon^2\wedge\cdots\wedge \epsilon^n\right)}\right| \\ - &= \left|\omega^\ast\left(J(v_1)\wedge J(v_2)\wedge\cdots\wedge J(v_n)\right)\right| \\ - &= \left|\omega(v_1\wedge v_2\wedge \cdots \wedge v_n)\right| \\ - &= \left\|v_1\wedge v_2\wedge \cdots \wedge v_n\right\|_{\Lambda^n(V)}. -\end{align*}<|endoftext|> -TITLE: Description of generated Grothendieck topology -QUESTION [5 upvotes]: Let $C$ be a small category, and let $\tau$ be a set of sieves in $C$. Assume that $\tau$ contains all the maximal sieves, and is stable under pullbacks. How to describe the Grothendieck topology $\tau'$ generated by $\tau$ explicitly? I guess that $$\tau'(X) = \{T \text{ sieve on } X : \exists S \in \tau(X) \forall (a : Y \to X)\in S \,(a^* (T) \in \tau(Y))\}$$ works, but I am not sure. Perhaps we have to iterate this. - -REPLY [4 votes]: Here is a one-step construction. Say a $\tau$-tree on an object $X$ in $\mathcal{C}$ is a set $\Phi$ that satisfies the following conditions: - -Every element of $\Phi$ is a composable sequence of morphisms in $\mathcal{C}$, say $(f_1, \ldots, f_n)$, such that ($f_1 \circ \cdots \circ f_n$ is defined and) $\operatorname{codom} f_1 = X$. -The empty sequence is in $\Phi$. -If $(f_1, \ldots, f_n, f_{n+1}) \in \Phi$ then $(f_1, \ldots, f_n) \in \Phi$. -For every $(f_1, \ldots, f_n) \in \Phi$, -$$\{ u : (f_1, \ldots, f_n, u) \in \Phi \}$$ -is either empty or a $\tau$-sieve. (If this set is empty, we say $(f_1, \ldots, f_n)$ is a leaf of $\Phi$.) -Every element of $\Phi$ occurs as a prefix of some leaf of $\Phi$. - -Now, say a $\tau$-covering sieve on $X$ is a sieve on $X$ that contains -$$\{ f_1 \circ \cdots \circ f_n : (f_1, \cdots, f_n) \text{ is a leaf of } \Phi \}$$ -for some $\tau$-tree $\Phi$ on $X$. I leave it to you to verify that this defines the smallest Grothendieck topology on $\mathcal{C}$ that contains $\tau$.<|endoftext|> -TITLE: Expectation of maximum of arithmetic means of i.i.d. exponential random variables -QUESTION [7 upvotes]: Given the sequence $(X_n), n=1,2,... $, of iid exponential random variables with parameter $1$, define: -$$ M_n := \max \left\{ X_1, \frac{X_1+X_2}{2}, ...,\frac{X_1+\dots+X_n}{n} \right\} $$ -I want to calculate $\mathbb{E}(M_n)$. Running a simulation leads me to believe that -$$ \mathbb{E}(M_n)=1+\frac{1}{2^2}+\cdots+\frac{1}{n^2} = H_n^{(2)}.$$ -Is this correct? If yes, how would one go proving it? I tried using induction and the fact that $M_{n+1}=\max \{M_n, \frac{1}{n}(X_1+\cdots+X_{n+1}) \}$ along with the equality $E(X_1|X_1+\cdots+X_{n+1})=\frac{1}{n}(X_1+\cdots+X_{n+1})$ but didn't manage to accomplish anything. - -REPLY [4 votes]: For any $x>0$ and $n>1$, the following relation holds (with $\mathbb P(M_1\leqslant x)=1-e^{-x}$): - -$$ -\mathbb P(M_n \leqslant x)=\mathbb P(M_{n-1} \leqslant x) - e^{-nx}\frac{x^{n-1}n^{n-2}}{(n-1)!}\tag{1} -$$ - -Consequently, $\mathbb P(M_n \leqslant x) = 1 - \sum\limits_{r=1}^{n} e^{-rx} \frac{x^{r-1}r^{r-2}}{(r-1)!}$. 
Therefore, -$$\mathbb E[M_n]=\int\limits_{0}^{\infty} \mathbb P(M_n>x) \mathrm dx = \sum\limits_{r=1}^{n}\int\limits_{0}^{\infty}e^{-rx}\frac{x^{r-1}r^{r-2}}{(r-1)!}\mathrm dx = \sum\limits_{r=1}^{n}\frac{1}{r^2}\,.$$ - - Proof of $(1)$: -$$ -\mathbb P(M_{n-1} \leqslant x) - \mathbb P(M_n \leqslant x) = e^{-nx}\int\limits_{0}^{x}\int\limits_{0}^{2x-x_1}\ldots\int\limits_{0}^{(n-1)x-\sum_{i=1}^{n-2}x_i}\mathrm dx_{n-1} \ldots \mathrm dx_1 \\= e^{-nx}\frac{x^{n-1}n^{n-2}}{(n-1)!}\,, -$$ -where the volume integral may be evaluated by successive application of Leibniz's integral rule .<|endoftext|> -TITLE: fair die or not, from 3D printer -QUESTION [9 upvotes]: I made a 3D printed die today, but depending on the heat applied, it may or may not be a "fair" die (i.e. have an equal chance of landing on each face). -I have just tried rolling it 150 times. The frequency results came out to: -$$ \begin{array}{c|c} \hline -1's & 21 \\ -\hline - 2's & 30 \\ -\hline -3's & 23 \\ -\hline -4's & 31 \\ -\hline -5's & 21 \\ -\hline -6's & 24 \\ -\hline -\end{array} $$ -How would I calculate the chance that this die is fair? - -REPLY [9 votes]: Goodness-of-fit test. Computation in R. Results agree with Comment by @Peter. - obs = c(21, 30, 23, 31, 21, 24) - chisq.test(obs) - - Chi-squared test for given probabilities - - data: obs - X-squared = 3.92, df = 5, p-value = 0.561 - -If observed counts are $X_i$ and expected counts are $E = 150/6 = 25,$ -Then the chi-squared goodness-of-fit statistic is -$Q = \sum_{i=1}^6 (X_i - E)^2/E,$ which is approximately -distributed as $Chisq(DF = 5).$ The critical value for a -test at level 5% is 11.07. We fail to reject the null hypothesis -that all six faces are equally likely because $Q = 3.92 < 11.07.$ -Power. However, I'm wondering if 150 rolls is enough. Suppose your -die is markedly biased so that faces 1, 2, and 3 each have probability 5/36 and faces 4, 5, and 6 each have probability 7/36. Then the following -simulation shows that only about 27 in 100 tests with 150 rolls -would reject the hypothesis of fairness. That is, the power of -the goodness-of-fit test against this particular degree of bias -is about 27%. More modestly biased dice would fail the test at an even lower rate. - m = 10^5; q = numeric(m); E = 150/6 - for(i in 1:m) { - faces = sample(1:6, 150, repl=T, prob=c(5,5,5,7,7,7)/36) - x = table(faces); q[i]=sum((x-E)^2/E)} - mean(q > qchisq(.96, 5)) - ## 0.2702 - -The histogram shows values of $Q$ for 100,000 tests, each using 150 rolls of such a biased die. The vertical line is the critical value for a test at level 5%. The curve is the density of $Chisq(5)$. - -Prompted by a comment, I ran a slight modification of the R code that shows -89% power for the same biased die as above, but using 600 rolls for the test. The corresponding graph is shown below. -Note: I have posted a Bayesian analysis of the data for face 1 -separately.<|endoftext|> -TITLE: Evaluate $\lim_{R\to\infty}\left(\int_0^R\left|\frac{\sin x}{x}\right|dx-\frac{2}{\pi}\log R\right)$ -QUESTION [6 upvotes]: Is there a closed form of -$$\lim_{R\to\infty}\left(\int_0^R\left|\frac{\sin x}{x}\right|dx-\frac{2}{\pi}\log R\right)$$ -I am pretty interested whether we can find out a closed form of this limit. 
-We can show that for $R=n\pi,n\in\mathbb{N}$, we have -$$\begin{aligned} -\int_0^R\left|\frac{\sin x}{x}\right|dx&=\sum_{k=0}^{n-1}\int_{k\pi}^{(k+1)\pi}\frac{|\sin x|}{x}dx\\ -&=\sum_{k=0}^{n-1}\int_{0}^{\pi}\frac{\sin x}{(k+1)\pi-x}dx\\ -&\leq \int_0^\pi\frac{\sin x}{\pi-x}dx+\sum_{k=1}^{n-1}\int_{0}^{\pi}\frac{\sin x}{k\pi}dx\\ -&=\int_0^\pi\frac{|\sin(\pi-x)|}{x}dx+\sum_{k=1}^{n-1}\frac{2}{k\pi}\\ -&=\int_0^\pi\frac{\sin x}{x}dx+\frac{2}{\pi}\sum_{k=1}^{n-1}\frac{1}{k} -\end{aligned}$$ -On the other hand we have -$$\begin{aligned} -\int_0^R\left|\frac{\sin x}{x}\right|dx&\geq \sum_{k=0}^{n-1}\int_{0}^{\pi}\frac{\sin x}{(k+1)\pi}dx\\ -&=\sum_{k=1}^n\frac{1}{k\pi}\int_0^\pi\sin xdx\\ -&=\frac{2}{\pi}\sum_{k=1}^n\frac{1}{k} -\end{aligned}$$ -Then I tried to apply the squeeze rule, but this does not lead to anything appetizing. Anybody know any tricks for this problem? - -REPLY [2 votes]: Getting a closed form seems probably hard, if not impossible. But at least we can show the limit exists. This follows from the following inequality: -$$\left|\int_{k\pi}^{(k+1)\pi}\frac{|\sin(t)|}{t}-\frac2\pi\int_{k\pi}^{(k+1)\pi}\frac1t\right|\le\frac{c}{k^2},\quad(1)$$ -which you prove by comparing both integrals to $\frac2{k\pi}$. For the first, $$\int_{k\pi}^{(k+1)\pi}\frac{|\sin(t)|}{t}-\frac2{k\pi} -=\int_{k\pi}^{(k+1)\pi}|\sin(t)|\left(\frac1t-\frac1{k\pi}\right).$$Now if $k\pi\le t\le(k+1)\pi$ then $$\left|\frac1t-\frac1{k\pi}\right|=\frac{t-k\pi}{k\pi t}\le\frac{1}{k^2\pi}.$$Inserting this above shows that $$\left|\int_{k\pi}^{(k+1)\pi}\frac{|\sin(t)|}{t}-\frac2{k\pi}\right|\le\frac2{k^2\pi}.\quad(2)$$ -Similarly $$\left|\frac2\pi\int_{k\pi}^{(k+1)\pi}\frac1t-\frac2{k\pi}\right|\le\frac c{k^2},\quad(3)$$and then (1) follows from (2) and (3).<|endoftext|> -TITLE: Concrete Mathematics - How is it that A(2n + 1) = 2A(n)? -QUESTION [7 upvotes]: This might sound like a stupid question, and I'm pretty sure I know the answer to this question, but I'm not certain. -Anyway, on page 14 of Concrete Mathematics, the author has just finished going over the Josephus problem: -Josephus -$$ J(1) = 1;$$ -$$ J(2n) = 2J(n) - 1;$$ -$$ J(2n + 1) = 2J(n) + 1 $$ -He then derives a closed-form (as I understand it) representation of $J(n)$, being: -$$ J(2^m + l) = 2l + 1$$ -where, -$$0 \le l < 2^m; n = 2^m + l, \text{for} \space n \ge 1$$ -In general, for each version of $J$, he defines three corresponding constants: $\alpha$, $\beta$, $\gamma$: -Recurrence 1.11 (as per the book) -Let $f(n)$ represent the general form of $J(n)$: -$$ f(1) = \alpha $$ -$$ f(2n) = 2f(n) + \beta$$ -$$ f(2n + 1) = 2f(n) + \gamma$$ -Where $J(n) \implies (\alpha, \beta, \gamma) = (1, -1, 1)$ -He then derives a hypothesis, which involves this form of $f(n)$: -$$f(n) = \alpha A(n) + \beta B(n) + \gamma C(n)$$ -where, -$$ A(n) = 2^m$$ -$$ B(n) = 2^m - 1 - l$$ -$$ C(n) = l$$ -So, he begins his proof by "choosing particular values and combining them"; -notably, he selects the constants $(\alpha, \beta, \gamma) = (1, 0, 0)$. This implies that $f(n) = A(n)$. -The result yields the following: -$$ A(1) = 1; $$ -$$ A(2n) = 2A(n), \text{for} \space n \ge 1 $$ -$$ A(2n + 1) = 2A(n), \text{for} \space n \ge 1 $$ -My confusion stems from the fact that, all of a sudden, we're mapping -$A(2n + 1) = 2A(n)$, with a 1 getting eaten by the function... -How is it that $A(2n + 1) = A(2n) = 2A(n)$? -Are these implying that -$$ A(2n + 1) = 2A(n) + 1\gamma$$ -with $\gamma = 0$?
- -REPLY [4 votes]: In this type of problem you start with the function defined for only one value: typically $n=1$. You then use a bootstrapping mechanism to define the function for further values of $n$. -Example 1 -We could have $T(1)=5$ and $T(n+1)=3T(n)$. -This would give the sequence $T(1)=5, T(2)=15, T(3)=45, T(4)=135, ...$ -In this case the sequence steps through all possible values of $n$ starting from $n=1$ and continuing forever... -Example 2 -Now consider $T(1)=5$ and $T(2n)=3T(n)-2$. -This would give the sequence $T(1)=5, T(2)=13, T(4)=37, T(8)=109, ...$ -In this case the sequence is limited to certain values of $n$; we have no idea what $T(3)$ or $T(5)$ or $T(6)$ might be. -We need a mechanism for "filling in the gaps", so another definition is required. -If we have $T(2n+1)=5T(n)-18$, then this would give us $T(3)=7$. -We already have $T(4)$. -$T(5)$ can be found by using $T(2 \times 2+1)=5T(2)-18=47$. -$T(6)$ can now be found by using $T(2 \times 3)=3T(3)-2=19$. -$T(7)$ can be found by using $T(2 \times 3+1)=5T(3)-18=17$. -This mechanism will now give the sequence for all values of $n$. -Your concern is about the function $A(n)$. This is defined by the system: -$A(1)=\alpha$ -$A(2n)=2A(n)+\beta$ -$A(2n+1)=2A(n)+\gamma$ -but in the special case where $\alpha =1, \beta=0, \gamma=0$ we get: -$A(1)=1$ -$A(2n)=2A(n)+0=2A(n)$ -$A(2n+1)=2A(n)+0=2A(n)$ -This does have the curious effect of having $A(2n)=A(2n+1)$, but this will happen whenever $\beta=\gamma$. -Consider -$A(1)=\alpha$ -$A(2n)=2A(n)+\beta$ -$A(2n+1)=2A(n)+\gamma$ -where $\alpha =3, \beta=5, \gamma=5$ -This gives -$A(1)=3$ -$A(2n)=2A(n)+5$ -$A(2n+1)=2A(n)+5$ -The sequence will go: -$A(1)=3$ -$A(2)=2 \times 3 + 5=11$ -$A(3)=2 \times 3 + 5=11$ -$A(4)=2 \times A(2) + 5=27$ -$A(5)=2 \times A(2) + 5=27$ -$A(6)=2 \times A(3) + 5=27$ -$A(7)=2 \times A(3) + 5=27$ -$A(8)=2 \times A(4) + 5=59$ -$A(9)=2 \times A(4) + 5=59$ -$A(10)=2 \times A(5) + 5=59$ -$A(11)=2 \times A(5) + 5=59,...$<|endoftext|> -TITLE: Given $\mathbb Q$ and $X_t$ is $\mathbb Q$-Brownian, find $\frac{d\mathbb Q}{d\mathbb P}$ / Uniqueness of Brownian or Radon-Nikodym derivative -QUESTION [7 upvotes]: The problem: - -Let $T >0$, and let $(\Omega, \mathscr F, \{ \mathscr F_t \}_{t \in [0,T]}, \mathbb P)$ be a filtered probability space where $\mathscr F_t = \mathscr F_t^W$ where $W = \{W_t\}_{t \in [0,T]}$ is standard $\mathbb P$-Brownian motion. -Let $X = \{X_t\}_{t \in [0,T]}$ be a stochastic process where $X_t = W_t + \sin t$, and let $\mathbb Q$ be an equivalent probability measure s.t. $X$ is standard $\mathbb Q$-Brownian motion. -Give $\frac{d \mathbb Q}{d \mathbb P}$. - -Girsanov Theorem: - -Let $T >0$, and let $(\Omega, \mathscr F, \{ \mathscr F_t \}_{t \in [0,T]}, \mathbb P)$ be a filtered probability space where $\mathscr F_t = \mathscr F_t^W$ where $W = \{W_t\}_{t \in [0,T]}$ is the standard $\mathbb P$-Brownian motion. -Let the Girsanov kernel $\{\theta_t\}_{t \in [0,T]}$ be a $\mathscr F_t$-adapted stochastic process s.t. $\int_0^T \theta_s^2 ds < \infty$ a.s. and $\{L_t\}_{t \in [0,T]}$ is a $( \mathscr F_t , \mathbb P)$ martingale where -$$L_t := \exp(-\int_0^t \theta_s dW_s - \frac 1 2 \int_0^t \theta_s^2 ds)$$ -Let $\mathbb Q$ be the probability measure defined by -$$Q(A) = \int_A L_T dP \ \forall A \in \ \mathscr F$$ -or $$L_T = \frac{d \mathbb Q}{d \mathbb P}$$ -Then $\{W_t^Q\}_{t \in [0,T]}$ defined by -$$W_t^Q := W_t + \int_0^t \theta_s ds$$ -is standard $\mathbb Q$-Brownian motion. 
- - -The solution given: -$$X_t = W_t + \int_0^t \cos s ds$$ -Let $\theta_t = \cos t$: - -It is $\mathscr F_t$-adapted - -$\int_0^T \theta_s^2 ds < \infty$ a.s. - -$E[\exp(\frac 1 2 \int_0^T \theta_t^2 dt)] < \infty$ - - -Then $\{L_t\}_{t \in [0,T]}$ is a $( \mathscr F_t , \mathbb P)$ martingale, by Novikov's condition, where -$$L_t := \exp(-\int_0^t \cos s dW_s - \frac 1 2 \int_0^t \cos^2 s ds)$$ -Thus, by Girsanov's Theorem, we have -$$\frac{d\mathbb Q}{d\mathbb P} = L_T...?$$ - -How exactly does that last line follow? -What I find strange is that the Girsanov Theorem defines $\mathbb Q$ and then concludes $X_t$ is standard $\mathbb Q$-Brownian motion while the problem says there is some $\mathbb Q$ s.t. $X_t$ is standard $\mathbb Q$-Brownian motion and then asks about $\frac{d \mathbb Q}{d \mathbb P}$. Is the problem maybe stated wrong? -To say that $L_T$ is indeed the required density $\frac{d \mathbb Q}{d \mathbb P}$, I think we need to use the converse of the Girsanov Theorem (or here), or maybe the problem should instead give us $\frac{d \mathbb Q}{d \mathbb P}$ and then ask us to show that $L_T = \frac{d \mathbb Q}{d \mathbb P}$ possibly showing that $E[\frac{d \mathbb Q}{d \mathbb P} | \mathscr F_t] = L_t$ or some other route. - -I tried something slightly different: -I define $\hat{\mathbb P}$ s.t. -$$L_T = \frac{d\hat{\mathbb P}}{d\mathbb P}$$ -or -$$\hat{\mathbb P} = \int_A L_T d\mathbb P$$ -It follows by Girsanov Theorem that $X_t$ is standard $\hat{\mathbb P}$-Brownian motion. Since we are given that there is some $\mathbb Q$ equivalent to $\mathbb P$ s.t. $X_t$ is also standard $\mathbb Q$-Brownian motion, it follows by the uniqueness of the Radon-Nikodym derivative that -$$\frac{d\hat{\mathbb P}}{d\mathbb P} = \frac{d\mathbb Q}{d\mathbb P}$$ -$\therefore, \frac{d\mathbb Q}{d\mathbb P}$ is given by $L_T$. -Is that right? I think I'm missing a step somewhere. -So, is that indeed what the solution given is meant to be but just omitted pointing out uniqueness of the Radon-Nikodym derivative, if such justification is right? - -Edit based on comment below and this: Even if Radon-Nikodym derivative is unique, $\mathbb Q$ may not be unique? If so, is it then that $\hat{\mathbb P}$ is merely a candidate for one of many possible $\mathbb Q$'s? -I think we conclude $\hat{\mathbb P} = \mathbb Q$ based on $X_t$ being standard Brownian motion under both measures. Is there a proposition for that? Uniqueness of Brownian motion measure or something? - -REPLY [2 votes]: The R-N derivative process $Z_t:=d\Bbb Q|_{\mathcal F_t}/d\Bbb P|_{\mathcal F_t}$ is a strictly positive $\Bbb P$-martingale. As such it can be written as a stochatic exponential $\exp(M_t-{1\over 2}\langle M\rangle_t)$, with $M$ a $(\mathcal F_t^W,\Bbb P)$-local martingale. Thus $M$ admits a stochastic integral representation as $M_t=\int_0^t H_s\,dW_s$, with $H$ predicatable and $\int_0^T H_s^2\,ds<\infty$ $\Bbb P$-a.s. By Girsanov's theorem, the process $W_t-\int_0^t H_s\,ds$ is a $\Bbb Q$-local martingale. By hypothesis, $W_t+\sin t$ is also a $\Bbb Q$-local martingale. Subtracting we find that the process $\int_0^t H_sds+\sin t$ is a continuous $\Bbb Q$-local martingale that is also of finite variation. Consequently, $\int_0^t H_sds+\sin t=0$ for all $t\ge0$, $\Bbb Q$-a.s. (hence also $\Bbb P$-a.s.). It follows that $H_t(\omega)=-\cos t$ for $\Bbb P\otimes \lambda$-a.e $(\omega,t)\in \Omega\times[0,T]$. (Here $\lambda $ is Lebesgue measure.) 
In particular, $M_t=-\int_0^t\cos s\,dW_s$, and $d\Bbb Q/d\Bbb P=\exp(-\int_0^T\cos s\,dW_s-{1\over 2}\int_0^T\cos^2s\,ds)$.<|endoftext|> -TITLE: Generating set of countable, co-countable sigma algebra on $\mathbb{R}$ -QUESTION [9 upvotes]: I am trying to prove that the countable, co-countable sigma algebra on $\mathbb{R}$ cannot be countably generated. -In more precise terms. -Let $\Sigma$ be the $\sigma$-algebra generated by countable subsets of $\mathbb{R}$, that is -$$ \Sigma = \sigma (\{A\subseteq \mathbb{R} \:|\: A \textrm{ is countable}\})$$ -It is easy to see that $A\in \Sigma$ iff $A$ is countable or co-countable. -Question: Is there a countable family $\{A_n\}_{n\in\mathbb{N}}$ such, for all $n\in\mathbb{N}$, $A_n\in \Sigma$ and -$$ \Sigma = \sigma (\{A_n \:|\: n\in\mathbb{N}\})?$$ -I think the answer is NO, and I am trying to prove it. Can someone please help me in proving this? -My attempt is to prove by contradiction. That is assuming that the countable generating set exists then to show that sigma algebra generated by this set would miss some singletons of $\mathbb{R}$. Since the given sigma algebra contains all singletons this leads to contradiction. I am following this approach because I know that the set of all singletons generate the given sigma algebra and they are uncountable. - -REPLY [8 votes]: Your idea to prove by contradiction is correct. Here are the details. -Suppose there is a countable family $\{A_n\}_{n\in\mathbb{N}}$ such, for all $n\in\mathbb{N}$, $A_n\in \Sigma$ and -$$ \Sigma = \sigma (\{A_n \:|\: n\in\mathbb{N}\})$$ -For each $n\in\mathbb{N}$, define -\begin{align} -&B_n = A_n & \textrm{ if $A_n$ countable}; -\\& B_n = A_n^c & \textrm{ if $A_n$ cocountable} -\end{align} -Then we have, for all $n\in\mathbb{N}$, $B_n$ is countable and, it is easy to see that: -$$ \Sigma = \sigma (\{A_n \:|\: n\in\mathbb{N}\})= \sigma (\{B_n \:|\: n\in\mathbb{N}\}) \tag{1}$$ -Let $C=\bigcup_{n\in\mathbb{N}}B_n$. Since $C$ is a countable union of countable sets, we have that $C$ is countable. -Since, for each $n\in\mathbb{N}$, $B_n$ is a countable subset of $C$, we have $B_n\in \sigma(\{\{p\} \:|\: p\in C\})$ and so we have -$$\sigma (\{B_n \:|\: n\in\mathbb{N}\})\subseteq \sigma(\{\{p\} \:|\: p\in C\}) $$ -On the other hand, for each $p\in C$, $\{p\}\in \Sigma$ (because $\{p\}$ is obviously countable). So, considering $(1)$, for each $p\in C$, $\{p\}\in \sigma (\{B_n \:|\: n\in\mathbb{N}\})$, and we can conclude that -$$\sigma(\{\{p\} \:|\: p\in C\}) \subseteq \sigma (\{B_n \:|\: n\in\mathbb{N}\})$$ -and so we have -$$\Sigma= \sigma (\{B_n \:|\: n\in\mathbb{N}\})= \sigma(\{\{p\} \:|\: p\in C\}) $$ -Let $\Sigma_0= \{E \:|\: E\subset C\} \cup \{E\cup C^c \:|\: E\subset C \}$. - It is easy to prove that $\Sigma_0$ is a $\sigma$-algebra, and for each $p\in C$, $\{p\}\in \Sigma_0$. So -$$\Sigma= \sigma(\{\{p\} \:|\: p\in C\}) \subseteq \Sigma_0 \tag{2}$$ -Now, note that, since $C$ is countable, $\mathbb{R}\setminus C\neq \emptyset$, that is, $C^c \neq \emptyset$. Let $q$ be any element in $C^c$. We have $\{q\}\in \Sigma$ (because $\{q\}$ is obviously countable) but $\{q\}\notin \Sigma_0$. Contradiction. -Remark 1: We can easily prove that $$\sigma(\{\{p\} \:|\: p\in C\}) = \Sigma_0$$ -but all we need is the inclusion presented in $(2)$. -Remark 2: All we used from $\mathbb{R}$ is that it is uncountable. 
The proof above works for any uncountable space $\Omega$.<|endoftext|> -TITLE: $f(g(h(x)))=0$ has $8$ real roots -QUESTION [10 upvotes]: Find all quadratic polynomials $f(x),g(x)$ and $h(x)$ such that the polynomial $f(g(h(x)))=0$ has roots $1,2,3,4,5,6,7$ and $8$. -I don't know what to do. Making a $8$ degree equation is quite tedious. Thanks. - -REPLY [3 votes]: The polynomial $$F(x)=\prod_{1\le k\le 8}(x-k)$$ has a derivative equal to -$$4(2x-9)(x^6-27x^5+288x^4-1539x^3+4299x^2-5886x+3044)$$ where the second 6-degree factor is irreducible (verify this, for example, in WolframAlpha). -Therefore if $F(x)=f(g(h(x)))$ then -$$F’(x)=f’(g(h(x)))\cdot g’(h(x))\cdot h’(x)$$ which is a contradiction because $F’(x)$ is a product of only two irreducible factors.<|endoftext|> -TITLE: Trilogarithm $\operatorname{Li}_3(z)$ and the imaginary golden ratio $i\,\phi$ -QUESTION [10 upvotes]: I experimentally discovered the following conjectures: -$$\Re\Big[1800\operatorname{Li}_3(i\,\phi)-24\operatorname{Li}_3\left(i\,\phi^5\right)\Big]\stackrel{\color{gray}?}=100\ln^3\phi-47\,\pi^2\ln\phi-150\,\zeta(3),\tag1$$ -$$\Im\Big[720\operatorname{Li}_3(i\,\phi)-320\operatorname{Li}_3\left(i\,\phi^3\right)-48\operatorname{Li}_3\left(i\,\phi^5\right)\Big]\stackrel{\color{gray}?}=9\,\pi^3-780\,\pi\ln^2\phi,\tag2$$ -where $\phi=\frac{1+\sqrt5}2$ is the golden ratio and $\operatorname{Li}_3(z)$ is the trilogarithm. They check numerically with at least $20000$ decimal digits. It appears that Maple and Mathematica know nothing about these identities. -Are these known identities? How can we prove them? - -Update: It also appears that -$$\begin{align}&\Re\operatorname{Li}_3(i\,\phi)\stackrel{\color{gray}?}=\frac1{32}\operatorname{Li}_3\left(\phi^{-4}\right)+\frac3{16}\operatorname{Li}_3\left(\phi^{-2}\right)-\frac38\ln^3\phi-\frac14\zeta(3)\\ -\,\\ -&\Re\operatorname{Li}_3\left(i\,\phi^3\right)\stackrel{\color{gray}?}=\frac9{32}\operatorname{Li}_3\left(\phi^{-4}\right)+\frac12\operatorname{Li}_3\left(\phi^{-3}\right)+\frac38\operatorname{Li}_3\left(\phi^{-2}\right)-\frac38\operatorname{Li}_3\left(\phi^{-1}\right)-\frac{43}8\ln^3\phi-\frac{15}{32}\zeta(3)\\ -\,\\ -&\Re\operatorname{Li}_3\left(i\,\phi^5\right)\stackrel{\color{gray}?}=\frac{75}{32}\operatorname{Li}_3\left(\phi^{-4}\right)-\frac58\operatorname{Li}_3\left(\phi^{-2}\right)-\frac{45}2\ln^3\phi-\frac34\zeta(3)\end{align}$$ -These together with a known value for $\operatorname{Li}_3\left(\phi^{-2}\right)$ imply $(1)$. - -REPLY [5 votes]: I'll try to prove Tito Piezas III's (+1) neat conjecture : -$$\tag{1}32\;\Re\operatorname{Li}_3\left(i\,\phi^k\right)\stackrel{\color{blue}?}=\operatorname{Li}_3\left(\phi^{-4k}\right)-4\operatorname{Li}_3\left(\phi^{-2k}\right)+10\,k\operatorname{Li}_3\left(\phi^{-2}\right)-\frac{4k^3+5k}{6}\,\ln^3(\phi^2)-8k\,\zeta(3)$$ -The classical identity $\;\displaystyle\frac {\operatorname{Li}_3(x^2)}4=\operatorname{Li}_3(x)+\operatorname{Li}_3(-x)\;$ (c.f. 
Lewin $1981$ "Polylogaritms and associated functions" p.$154$) allows to rewrite the two first terms at the right as : -$$\tag{2}32\;\Re\operatorname{Li}_3\left(i\,\phi^k\right)\stackrel{\color{blue}?}=4\operatorname{Li}_3\left(-\phi^{-2k}\right)+10\,k\operatorname{Li}_3\left(\phi^{-2}\right)-\frac{4k^3+5k}{6}\,\ln^3(\phi^2)-8k\,\zeta(3)$$ -and gives further : -$\;\dfrac {\operatorname{Li}_3\left(-\phi^{2k}\right)}4=\operatorname{Li}_3\left(i\,\phi^k\right)+\operatorname{Li}_3\left(-i\,\phi^k\right)=2\Re\operatorname{Li}_3\left(i\,\phi^k\right)\;$ that is : -$$\tag{3}4\operatorname{Li}_3\left(-\phi^{2k}\right)\stackrel{\color{blue}?}=4\operatorname{Li}_3\left(-\phi^{-2k}\right)+10\,k\operatorname{Li}_3\left(\phi^{-2}\right)-\frac{4k^3+5k}{6}\,\ln^3(\phi^2)-8k\,\zeta(3)$$ -Another usual identity is $\;\displaystyle \operatorname{Li}_3(-x)-\operatorname{Li}_3(-1/x)=-\zeta(2)\log(x)-\frac{\ln^3(x)}6\;$ (same Lewin page) -applied to $\,x:=\phi^{2k}\,$ $(3)$ becomes : -\begin{align} --8k\zeta(2)\ln(\phi)-\frac{4k^3\ln^3(\phi^{2})}6\stackrel{\color{blue}?}=&10\,k\operatorname{Li}_3\left(\phi^{-2}\right)-\frac{4k^3+5k}{6}\,\ln^3(\phi^2)-8k\,\zeta(3)\\ --8k\zeta(2)\ln(\phi)+\frac{5k\ln^3(\phi^{2})}6+8k\,\zeta(3)\stackrel{\color{blue}?}=&10\,k\operatorname{Li}_3\left(\phi^{-2}\right)\\ -\tag{4}\operatorname{Li}_3\left(\phi^{-2}\right)=&\frac 45(\zeta(3)-\zeta(2)\ln(\phi))+\frac 23{\ln^3(\phi)}\\ -\end{align} -This last identity is proved in Lewin's $1991$ book (page $2$ with $\,\rho=\dfrac 1{\phi}\,$) and may be compared with alpha's evaluation knowing that $\,\operatorname{csch}^{-1}(2)=\ln(\phi)$.<|endoftext|> -TITLE: Need to find the ellipse of maximum area inscribed in a semicircle. -QUESTION [16 upvotes]: An ellipse inscribed in a fixed semi circle touches the semi-circular arc at two distinct points and also touches the bounding diameter. Its major axis is parallel to the bounding diameter. -When does the ellipse have the maximum possible area? What is its eccentricity $e$ in that case? - -I managed to solve it using a coordinate based approach combined with some calculus and obtained the correct answer, that is, when $e=\sqrt{\dfrac 23}$. -But that solution was too boring and time-consuming, and I don't think that it was what the examiner had in his mind, since this is a question from KVPY 2014 SB which means it is intended to be solved in like, 2 minutes tops on a single piece of paper approximately $200\space cm^2$ in area. -Can anyone please help me find a quick, Exact geometric method to solve this question? -EDIT: The paper area condition seems too restrictive so any synthetic method would suffice. - -REPLY [9 votes]: Stretch the original figure by a factor $\sqrt{3}$ in vertical direction. The half circle $H$ then becomes a half ellipse $E$, and the isosceles right triangle $ABC$ with $C=(0,\sqrt{2})$ transforms into the equilateral triangle $\Delta:=ABD$ with $D=(0,\sqrt{6})$. The oblique legs of $\Delta$ are tangent to $E$ at their midpoints $M$ and $N$. We now solve the analogous problem with respect to $E$. I claim that the largest (i.e., largest with respect to area) ellipse inscribed in $E$ is the incircle $I$ of $\Delta$. Proof: It is well known that the largest ellipse contained in the equilateral triangle $\Delta$ is $I$. As all feasible ellipses for the modified problem are subsets of $\Delta$ they all have an area $\leq {\rm area}(I)$. Since $I$ itself is a feasible ellipse the claim follows. 
- -The radius of $I$ computes to $$r={1\over3}|OD|={\sqrt{6}\over3}\ .$$ Returning to the original problem we see that the largest ellipse inscribed in $H$ has semiaxes -$$a=r={\sqrt{6}\over3},\quad b={r\over\sqrt{3}}={\sqrt{2}\over3}\ ,$$ -and touches the semicircle at the points $\bigl(\pm{1\over\sqrt{2}}, \>{1\over\sqrt{2}}\bigr)$. The eccentricity of this ellipse comes to $$e={\sqrt{a^2-b^2}\over a}=\sqrt{{2\over3}}\ .$$<|endoftext|> -TITLE: Is the quadric $3$-fold $v^2 + w^2 + x^2 + y^2 + z^2 = 0$ isomorphic to $P^3$? -QUESTION [7 upvotes]: The subset of projective $4$-space given by $5$-tuples $[v:w:x:y:z]$ with $v^2 + w^2 + x^2 + y^2 + z^2 = 0$ is birational to projective $3$-space. I think it has the same cohomology as projective $3$-space. Is it actually isomorphic to projective $3$-space? -In other words, are there homogeneous polynomials $v(a,b,c,d), w(a,b,c,d), x(a,b,c,d), y(a,b,c,d), z(a,b,c,d)$ whose squares add to $0?$ - -REPLY [4 votes]: Let $Q$ be the quadric 3-fold in $\mathbb{P}^4$. By the adjunction formula, $K_Q=\mathcal{O}_Q(-5+2)=\mathcal{O}_Q(-3)$, and $\text{deg}\mathcal{O}_Q(-3)=2\times(-3)=-6$. On the other hand, $K_{\mathbb{P}^3}=\mathcal{O}_{\mathbb{P}^3}(-4)$ and its degree is $-4$. So $Q$ and $\mathbb{P}^3$ are not isomorphic.<|endoftext|> -TITLE: Number of 5 digit numbers $< 40,000$ -QUESTION [6 upvotes]: The numbers to be used are: 2, 3, 4, 4, 5 -The way I approached this is: -Total number of combinations possible is: -$$\frac{5!}{2!}$$ -Total number of combinations starting with 4: -$$4!$$ -Total number of combinations starting with 5: -$$\frac{4!}{2!}$$ -$\therefore$ the total number of numbers $<$ 40,000: -$$\frac{5!}{2!}-\Big(4!+\frac{4!}{2!}\Big)$$ -I came across this question and I don't have access to the solution. -I'm not confident that this is correct. - -REPLY [9 votes]: To double-check the answer, you can use brute force: -$ python ->>> sum(sorted(str(x)) == sorted(str(23445)) for x in range(40000)) -24<|endoftext|> -TITLE: Find the value of $\sin(\frac{1}{4}\arcsin\frac{\sqrt{63}}{8})$ -QUESTION [6 upvotes]: Find the value of $\sin(\frac{1}{4}\arcsin\frac{\sqrt{63}}{8})$ - -Let $\sin(\frac{1}{4}\arcsin\frac{\sqrt{63}}{8})=x$ -$\arcsin\frac{\sqrt{63}}{8}=4\arcsin x$ -$\arcsin\frac{\sqrt{63}}{8}=\arcsin\left[(4x\sqrt{1-x^2})(2x^2-1)\right]$ -$\frac{\sqrt{63}}{8}=(4x\sqrt{1-x^2})(2x^2-1)$ -$\frac{63}{64}=16x^2(1-x^2)(2x^2-1)^2$ -Let $x^2=t$ -$\frac{63}{64}=16t(1-t)(2t-1)^2$ -$64t^4-128t^3+80t^2-16t+\frac{63}{64}=0$ -Now solving this fourth degree equation is getting really difficult, even after using the rational root theorem. -Is there a better method? The answer given is $x=\frac{\sqrt2}{4}$. Please help me.
- -REPLY [12 votes]: Let -$$x = \arcsin{\frac{\sqrt{63}}{8}} = \arccos{\frac18} $$ -Use a succession of half-angle formulae: -$$\sin{\frac{x}{4}} = \sqrt{\frac{1-\cos{(x/2)}}{2}} = \sqrt{\frac{1-\sqrt{\frac{1+\cos{(x)}}{2}} }{2}} $$ -In this case, $\cos{x} = 1/8$ so that -$$\sin{\left (\frac14 \arcsin{\frac{\sqrt{63}}{8}} \right )} = \sqrt{\frac{1-\sqrt{\frac{9/8}{2}} }{2}} = \frac1{2 \sqrt{2}}$$<|endoftext|> -TITLE: A coordinate free book on linear and multilinear algebra defining determinants using exterior algebra -QUESTION [6 upvotes]: I would like to find an advanced introduction to linear and multilinear algebra that is -1) Coordinate free -2) Uses tensor products and exterior algebras to define determinants -3) DOES NOT assume a previous course in linear algebra, but only assumes some mathematical maturity and perhaps a little abstract algebra like group and field definitions and basic theorems. And definitely DOES NOT assume ANYTHING about determinants, so defines them from scratch. -Thanks in advance. - -REPLY [3 votes]: S. Winitzki, "Linear Algebra via Exterior Products"<|endoftext|> -TITLE: Lipschitz Integral for $a=0$ -QUESTION [5 upvotes]: I knew that this, $$\displaystyle{\int_0^\infty e^{-ax}J_0(bx)dx=\frac{1}{\sqrt{a^2+b^2}}},$$ holds for $a>0$ but, in an exercise from Arfken, it said that this holds for $a\geq0$. -How can I prove that? - -REPLY [2 votes]: Another way. We know that: -$$ J_0(x) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\exp\left(i x \sin(t)\right)\,dt \tag{1}$$ -hence by assuming $a>0$ we may apply Fubini's theorem to get: -$$ \begin{eqnarray*}\int_{0}^{+\infty}e^{-ax}J_0(x)\,dx &=& \frac{1}{2\pi}\int_{-\pi}^{\pi}\int_{0}^{+\infty}\exp\left(i x \sin(t)-ax\right)\,dx\,dt\\&=&\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{dt}{a-i\sin(t)}\\&=&\frac{1}{\pi}\int_{0}^{\pi}\frac{a\,dt}{a^2+\sin^2(t)}\\&=&\frac{2a}{\pi}\int_{0}^{\pi/2}\frac{dt}{a^2+\sin^2(t)}\\&=&\frac{2a}{\pi}\int_{0}^{+\infty}\frac{du}{a^2(1+u^2)+u^2}\\&=&\color{red}{\frac{1}{\sqrt{1+a^2}}}\tag{2}\end{eqnarray*}$$ -through the substitution $t=\arctan(u)$. Now $J_0(x)$ is not a Lebesgue-integrable function over $\mathbb{R}^+$, since $|J_0(x)|$ decays like $\frac{1}{\sqrt{x}}$, but it is improperly Riemann-integrable over $\mathbb{R}^+$. So assuming that the given identity is stated "in the Riemann way", $(2)$ holds also for $a=0$. The two-parameter identity of yours can be deduced from $(2)$ through a simple change of variable.<|endoftext|> -TITLE: Six x’s have to be placed in the squares in the adjacent figure, such that each row contains at least one x. This can be done in -QUESTION [5 upvotes]: Question is - - -Six x’s have to be placed in the squares in the adjacent figure, such - that each row contains at least one x. This can be done in how many ways? - -Due to the limit on the number of x's, I find this question very difficult to solve. -I tried representing the figure as $$_,_|_,_,_,_|_,_$$ -And 3 x's are already used (one per row) and we are left with 3 more x's, but I'm unable to proceed further. How to tackle such problems? - -REPLY [2 votes]: Here is an approach via generating functions. Consider the more general problem of placing $r$ x's in the figure. Let $a_r$ be the number of ways this can be done, subject to the constraint of having at least one x in each row. Then the generating function for $\{a_r\}$ is -$$f(x) = [(1+x)^2-1]^2 \cdot [(1+x)^4-1] = x^8+8 x^7+26 x^6+44 x^5+40 x^4+16 x^3$$ -The answer to the original problem is the coefficient of $x^6$, i.e. $26$.
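-A quick machine check of that coefficient, in the same spirit as the brute-force checks elsewhere in this thread (a minimal SymPy sketch; the symbol name is my own):
-    import sympy as sp
-
-    x = sp.symbols('x')
-    # one factor ((1+x)^rowsize - 1) per row, so each row gets at least one x
-    f = ((1 + x)**2 - 1)**2 * ((1 + x)**4 - 1)
-    print(sp.expand(f))              # x**8 + 8*x**7 + 26*x**6 + 44*x**5 + 40*x**4 + 16*x**3
-    print(sp.expand(f).coeff(x, 6))  # 26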
<|endoftext|> -TITLE: Convergence of the series $\sum\limits_{n=1}^{\infty}\frac{\cos(\alpha\sqrt{n})}{n^q}$ -QUESTION [6 upvotes]: Determine for which real values of $\alpha$ and $q$ the following series converges: $\sum\limits_{n=1}^{\infty}\frac{\cos(\alpha\sqrt{n})}{n^q}$? -So far I managed to prove that 1) for $q\leqslant0,\alpha\in\mathbb{R}$ the series diverges; 2) for $q>1,\alpha\in\mathbb{R}$ the series converges absolutely; 3) for $0<q\leqslant\frac{1}{2}$. -So as D. Thomine expected, the series converges for $q > \frac{1}{2}$ and $\alpha \in \mathbb{R}\setminus \{0\}$. The argument given in the comment shows (when the details are carried out) that the series is divergent for $q \leqslant \frac{1}{2}$.<|endoftext|> -TITLE: How to evaluate $\sum_{n=1}^m 2^n \arctan 2^n \theta$ -QUESTION [6 upvotes]: I need to evaluate $$\sum_{n=1}^m 2^n \arctan 2^n \theta$$ as a function of $m$ and $\theta$. All I've done so far is write out the series explicitly: -$$\sum_{n=1}^m 2^n \arctan 2^n \theta = 2 \arctan 2\theta + 4\arctan 4\theta + 8\arctan 8\theta + \cdots + 2^m \arctan 2^m \theta$$ -and I initially considered pairing every two terms up to use the $\arctan x + \arctan y$ trick, but it doesn't work because each $\arctan$ term has a different coefficient. - -REPLY [6 votes]: Not for nothing, but just in case the OP really wants to evaluate -$$\sum_{n=1}^m 2^n \tan{2^n \theta} $$ -then use the fact that -$$\cot{x}-2 \cot{2 x} = \tan{x} $$ -and let $x=2^n \theta$. In this case, we get a telescoping sum with the result -$$\sum_{n=1}^m 2^n \tan{2^n \theta} = 2\cot{2 \theta} - 2^{m+1} \cot{2^{m+1} \theta}$$<|endoftext|> -TITLE: Check if a point lies in a circle defined by three other points. -QUESTION [6 upvotes]: I'm learning Computational Geometry, and need to check whether a point p lies inside a circle defined by a triangle (made by 3 points $a,b,c$, in counterclockwise order). -A very convenient method is to check the following determinant's sign -$D=\begin{vmatrix} -a_x & a_y & a_x^2+a_y^2 & 1 \\ -b_x & b_y & b_x^2+b_y^2 & 1 \\ -c_x & c_y & c_x^2+c_y^2 & 1 \\ -p_x & p_y & p_x^2+p_y^2 & 1 -\end{vmatrix}$ -When $D>0$, $p$ is inside the circle defined by $a,b,c$; it is outside when $D<0$, and right on the circle when $D=0$. -And when $D=0$, it's clear the system of equations of $D$ has a non-trivial solution. -That means a circle determined by that solution goes through $a,b,c,p$ simultaneously. -But how can one get the inside/outside information just by checking the sign of $D$? -How to prove it? - -REPLY [3 votes]: I finally got the answer. -I see the question about how to determine the equation of a circle defined by three points. Zaz's answer mentioned this page, which can solve my problem indirectly. -Notice that: -$D = -p_x M_{41} + p_y M_{42} - (p_x^2 +p_y^2) M_{43} + M_{44}$ -and if we regard $p$ as a moving point, then $D=0$ is actually an equation of a circle: -$(p_x+\frac{1}{2}\frac{M_{41}}{M_{43}})^2+(p_y-\frac{1}{2}\frac{M_{42}}{M_{43}})^2=(\frac{1}{2}\frac{M_{41}}{M_{43}})^2+(\frac{1}{2}\frac{M_{42}}{M_{43}})^2+\frac{M_{44}}{M_{43}}$ -i.e. the circle is located at $(x_0,y_0)=(-\frac{1}{2}\frac{M_{41}}{M_{43}},\frac{1}{2}\frac{M_{42}}{M_{43}})$ -with radius $r_0$ given by $r_0^2=x_0^2+y_0^2+\frac{M_{44}}{M_{43}}$. -When $D>0$ the circle equation becomes $(x-x_0)^2+(y-y_0)^2<r_0^2$, i.e. $p$ lies inside the circle when $D>0$, and vice versa.
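-Here is the determinant test transcribed into code as a sanity check (a minimal sketch; the function name and the sample points are my own):
-    import numpy as np
-
-    def in_circle_det(a, b, c, p):
-        # rows (x, y, x^2 + y^2, 1); a, b, c must be in counterclockwise order
-        m = np.array([[q[0], q[1], q[0]**2 + q[1]**2, 1.0] for q in (a, b, c, p)])
-        return np.linalg.det(m)
-
-    # unit circle through (1,0), (0,1), (-1,0), listed counterclockwise
-    print(in_circle_det((1, 0), (0, 1), (-1, 0), (0, 0)))  # 2.0 > 0: inside
-    print(in_circle_det((1, 0), (0, 1), (-1, 0), (2, 0)))  # -6.0 < 0: outside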
-Note: According to Cramer's rule, $(-\frac{M_{41}}{M_{43}},\frac{M_{42}}{M_{43}},\frac{M_{44}}{M_{43}})$ is exactly the solution of the equation -$ -\begin{pmatrix} -a_x & a_y & 1 \\ -b_x & b_y & 1 \\ -c_x & c_y & 1 -\end{pmatrix} -\boldsymbol{x}=\begin{pmatrix}a_x^2+a_y^2 \\ b_x^2+b_y^2 \\c_x^2+c_y^2\end{pmatrix} -$ -And these 3 parameters perfectly define a circle passing through $a,b,c$. One can easily verify from the equation above that $D=0$ when $p$ is $a$, $b$ or $c$.<|endoftext|> -TITLE: If $n$ is a product of primes, what is the number of divisors? -QUESTION [5 upvotes]: Let $n=p_1p_2...p_k$. Then the number of divisors is what? -I assumed it was $1+ \binom k1+ \binom k2 + \binom k3 + ... + \binom kk=2^k$ -Is this correct? -Prove that the number of divisors is odd $\iff$ $n$ is a perfect square - -REPLY [6 votes]: It depends. Your answer -$$ \sum_{m = 0}^k \binom km = 2^k $$ -is correct, if all $p_i$ are distinct. Note that this makes a difference: $n = 2 \cdot 2$ has $(1,2,4)$ as divisors (namely $3$ of them), whereas $6 = 2 \cdot 3$ has $(1,2,3,6)$, so $4 = 2^2$ of them. -In general, if we have -$$ n = \prod_i p_i^{\alpha_i} $$ -with distinct primes $p_i$, for each $i$, we can choose $0, \ldots, \alpha_i$ factors $p_i$ in a divisor of $n$, so altogether we have -$$ \prod_i (\alpha_i + 1) $$ -divisors. Note that this number is odd iff all $\alpha_i$ are even. But then -$$ n = \prod_i p_i^{\alpha_i} = \prod_i p_i^{2\beta_i} = \left(\prod_i p_i^{\beta_i}\right)^2. $$<|endoftext|> -TITLE: About the decimal period of $\frac 17$ -QUESTION [7 upvotes]: It is easy to verify that $$\frac 17=\frac {142857}{999999}$$ where $142857$ is the decimal period of $\frac 17$. -This period, which has six different digits, has the property that when multiplied by $1,2,3,4,5,6$, the respective products have the same six different digits in different positions (clearly, multiplying by $7$ must give $999999$). -Is $ 142857 $ the only six-digit number that has this property? - -REPLY [6 votes]: This will be true for the period of $1/n$ iff the order of $10 \bmod n$ is $6$, but you'll have to consider different sets of multipliers. -For $n<100$, the examples are $n=7, 13, 21, 39, 63, 77, 91, 97$. (*) -For $n=13$, the number is $076923$ (let's accept this as having six digits) and there are two cycles: one for the multipliers $1,3,4,9,10,12$ and one for $2,5,6,7,8,11$. -(*) Apparently, there are only $53$ examples; see A059892 and A226477.<|endoftext|> -TITLE: Prove that $\frac{1}{a(a-b)(a-c)} +\frac{1}{b(b-c)(b-a)} +\frac{1}{c(c-a)(c-b)} =\frac{1}{abc}$ for all sets of distinct nonzero numbers $a,b,c$. -QUESTION [6 upvotes]: Prove that $$\cfrac{1}{a(a-b)(a-c)} +\cfrac{1}{b(b-c)(b-a)} - +\cfrac{1}{c(c-a)(c-b)} =\cfrac{1}{abc}$$ for all sets of distinct nonzero numbers $a,b,c $. - -Now my question is not about how to solve this but rather why the technique which my book shows works.
-Technique: - -Rather than showing that the left side equals $\cfrac{1}{abc}$, we show - that $$\cfrac{1}{a(a-b)(a-c)} +\cfrac{1}{b(b-c)(b-a)} - +\cfrac{1}{c(c-a)(c-b)} -\cfrac{1}{abc}=0 $$ -Writing the left side with the common denominator - $abc(a-b)(a-c)(b-c)$, we have - $$\cfrac{bc(b-c)-ac(a-c)+ab(a-b)-(a-b)(a-c)(b-c)}{abc(a-b)(a-c)(b-c)}=0$$ -We can show that this is $0$ by showing that the numerator is $0$. We - can do this by looking at the numerator as a polynomial in $c$, meaning - let $a$ and $b$ be constants and $c$ be a variable, or - $$f(c)=bc(b-c)-ac(a-c)+ab(a-b)-(a-b)(a-c)(b-c)$$ -Since $f(c)$ is a quadratic polynomial, if we can show that this - quadratic has $3$ different roots, then $f(c)=0$ for all $c$. - -The proof ends by showing that $f(a)=0$, $f(b)=0$ and $f(0)=0$. -Now, while I can understand why a quadratic with $3$ roots is the zero polynomial, I can't understand why we can treat the numerator as a polynomial and so treat $a,b$ as constants and $c$ as a variable. -Furthermore when we let it be a polynomial we also let $c=a=b$ but the problem in the beginning states that $\{a,b,c\}$ is all sets of distinct nonzero numbers, so I thought that we can't let $c=a=b$ by definition. -So can someone explain in depth why this is legit to do? - -REPLY [3 votes]: For a proof that might satisfy a thirst for more symmetry, but which uses a very similar technique, consider the equivalent identity $$\cfrac{bc}{(a-b)(a-c)} +\cfrac{ac}{(b-c)(b-a)} +\cfrac{ab}{(c-a)(c-b)} =1.$$ Let $f(x)$ be the polynomial function defined by $$f(x) = \cfrac{(x-b)(x-c)}{(a-b)(a-c)} +\cfrac{(x-c)(x-a)}{(b-c)(b-a)} +\cfrac{(x-a)(x-b)}{(c-a)(c-b)}.$$ Observe that $f$ is a quadratic polynomial with $f(a)=f(b)=f(c)=1$. It follows that $f(x)-1$ is a quadratic with three roots, so $f(x)-1=0$ identically. Now compare constant terms of the identity $f(x)=1$.<|endoftext|> -TITLE: $f$ holomorphic on $D\setminus \{0\}$ and takes no values in $(-\infty,0],$ then $0$ removable -QUESTION [5 upvotes]: If $f$ is holomorphic on $D\setminus \{0\}$ and takes no values in $(-\infty,0]$ then $0$ is a removable singularity. - -I thought to prove this by elimination, but I can't really tell anything about the behavior of $f$ around $0$. How can one translate the information that $f$ omits the semi-open interval $(-\infty,0]$? - -REPLY [10 votes]: $G := \Bbb C \setminus (-\infty, 0]$ can be mapped conformally onto -the unit disk. That is generally true for all simply-connected -domains other than $\Bbb C$ itself, due to the Riemann mapping theorem. In this particular case -the mapping can be described explicitly as -$$ - \varphi(z) = \frac{\sqrt z - 1}{\sqrt z + 1} -$$ -where $\sqrt z$ is the holomorphic branch mapping $G$ onto the right -halfplane. -Then $g := \varphi \circ f$ is holomorphic in $D\setminus \{0\}$ -with values in the unit disk, i.e. $g$ is bounded. -It follows that $g$ has a removable singularity at $z= 0$, -and then the same holds for $f$. -One could also use the "Great Picard Theorem" which states that -a holomorphic function takes on all possible complex values, with at most a single exception, infinitely often in a punctured neighbourhood of an -essential singularity. But that is an "advanced" result in -complex analysis.<|endoftext|> -TITLE: Fractals using just modulo operation -QUESTION [49 upvotes]: Let us calculate the remainder after division of $27$ by $10$. -$27 \equiv 7 \pmod{10}$ -We have $7$. So let's calculate the remainder after division of $27$ by $7$.
-$ 27 \equiv 6 \pmod{7}$ -Ok, so let us continue with $6$ as the divisor... -$ 27 \equiv 3 \pmod{6}$ -Getting closer... -$ 27 \equiv 0 \pmod{3}$ -Good! We have finally reached $0$ which means we are unable to continue the procedure. -Let's make a function that counts the modulo operations we need to perform until we finally arrive at $0$. -So we find some remainder $r_{1}$ after division of some $a$ by some $b$, then we find some remainder $r_{2}$ after division of $a$ by $r_{1}$ and we repeat the procedure until we find such index $i$ that $r_{i} = 0$. -Therefore, let $$ M(a, b) = i-1$$ -for $a, b \in \mathbb{N}, b \neq 0 $ -(I like to call it "modulity of a by b", thence M) -For our example: $M(27, 10) = 3$. -Notice that $M(a, b) = 0 \Leftrightarrow b|a $ (this is why $i-1$ feels nicer to me than just $i$) -Recall what happens if we put a white pixel at such $(x, y)$ that $y|x$: - -This is also the plot of $M(x, y) = 0$. -(the image is reflected over x and y axes for aesthetic reasons. $(0, 0)$ is exactly in the center) -What we see here is the common divisor plot that's already been studied extensively by prime number researchers. -Now here's where things start getting interesting: -What if we put a pixel at such $(x, y)$ that $M(x, y) = 1$? - -Looks almost like the divisor plot... but get a closer look at the rays. It's like copies of the divisor plot are growing on each of the original line! -How about $M(x, y) = 2$? - -Copies are growing on the copies! -Note that I do not overlay any of the images, I just follow this single equation. -Now here is my favorite. -Let us determine luminosity ($0 - 255$) of a pixel at $(x, y)$ by the following equation: -$$255 \over{ M(x,y) + 1 }$$ -(it is therefore full white whenever $y$ divides $x$, half-white if $M(x, y) = 1$ and so on) - -The full resolution version is around ~35 mb so I couldn't upload it here (I totally recommend seeing this in 1:1): -https://drive.google.com/file/d/0B_gBQSJQBKcjakVSZG1KUVVoTmM/view?usp=sharing -What strikes me the most is that some black stripes appear in the gray area and they most often represent prime number locations. -Trivia - -The above plot with and without prime numbers marked with red stripes: -http://i.imgur.com/E9YIIbd.png -http://i.imgur.com/vDgkT8j.png -The above plot considering only prime $x$: - -Formula: $255 \over{ M(p_{x},y) }$ (note I do not add $1$ to the denominator because it would be full white only at $y$ equal $1$ or the prime. Therefore, the pixel is fully white when $p_{x}$ mod $y = 1$ ) -Full 1:1 resolution: https://drive.google.com/file/d/0B_gBQSJQBKcjTWMzc3ZHWmxERjA/view?usp=sharing -Interestingly, these modulities form a divisor plot of their own. -Notice that for $ M(a, b) = i-1, r_{i-1}$ results in either $1$ or a divisor of $a$ (which is neither $1$ nor $a$). -I put a white pixel at such $(x, y)$ that for $M(x, y) = i - 1$, it is true that $r_{i-1}\neq 1 \wedge r_{i-1} | x$ (the one before last iteration results in a remainder that divides $x$ and is not $1$ (the uninteresting case)) -http://i.imgur.com/I85rlH5.png -It is worth our notice that growth of $M(a, b)$ is rather slow and so if we could discover a rule by which to describe a suitable $b$ that most often leads to encountering a proper factor of $a$, we would discover a primality test that works really fast (it'd be $O(M(a, b))$ because we'd just need to calculate this $r_{i-1}$). -Think of $M'(a, b)$ as a function that does not calculate $a$ mod $b$ but instead does $M(a, b)$ until a zero is found. 
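-For reference, the counting procedure is tiny in code; a minimal Python sketch of $M$ and $M'$ (the function names below are just labels of mine, and the plotting is omitted):
-
-def modulity(a, b):
-    """M(a, b) = i - 1, where r_i is the first zero remainder."""
-    r = a % b
-    steps = 0              # stays 0 exactly when b divides a
-    while r != 0:
-        b = r
-        r = a % b
-        steps += 1
-    return steps
-
-def modulity_prime(a, b):
-    """M'(a, b): iterate M itself (instead of mod) until a zero is found."""
-    r = modulity(a, b)
-    steps = 0
-    while r != 0:          # terminates, since M(a, b) < b at every step
-        b = r
-        r = modulity(a, b)
-        steps += 1
-    return steps
-
-print(modulity(27, 10))    # 3, matching the worked example above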
-These two are plots of $M'''(x, y)$ with and without primes marked: -http://i.imgur.com/gE0Bvwg.png -http://i.imgur.com/vb5YxVP.png -Plot of $M(x, 11)$, enlarged 5 times vertically: -http://i.imgur.com/K2ghJqe.png -Can't notice any periodicity in the first 1920 values even though it's just 11. -For comparison, plot of $x$ mod $11$ (1:1 scale): -http://i.imgur.com/KM6SCF3.png -As it's been pointed out in the comments, subsequent iterations of $M(a, b)$ look very much like Euclidean algorithm for finding the greatest common divisors using repeated modulo. A strikingly similar result can be obtained if for $(x, y)$ we plot the number of steps of $gcd(x, y)$: - -I've also found similar picture on wikipedia: - -This is basically the plot of algorithmic efficiency of $gcd$. -Somebody even drew a density plot here on stackexchange. -The primes, however, are not so clearly visible in GCD plots. Overall, they seem more orderly and stripes do not align vertically like they do when we use $M(a, b)$ instead. -Here's a convenient comparative animation between GCD timer (complexity plot) and my Modulity function ($M(x, y)$). Best viewed in 1:1 zoom. $M(x, y)$ appears to be different in nature from Euclid's GCD algorithm. - - -Questions - -Where is $M(a, b)$ used in mathematics? -Is it already named somehow? -How could one estimate growth of $M(a, b)$ with relation to both $a$ and $b$, or with just $a$ increasing? -What interesting properties could $M(a, b)$ possibly have and could it be of any significance to number theory? - -REPLY [7 votes]: Some additional notes, which I cannot add to my previous answer, because apparently I am close to a 30K character limit and MathJax complains. - -Addendum -The fundamental pattern which emerges in $\phi(n)$ then, is that of the Farey series dissection of the continuum. This pattern is naturally related to Euclid's Orchard. -Euclid's Orchard is basically a listing of the Farey sequence of (all irreducible) fractions $p_k/q_k$ for the unit interval, with height equal to $1/q_k$, at level $k$: - -Euclid's Orchard on [0,1]. -In turn, Euclid's Orchard is related to Thomae's Function and to Dirichlet's Function: - -Thomae's Function on [0,1]. -The emergence of this pattern can be seen easier in a combined plot, that of the GCD timer and Euclid's Orchard on the unit interval: - -Farey series dissection of the continuum of [0,1]. -Euclid's Orchard is a fractal. It is the most "elementary" fractal in a sense, since it characterizes the structure of the unit interval, which is essential in understanding the continuum of the real line. -Follow some animated gifs which show zoom-in at specific numbers: - -Zoom-in at $\sqrt{2}-1$. - -Zoom-in at $\phi-1$. -The point of convergence of the zoom is marked by a red indicator. -White vertical lines which show up during the zoom-in, are (a finite number of open covers of) irrationals. They are the irrational limits of the convergents of the corresponding continued fractions which are formed by considering any particular tower-top path that descends down to the irrational limit. -In other words, a zoom-in such as those shown, displays some specific continued fraction decompositions for the (irrational on these two) target of the zoom. -The corresponding continued fraction decomposition (and its associated convergents) are given by the tops of the highest towers, which descend to the limits. 
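-To make the "tower-top paths" concrete, the convergents of a zoom target can be listed directly; a minimal Python sketch using the standard continued fraction recurrences (it assumes the target is irrational, so the fractional part never vanishes):
-
-from math import floor, sqrt
-
-def convergents(x, n):
-    """First n continued fraction convergents (p, q) of x."""
-    p0, q0 = 1, 0
-    p1, q1 = floor(x), 1
-    out = [(p1, q1)]
-    for _ in range(n - 1):
-        x = 1 / (x - floor(x))       # invert the fractional part
-        a = floor(x)
-        p0, p1 = p1, a * p1 + p0     # p_k = a_k * p_{k-1} + p_{k-2}
-        q0, q1 = q1, a * q1 + q0
-        out.append((p1, q1))
-    return out
-
-print(convergents(sqrt(2) - 1, 6))   # (0,1), (1,2), (2,5), (5,12), (12,29), (29,70)
-
-These denominators $1, 2, 5, 12, 29, 70, \ldots$ are exactly the heights of the successive "highest towers" one passes when zooming in at $\sqrt 2 - 1$.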
-
-Addendum #2 (for your last comment to my previous answer)
-For the difference between the two kinds of graphs you are getting - because I am fairly certain you are still confused - what you are doing produces two different kinds of graphs. If you use $M(x,y)$ at its natural value, you are forcing a smooth graph like the GCD timer. If you start modifying $M(x,y)$ or set it to other values (e.g. $M(x,y)=k$, or if you calculate $M(x,p^k)$), you will begin reproducing vertical artifacts which are characteristic of $\phi$. And that is because, as you correctly observe, doing so you start dissecting the horizontal continuum as well (in the above case according to $p^k$, etc). In this case, the appropriate figure which reveals the vertical cuts would be like the following:
-
-
-Appendix:
-Some Maple procedures for numerical verification of some of the theorems and for the generation of some of the figures.
-Generate fig.1:
-
-with(numtheory): with(plots): N:=10000;
-liste:=[seq([n,phi(n)],n=1..N)]:
-p:=plot(liste,style=point, symbol=POINT,color=BLACK): display(p); # generates fig.1
-
-Generate fig.2:
-
-q:=plot({x-1,x/2,x/3,2*x/3,2*x/5, 4*x/5,4*x/15,8*x/15,2*x/7,3*x/7,
- 4*x/7,6*x/7,8*x/35,12*x/35,16*x/35,24*x/35},x=0..N,color=grey):
-display({p,q}); # p as in example 1.
-
-Generate fig.3:
-
-F:=proc(n) # Farey series
-  local a,a1,b,b1,c,c1,d,d1,k,L;
-  a:=0; b:=1; c:=1; d:=n; L:=[0];
-  while (c < n) do
-    k:=floor((n+b)/d);
-    a1:=c; b1:=d; c1:=k*c-a; d1:=k*d-b;
-    a:=a1; b:=b1; c:=c1; d:=d1;
-    L:=[op(L),a/b];
-  od:
-  L;
-end:
-n:=10;
-for m from 1 to nops(F(n)) do f:=(m,x)->F(n)[m]*x; od:
-q:={}; with(plots):
-for m from 1 to nops(F(n)) do
-  qn:=plot(f(m,x),x=0..10000,color=grey); q:=q union {qn};
-od:
-display(p,q);
-
-Implements Theorem 4.1:
-
-S:=proc(L,N) local LS,k,ub;
-  LS:=nops(L); # find how many arguments are left
-  if LS=1 then floor(log(N)/log(L[LS]));
-  else
-    ub:=floor(log(N)/log(L[LS]));
-    add(S(L[1..LS-1],floor(N/L[LS]^k)),k=1..ub);
-  fi;
-end:
-
-Brute force approach for Theorem 4.1:
-
-search3:=proc(N,a1,a2,a3,s) local cp,k1,k2,k3;
-  cp:=0;
-  for k1 from 1 to s do for k2 from 1 to s do for k3 from 1 to s do
-    if a1^k1*a2^k2*a3^k3 <= N then cp:=cp+1; fi;
-  od; od; od;
-  cp;
-end:
-
-Verify Theorem 4.1:
-
-L:=[5,6,10]; N:=1738412;
-S(L,N);               # 37
-s:=50;                # maximum exponent for brute force search
-search3(N,5,6,10,s);  # 37, identical
-
-Times GCD algorithm:
-
-reduce:=proc(m,n) local T,M,N,c;
-  M:=m/gcd(m,n);  # GCD(k*m,k*n)=GCD(m,n)
-  N:=n/gcd(m,n);
-  c:=0;
-  while M>1 do
-    T:=M; M:=N; N:=T;  # flip
-    M:=M mod N;        # reduce
-    c:=c+1;
-  od;
-  c;
-end:
-
-Generate fig.6:
-
-mmax:=200; nmax:=200; rt:=array(1..mmax,1..nmax);
-for m from 1 to mmax do for n from 1 to nmax do
-  rt[m,n]:=reduce(n,m); # assign GCD steps to array
-od; od;
-n:='n'; m:='m'; rz:=(m,n)->rt[m,n]; # convert GCD steps to function
-p:=plot3d(rz(m,n),
- m=1..mmax,n=1..nmax,
- grid=[mmax,nmax],
- axes=NORMAL,
- orientation=[-90,0],
- shading= ZGREYSCALE,
- style=PATCHNOGRID,
- scaling=CONSTRAINED): display(p);<|endoftext|>
-TITLE: Description of the Universe $V$
-QUESTION [5 upvotes]: For me, the concept "set" seems very ambiguous. This does not satisfy me because sets are used very often in mathematics, and so many questions in mathematics are not definite for me. I want to read an intuitive description of our universe of sets.
-This universe $V$ is often explained in the context of the so-called "von Neumann universe". Wikipedia says that the von Neumann universe is
-
-often used to provide an interpretation or motivation of the axioms of $\mathsf{ZFC}$.
-
-The von Neumann universe is the universe of sets one has at the back of one's mind when speaking about sets.
-Can you explain the von Neumann universe so that I will have an intuition for this universe?
-Here is the thing that disturbs me when I read some explanations of the von Neumann universe: they use ordinals. But ordinals are sets, and using ordinals to explain what sets are is circular somehow.
-I am not speaking of a definition of the concept "set" and I am also not speaking about an axiomatization of $V$. I just want to get an intuition about $V$.
-How would platonists describe the universe?
-Or, more specifically, what would Andreas Blass tell me if I asked him about $V$?
-
-REPLY [8 votes]: If you want to consider the "simple" foundational approach to theories like $\sf ZFC$, then sets are primitive objects and $V$ is a given universe to begin with.
-The axioms of $\sf ZFC$ tell you what sort of properties $V$ and its $\in$ relation satisfy. For example, they tell you there is a set which is inductive, and they tell you that $\in$ is well-founded (as far as $V$ is concerned), and that if $X$ is a member of $V$, then there is a set in $V$ which is the power set of $X$, and so on.
-What the von Neumann hierarchy gives you is the understanding that if $V$ is already given, then we can write this wonderful filtration of $V$ into a very nice hierarchy. Additional theorems like the reflection theorem also tell you more about this hierarchy and its deep connection with the structure of $V$ as a universe of sets.
-But in either case you start with $V$ as a given concept, and a set is just something which belongs to $V$.
-So what intuition can you get on $V$? Frankly, not a whole lot. It's a very complicated object, not to mention that different universes of sets can be very different, so you don't have nearly enough information about $V$ as it is. Does it satisfy this axiom or that axiom? Does it have large cardinals? What is the truth value of the continuum hypothesis in $V$? Are there Suslin trees? Is $V$ a set-generic extension of a smaller universe? All these are questions that $\sf ZFC$ simply cannot answer. So as far as Platonists go, you might get very different answers from one Platonist to another.
-What respite can I offer you instead? I can suggest that you take comfort in the fact that in mathematics the intuition you have initially (or hope to develop "immediately") is almost always wrong. It is through the understanding that this intuition fails us that we learn to work closely with the definitions (and axioms), and slowly we develop some sort of often-ineffable intuition about whatever it is that we work with. This can take several years to accomplish. But it is an extremely rewarding process to slowly realize that you understand what the hell is going on.<|endoftext|>
-TITLE: The finite product of $L^p$ spaces is reflexive ($1<p<\infty$)<|endoftext|>
-TITLE: Examples of group-theoretic results more easily obtained through topology or geometry
-QUESTION [53 upvotes]: Earlier, I was looking at a question here about the abelianization of a certain group $X$. Since $X$ was the fundamental group of a closed surface $\Sigma$, it was easy to compute $X^{ab}$ as $\pi_1(\Sigma)^{ab} = H_1(\Sigma)$, then use the usual machinery to compute $H_1(\Sigma)$. That made me curious about other compelling examples of solving purely (for some definition of 'purely') algebraic questions that are accessible via topology or geometry.
The best example I can think of is the Nielsen-Schreier theorem, which is certainly provable directly but has a very short proof by recasting the problem in terms of the fundamental group of a wedge of circles. Continuing this line of reasoning leads to things like graphs of groups, HNN-extensions, and other bits of geometric group theory.
-What are some other examples, at any level, of ostensibly purely group-theoretic results that have compelling, shorter topological proofs? The areas are certainly closely connected; I'm looking more for what seem like completely algebraic problems that turn out to have completely topological resolutions.
-
-REPLY [5 votes]: One influence of geometry and topology on group theory has been to extend group-theoretic methods. Philip Higgins described to me how he first thought of using groupoids by reading about covering spaces in the book on Homology Theory by Hilton and Wylie, and realising that the account of covering spaces was all about groupoids. You can read about the applications he found in the 1971 book Categories and Groupoids (downloadable); this gives groupoid proofs of subgroup theorems mentioned on this page. Note that groupoids allow notions of fibration and covering morphisms, modelling topological notions.
-An application of higher groupoids to homotopy theory which Loday and I published in 1984 led to a nonabelian tensor product of groups which act on each other in a "compatible" way. A simple example is two normal subgroups $M,N$ of a group $P$. The commutator map $[\,, \,]:M \times N \to P, (m,n) \mapsto mnm^{-1}n^{-1}$ satisfies identities for $[mm',n], [m,nn']$ which make it, not bimultiplicative, but a biderivation. The universal biderivation is written $M \times N \to M \otimes N$, so the commutator map factors through a morphism of groups $\kappa: M \otimes N \to P$. Graham Ellis proved that $M \otimes N$ is finite if $M,N$ are finite.
-There are applications of this tensor product to the homology of groups. For example, if $1 \to R \to F \to P\to 1$ is an exact sequence with $F$ free, then $$H_3(P) \cong \text{Ker} (R \wedge F \to P)$$ where $R \wedge F$ is the quotient of $R \otimes F$ by the relations $r \otimes r =1$ for $r \in R$; and to algebraic topology: for example $$\pi_3S(K(P,1))\cong \text{Ker} (\kappa: P \otimes P \to P),$$ where $S$ is suspension. A number of group theorists have taken up the ideas, particularly on calculating $P \otimes P$ for classes of groups; see this bibliography, which has over 170 items, including a description dating from 1952 of $H_2(P)$ essentially as the kernel of $P \wedge P \to P$.<|endoftext|>
-TITLE: What makes graph automorphisms interesting?
-QUESTION [5 upvotes]: I've completed a short course on graph theory and we never studied graph isomorphisms in depth, but I've seen at least a bit of this covered in most graph theory books I've picked up, and it grabbed my attention.
-Is there any (big?) connection with another field that makes graph automorphisms interesting (besides the trivial 'automorphisms form a group under composition')?
-
-REPLY [4 votes]: For a start, a number of the sporadic simple groups were first discovered as automorphism groups of graphs.
The Higman-Sims group is perhaps the simplest example.<|endoftext|>
-TITLE: Series with a reciprocal of the central binomial coefficient
-QUESTION [7 upvotes]: How can we prove the following identities
-$$\sum_{n=1}^\infty n^{-3}{\binom{2n}n}^{-1}=\pi\operatorname{Cl}_2\left(\frac{2\pi}{3}\right)-\frac{4}{3}\zeta(3)\tag{1}$$
-$$\sum_{n=1}^\infty (n+1)^{-2}{\binom{2n}n}^{-1}=\frac{2\pi^2}{9}-2\pi\operatorname{Cl}_2\left(\frac{2\pi}{3}\right)+\frac{8}{3}\zeta(3)-1\tag{2}$$
-where $\operatorname{Cl}_2(x)$ is the Clausen integral?
-Some identities of this sort are proved in this paper.
-
-REPLY [8 votes]: $(1)$: We want to evaluate $\quad\displaystyle S:=\sum_{m=1}^{\infty} \frac{1}{m^3\binom{2m}{m}}$
-From this answer we obtained :
-$$\tag{1}2(\arcsin(x))^2=\sum_{m=1}^{\infty} \frac{(2x)^{2m}}{m^2\binom{2m}{m}}$$
-Integrating this multiplied by $\,\dfrac 2x\,$ from $\,0$ to $\dfrac 12$ will thus give :
-\begin{align}
-S&=2\sum_{m=1}^{\infty} \frac{1}{m^2\binom{2m}{m}}\int_0^{\frac12}\dfrac{(2x)^{2m}}{x}\,dx\\
-\tag{2}S&=4\int_0^{\frac 12} \frac{(\arcsin(x))^2}x\,dx\\
-\tag{3}S&=4\int_0^{\pi/6} \frac{t^2}{\tan(t)}\,dt\\
-S&=4\left[\left.t^2\log(\sin(t))\right|_{\,0}^{\,\pi/6}-2\int_0^{\pi/6} t\,\log(\sin(t))\,dt\right]\\
-\tag{4}S&=-\frac{\log(2)\pi^2}9-8\int_0^{\pi/6} t\,\log(\sin(t))\,dt\\
-\end{align}
-The Clausen integral verifies :
-$\;\displaystyle\operatorname{Cl}_2(t)'=-\log(2\sin(t/2))\;$ so let's rewrite $(4)$ and use integration by parts of $\operatorname{Cl}_2(t)$ :
-\begin{align}
-S&=-\frac{\log(2)\pi^2}9-\frac 84\int_0^{\pi/3} t\;\log(2\sin(t/2))-t\log(2)\,dt\\
-\tag{5}S&=-2\int_0^{\pi/3} t\;\log(2\sin(t/2))\,dt\\
-&=2\left[t\;\operatorname{Cl}_2(t)\left.\right|_0^{\pi/3}-\int_0^{\pi/3} \operatorname{Cl}_2(t)\,dt\right]\\
-&=\frac{2\pi}3\operatorname{Cl}_2\left(\frac{\pi}3\right)+2\,\left(\operatorname{Cl}_3\left(\frac{\pi}3\right)-\operatorname{Cl}_3\left(0\right)\right)\\
-\end{align}
-Since $\;\displaystyle\operatorname{Cl}_{2n}(x):=\sum_{k=1}^\infty \frac{\sin(k\,x)}{k^{\,2n}},\ \operatorname{Cl}_{2n+1}(x):=\sum_{k=1}^\infty \frac{\cos(k\,x)}{k^{\,2n+1}}\;$ we have indeed $\;\operatorname{Cl}_3(x)'=-\operatorname{Cl}_2(x)$.
-Now $\,\operatorname{Cl}_3(0)=\zeta(3)\,$ and $\,\operatorname{Cl}_3\left(\dfrac{\pi}3\right)=\dfrac{\zeta(3)}3\,$ (prove this using the series for $\operatorname{Cl}_3$) while $\,\operatorname{Cl}_2\left(\dfrac{2\pi}3\right)=\dfrac 23\operatorname{Cl}_2\left(\dfrac{\pi}3\right)\,$ can't be written in simpler form (without using polylogarithms) so that your $(1)$ is indeed right :
-$$\boxed{\displaystyle\sum_{n=1}^\infty \frac 1{n^{3}{\binom{2n}n}}=\pi\operatorname{Cl}_2\left(\frac{2\pi}{3}\right)-\frac{4}{3}\zeta(3)}\tag{6}$$
-
-$(2)$: Concerning $\;\displaystyle \sum_{m=1}^{\infty} \frac{1}{(m+1)^2\binom{2m}{m}}\ $ the link (i.e.
the derivative of $(1)$ multiplied by $\dfrac x2$) gives us :
-$$\tag{7}\frac{2x \arcsin\ x}{\sqrt{1-x^2}}=\sum_{m=1}^{\infty} \frac{(2x)^{2m}}{m\binom{2m}{m}}$$
-The derivative of this (multiplied by $x/2$) will give us the general :
-$$\tag{8}\boxed{\frac {x^2}{1-x^2}+x\frac {\arcsin(x)}{\sqrt{1-x^2}^3}=\sum_{m=1}^{\infty} \frac{(2x)^{2m}}{\binom{2m}{m}}}$$
-Multiplying by $\,x$, integrating and dividing by $x^2/2$ we get :
-$$\tag{9}2\frac {\arcsin(x)}{x\,\sqrt{1-x^2}}-\frac{\arcsin(x)^2}{x^2}-1=\sum_{m=1}^{\infty} \frac{(2x)^{2m}}{(m+1)\binom{2m}{m}}$$
-The indefinite integral of $\;\displaystyle 2\;x\frac {\arcsin(x)}{x\,\sqrt{1-x^2}}\;$ is simply $\,\arcsin(x)^2\,$ while the integral of $\;\displaystyle x\frac{\arcsin(x)^2}{x^2}$ is more complicated but we found $\;\displaystyle\int_0^{1/2}\frac {\arcsin(x)^2}x\,dx=\frac S4$ earlier and can therefore conclude that :
-$$\int_0^{1/2} \sum_{m=1}^{\infty} \frac{x(2x)^{2m}}{(m+1)\binom{2m}{m}}\,dx=\sum_{m=1}^{\infty} \frac{2^{-3}(1)^{2m+2}}{(m+1)^2\binom{2m}{m}}=\left.\arcsin(x)^2-\frac{x^2}2\right|_{\,0}^{\,1/2}-\frac S4$$
-or :
-$$\sum_{m=1}^{\infty} \frac 1{(m+1)^2\binom{2m}{m}}=8\left(\frac{\pi}6\right)^2-\frac 88-2\left(\pi\operatorname{Cl}_2\left(\frac{2\pi}{3}\right)-\frac{4}{3}\zeta(3)\right)$$
-which is indeed your equality $(2)$ :
-$$\tag{10}\boxed{\displaystyle\sum_{m=1}^{\infty} \frac 1{(m+1)^2\binom{2m}{m}}=\frac{2\pi^2}{9}-2\pi\operatorname{Cl}_2\left(\frac{2\pi}{3}\right)+\frac{8}{3}\zeta(3)-1}$$
-To add to the links provided by Vladimir Reshetnikov :
-
-an excellent link concerning central binomial series is to Gourévitch's $\pi$ pages.
-here Sprugnoli's "Sums of reciprocals of the central binomial coefficients" may be helpful too.<|endoftext|>
-TITLE: Geometry of Elementary Symmetric Polynomials
-QUESTION [11 upvotes]: The elementary symmetric polynomials appear when we expand a linear factorization of a monic polynomial: we have the identity
-$$
-\prod _{j=1}^{n}(\lambda -X_{j})=\lambda ^{n}-e_{1}(X_{1},\ldots ,X_{n})\lambda ^{n-1}+e_{2}(X_{1},\ldots ,X_{n})\lambda ^{n-2}+\cdots +(-1)^{n}e_{n}(X_{1},\ldots ,X_{n}).
-$$
-For $n = 3$:
-$$
-{\begin{aligned}
-e_{1}(X_{1},X_{2},X_{3})&=X_{1}+X_{2}+X_{3},\\
-e_{2}(X_{1},X_{2},X_{3})&=X_{1}X_{2}+X_{1}X_{3}+X_{2}X_{3},\\
-e_{3}(X_{1},X_{2},X_{3})&=X_{1}X_{2}X_{3}.\\
-\end{aligned}}
-$$
-
-In the example given, we get a plane for $e_{1}(X_{1},X_{2},X_{3})=c_1$ and a two-sheeted hyperboloid for $e_{2}(X_{1},X_{2},X_{3})=c_2$, which gives a cone if $c_2=0$.
-Is there a general description of the geometry of $e_k(X_1,\dots,X_n)=c_k$?
-
-REPLY [7 votes]: Here's a partial answer; we'll treat $k = 1, 2, n$ and $(n, k) = (4, 3)$.
-Note, by the way, that if $c_k = 0$, the resulting equation $e_k(X_1, \ldots, X_n) = 0$ is homogeneous, so by projectivizing we may regard the solution set as a codimension $1$ projective variety in $\Bbb P^{n - 1} = \Bbb P(\Bbb F^n)$.
-Now, of course, $$e_1(X_1, \ldots, X_n) = X_1 + \cdots + X_n = c$$ defines a hyperplane, namely, the one through $(c, 0, \ldots, 0)$ with normal vector $${\bf U} := \pmatrix{1\\ \vdots\\1}.$$ -For any $n$, we can regard $$e_2(X_1, \ldots, X_n) = \sum_{i < j} X_i X_j$$ as a quadratic form on $\Bbb R^n$ endowed with coordinates ${\bf X} := (X_a)$, namely the one with associated matrix -$$[e_2] := \frac{1}{2}\, -\pmatrix{ -0 & 1 & 1 & \cdots & 1\\ -1 & 0 & 1 & \cdots & 1\\ -1 & 1 & 0 & \cdots & 1\\ -\vdots & \vdots & \vdots & \ddots & \vdots\\ -1 & 1 & 1 & \cdots & 0} = \frac{1}{2}({\bf U}^T {\bf U} - I_n)$$ -with respect to the standard basis. One can show that $[e_2]$ has nonzero determinant (for example, since it is a rank-one update of the invertible matrix $-\frac{1}{2}I_n$, we can use the Sherman-Morrison Formula), so the quadratic form is nondegenerate (at least for $n > 1$; for $n = 1$, $e_2(X_1) = 0$, and we henceforth disregard this case). It's easy to see $[e_2]$ has $1$ positive eigenvalue and $n - 1$ negative eigenvalues, so $e_2$ has signature $(1, n - 1)$. We can conclude the following about the level set -$$ -\Sigma_c := \{e_2({\bf X}) = c\}, \qquad c \in \Bbb R . -$$ - -For $c > 0$, $\Sigma_c$ is a nondegenerate $2$-sheeted quadric hypersurface. (The two sheets are separated by the hyperplane $e_1({\bf X}) = \{{\bf U} \cdot {\bf X} = 1 \}$.) -For $c = 0$, $\Sigma_0$ is a nondegenerate cone; its projectivization is an $(n - 2)$-sphere in $\Bbb R \Bbb P^{n - 1}$. -For $c < 0$, $\Sigma_c$ is a nondegenerate $1$-sheeted quadric hypersurface for $n > 2$ (but again is $2$-sheeted for $n = 2$, in which case it is simply a hyperbola). - -When $k = n$, the variety $\{e_n({\bf X}) = 0\}$ is the union of the coordinate hyperplanes $\{X_a = 0\}$, $a = 1, \ldots, n$. -The varieties $\{e_k({\bf X}) = c\}$ not covered by the above cases are generally less familiar, but at least some of them were studied classically and even have specific names. For example, in the simplest remaining case, $n = 4$, $k = 3$, i.e., the variety $\{e_3(X_1, X_2, X_3, X_4) = 0\}$ is sometimes called Cayley's Surface (or more precisely, its projectivization in $\Bbb R \Bbb P^3$ is).<|endoftext|> -TITLE: Showing this function on the Cantor set is onto [0,1] -QUESTION [5 upvotes]: The excerpt below is taken from Rosenthal's A First Look at Rigorous Probability. $K$ refers to the cantor set. - -My question refers to the statement "It is easily checked that $f(K) =[0,1]$. I am thinking that this can by proved by taking any number in $[0,1]$, writing the binary expansion for it (that is, write it of the form $\sum_{n=1}^\infty b_n\cdot2^{-n},\ b_n\in \{0,1\}$ and then show that there is a point in the cantor set that will give $d_n = b_n \forall n$. How would I do this last step? That is, how would I show that such a point exists in the cantor set, $K$? -To state the question again: how I would show that there is a $y\in K$ which corresponds to some point in $[0,1]$? -A secondary question is: Can we show that $f(K) = [0,1]$, where $f$ is as in the attached image, without explicitly using binary/ternary expansions, and preferably also not using compactness? It is not important that this secondary question is answered: if the first question is answered and this one is not, I will select an answer for the question and then probably just create a separate question eventually for this and add a bounty if necessary. -Thank you. 
-Note: This question is technically addressed here, but the answers seems to say to me "here is a function, it is a surjection, without explaining why it is a surjection. - -REPLY [3 votes]: The quoted argument is probably intended to be filled in as follows. Let $x\in[0,1]$; then $x$ has a binary expansion -$$x=\sum_{k\ge 1}\frac{b_k}{2^k}\;,$$ -where each $b_k\in\{0,1\}$. The sequence $\langle b_k:k\in\Bbb Z^+\rangle$ now defines a nest of closed intervals as follows. -Let $I_0=[0,1]$. Given $I_k$ for some $k\in\Bbb N$, let $I_{k+1}$ be the closed left third of $I_k$ if $b_{k+1}=0$, and let $I_{k+1}$ be the right closed third of $I_k$ if $b_{k+1}=1$. The sequence $\langle I_k:k\in\Bbb N\rangle$ is a decreasing nest of closed intervals, so $\bigcap_{k\in\Bbb N}I_k\ne\varnothing$. On the other hand, the length of $I_k$ is $3^{-k}$, so the diameter of $\bigcap_{k\in\Bbb N}I_k$ is $0$, and it follows that there is a unique $y\in\bigcap_{k\in\Bbb N}I_k$. -All that remains is to verify that $x=f(y)$, which can be done by showing that $b_k=d_k(y)$ for each $k\in\Bbb Z^+$. But this is clear from the construction of the intervals $I_k$: for each $k\in\Bbb N$ we have $d_{k+1}(y)=0$ iff $y$ is to the left of the nearest open interval removed at step $k+1$ iff $I_{k+1}$ is the closed left third of $I_k$ iff $b_{k+1}=0$.<|endoftext|> -TITLE: How can we interpret derivations as elements of the tangent sheaf -QUESTION [5 upvotes]: Suppose $X$ is an algebraic variety and $\delta : X \to X \times X$ is the diagonal map. I am defining the cotangent sheaf $\Omega^1_X$ as $\delta^{-1}(I/I^2)$ where $I$ is the ideal sheaf of functions in $\mathcal{O}_{X\times X}$ which vanishes on the diagonal. I'm then using the definition of the tangent sheaf as the dual sheaf -$$ -\Theta_X := \mathcal{H}om_{\mathcal{O}_X}(\Omega^1_X, \mathcal{O}_X). -$$ -I know that if we have an element $\alpha$ in $\Theta_X$ then precomposing with the map $d(f) = f\otimes 1 - 1 \otimes f \text{ mod } I^2$ gives us a derivation. But how can we go in the opposite direction and interpret a derivation of the structure sheaf as an element of the tangent sheaf? I'm not too worried about nitty gritty details, but an overall idea would be nice. Thanks for any help! - -REPLY [4 votes]: Assume $X$ is an affine $k$-scheme, say $X = \operatorname{Spec} A$ where $A$ is a $k$-algebra. Your definition of the cotangent sheaf amounts to this: taking $I = \ker (A \otimes_k A \to A)$, $\Omega = I / I^2$ (regarded as an $A$-module). However, there is another definition: for every $A$-module $M$, there is a natural bijection between $A$-module homomorphisms $\Omega \to M$ and $k$-derivations $A \to M$. -Indeed, as you say, given an $A$-module homomorphism $\phi : \Omega \to M$, we can define a $k$-derivation $\psi : A \to M$ by $\psi (a) = \phi (a \otimes 1 - 1 \otimes a)$; and conversely, given a $k$-derivation $\psi : A \to M$, we can define an $A$-module homomorphism $\phi : \Omega \to M$ by $\phi (a \otimes b) = \psi (a) b$. It is straightforward to verify these are mutually inverse. -In particular, $A$-module homomorphisms $\Omega \to A$ correspond to $k$-derivations $A \to A$.<|endoftext|> -TITLE: Is $\frac{1}{2^{2^{0}}}+\frac{1}{2^{2^{1}}}+\frac{1}{2^{2^{2}}}+\frac{1}{2^{2^{3}}}+....$ irrational? -QUESTION [10 upvotes]: $$\frac{1}{2^{2^{0}}}+\frac{1}{2^{2^{1}}}+\frac{1}{2^{2^{2}}}+\frac{1}{2^{2^{3}}}+\cdots$$ -Is this infinite sum irrational? Is there a known way to prove it? 
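-A quick numerical look at the partial sums (a minimal Python sketch with exact fractions) shows how fast they settle:
-
-from fractions import Fraction
-
-s = Fraction(0)
-for k in range(6):
-    s += Fraction(1, 2 ** (2 ** k))
-    print(k, float(s))
-# the tail after k = 5 is below 2**-64, so the value is 0.8164215090... to double precision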
- -REPLY [6 votes]: $\newcommand{\abs}[1]{\left\lvert{#1}\right\rvert}$ -A slightly different way to prove its irrationality could rely on this trick (basically the case $n=1$ of Liouville's theorem on diophantine approximation, just much less powerful): - -Observation: Let $\xi\in\Bbb Q$. Let $\xi=\frac pq$, with $q>0$ and $p,q$ coprime integers. - Let $m,n$ integers such that $n>0$ and $\frac{m}{n}\neq\xi$. Then $$\abs{\xi-\frac mn}\ge\frac1{qn}$$ - -Proof: Indeed, \begin{align}\abs{\dfrac pq-\dfrac mn}=\dfrac{\abs{np-qm}}{qn}\ge\frac1{qn}&&\text{since }np-qm\in\Bbb Z\setminus\{0\}\end{align} - -Corollary: If $\xi\in\mathbb Q$, there exists a constant $\beta>0$ depending only on $\xi$ such that, whenever $\frac{m}{n}\neq\xi$, it holds $\abs{\xi-\dfrac mn}\ge\dfrac\beta n$ - -Now, back to your case: let $\xi:=\sum\limits_{k=0}^\infty 2^{-2^k}$ and let $\xi_n:=\sum\limits_{k=0}^n 2^{-2^k}$ -You can see that $$\xi_n=\frac{\sum_{k=0}^n 2^{2^n-2^k}}{2^{2^n}}\\ -0<\abs{\xi-\xi_n}=\sum_{k=n+1}^\infty 2^{-2^k}\le\sum_{h=2^{n+1}}^\infty2^{-h}= \frac{2}{2^{2^{n+1}}}=\frac{2}{\left(2^{2^n}\right)^2}$$ -Since $\dfrac{\beta}{2^{2^n}}\le\abs{\xi-\xi_n}\le\dfrac{2}{\left(2^{2^n}\right)^2}$ cannot hold definitely whatever the positive constant $\beta$, the corollary above yields that $\xi$ cannot be rational.<|endoftext|> -TITLE: Do there exist bounded operators with unbounded inverses? -QUESTION [5 upvotes]: I have just been introduced to the concept of invertibility for bounded linear operators. Specifically, we defined a bounded operator $A$ to be invertible if there exists a bounded $A^{-1}$ which is its right and left inverse, i.e. $AA^{-1}=\mathrm{id}_{\mathrm{Im}A},A^{-1}A=\mathrm{id}_{\mathrm{Dom}A}$. So I was wondering: is the requirement of boundedness (or equivalently of continuity) of the inverse important? Or, is it asking more than is granted by the invertibility? The open mapping theorem states a continuous linear surjective operator between Banach spaces is an open map, and thus if it has an inverse, that inverse is necessarily continuous. So for Banach spaces, we could avoid requiring this continuity explicitly, as it is automatic. But what about non-Banach spaces? -So the question is: do there exist normed spaces $X,Y$ and operators between them which are linear and bounded but have un-bounded inverses? - -REPLY [2 votes]: The easier example is the identity with different norms in each side, the norm in the right strictly finer than the norm in the left.<|endoftext|> -TITLE: The derivation of the Weierstrass elliptic function -QUESTION [5 upvotes]: I am wondering if any of you could point me to any books and/or lecture notes that explain the Weierstrass $\wp$ function for a self-studying student of elliptic curves and functions. I am interested in any resources that may give the history of the Weierstrass function and its derivation. I do understand the basics of lattices and doubly-periodic functions, but I am having trouble seeing the thought process that led to the creation of the function itself. I have tried searching through many books and while I have found several good ones, I haven't yet found one that shows the thought process that led to the Weierstrass $\wp$ function. (Sorry, but I am not sure how to format the letter that is traditionally associated with this function). -What resources are there to understand the derivation of $\wp$? - -REPLY [6 votes]: HINT: here I give you a brief note, hoping that something you can help. -Knowing that all $L$-elliptic function (i.e. 
meromorphic and $L$-periodic) must have a finite order at least equal to $2$, the function $\wp$ gives a solution to the problem of construct a $L$-elliptic function taking exactly this minimum of twice all complex value $u$ in all fundamental parallelogram $P$ for the lattice $L$. -Assuming that such a function exists, it can have either just one pole of order 2 or two simple poles (two distinct kinds of these functions!); both exist in fact, the first one with a pole of order two are the $L$-functions $\wp$ of Weierstrass and the second one, the elliptic functions of Jacobi with two simple poles. The first function is defined by -$$\wp(z)=\frac{1}{z^2}+\sum_{\omega\in L\setminus\{0\}}\left[\frac{1}{(z-\omega)^2}-\frac{1}{\omega^2}\right]$$ -(a simpler form with $\sum \frac{1}{(z-\omega)^2}$ is useless because it is not convergent and the variation shown in $\wp$ has no other purpose than to ensure the necessary convergence). -If $P$ is the fundamental parallelogram of vertices $0,\omega_1,\omega_2,\omega_1+\omega_2$, the values $\wp(z_0)$ for $z_0=0,\frac{\omega_1}{2},\frac{\omega_2}{2},\frac{\omega_1+\omega_2}{2}$ are taken just once because $z_0$ is in each case a point of order two in $P$ (this because $0$ is a double pole with residue zero and the other three are double points since $\wp’=0$ and $\wp’’\ne 0$). All the other points $z\in P$ are simple and for all complex value $u\ne \wp(z_0)$ one has $\wp(z_1)=\wp(z_2)=u$ for certain unique $z_1$ and $z_2$ in $P$ which either are symmetric respect to $\frac {\omega_1+\omega_2}{2}$ (in whose case $z_1$ and $z_2$ are interior points of $P$) or are symmetric respect to $\frac{\omega_k}{2}$; $k=1,2$ (in whose case $z_1$ and $z_2$ are in the boundary of $P$, in the two sides assigned to $P$) - -►$\wp(z)$ has a derivative $\wp’(z)$ which is odd of order three and all $L$-elliptic function belongs to the field $\mathbb C(\wp, \wp’)$ of rational expressions of $\wp(z)$ and $\wp’(z)$ with complex coefficients; this field is a quadratic extension of the field $\mathbb C(\wp)$ formed by all the even $L$-elliptic functions. -►$\wp$ and $\wp’$ are transcendental functions but are not algebraically independent and it is verify that $$(\wp’)^2=4\wp^3-g_2 \wp-g_3$$ where the called invariants of $\wp$ are defined by $$g_2=60\sum_{\omega\in L\setminus\{0\}}\frac{1}{\omega^4}$$ and $$g_3=140\sum_{\omega\in L\setminus\{0\}} \frac{1}{\omega^6}$$ -Besides one has an “addition theorem” given by $$\wp(u+v)=\frac 14\left[\frac{\wp’(u)-\wp’(v)}{\wp(u)-\wp(v)}\right]^2-\wp(u)-\wp(v)$$ -These two equalities allow us go from the transcendent to algebraic and this is the link between the elliptic functions and the cubic curves of genus 1.<|endoftext|> -TITLE: Visualizing order 3 mapping class of genus 2 surface -QUESTION [6 upvotes]: Let $\Sigma_2$ be a closed genus $2$ surface. There exists an orientation-preserving diffeomorphism $f:\Sigma_2 \rightarrow \Sigma_2$ of order $3$. The diffeomorphism has $4$ fixed points (each, of course, of order $3$) and from Riemann-Hurewitz you can see that the quotient is a sphere $S^2$. -To construct $f$, it is enough to construct the appropriate branched cover of $S^2$. But this is easy: let $p_1,\ldots,p_4$ be $4$ distinct points of $S^2$ and let $X = S^2 \setminus \{p_1,\ldots,p_4\}$. There then exists a surjection $\phi:H_1(X;\mathbb{Z}) \rightarrow \mathbb{Z}/3$ which for all $1 \leq i \leq 4$ takes a loop around $p_i$ to a generator of $\mathbb{Z}/3$. 
Let $\widetilde{X}$ be the degree $3$ regular cover of $X$ associated to $\phi$. Then an Euler characteristic calculation shows that $\widetilde{X}$ is diffeomorphic to a genus $2$ surface minus $4$ point. The desired branched cover $\Sigma_2 \rightarrow S^2$ is then obtained by filling in these $4$ points. -I am having trouble visualizing the above construction. I can work everything out and e.g. construct a triangulation of $\Sigma_2$ that is preserved by $f$, but I cannot "see" $f$. Is there a picture of this diffeomorphism somewhere, or at least a more visual way of understanding it? - -REPLY [3 votes]: With the assistance of Big Bird. Look carefully at one of his "feet", and imagine it spinning. Let me know if you need a few more hints...<|endoftext|> -TITLE: "Increasingify" a function / Total variation of a function -QUESTION [6 upvotes]: Let $f : [a,b] \rightarrow \mathbb{R}$ be a $C^1$ function such that $f$ is monotonic on each $[t_k, t_{k+1}]$, with $a = t_0 < t_1 < ... < t_N = b$. -Let g be the increasing-ified version of $f$, i.e. on each interval where $f$ is decreasing we define $g(x) = -f(x) + constant$, such that the function $g$ is continuous. -More precisely : - -if $f$ is increasing or constant on $[t_0, t_{1}]$, then $g = f$ on this interval -if $f$ is decreasing on $[t_0, t_{1}]$, then $g = -f$ on this interval and thus $g$ is increasing on this interval -we do the same on each following interval $[t_k, t_{k+1}]$ : if $f$ is decreasing, we set $g(x) = -f(x) + \alpha_k$, where $\alpha_k$ is chosen such that $g$ is continuous. - -Example : $f(x) = \sin(x)$ in red, the function $g$ in green: - -Questions: -1) This concept surely exists somewhere. How is it called? -2) Without loss of generaly, let's assume $a=0$ and $f(0)=0$. It seems that $g$ is : -$$ g(x) = \int_0^x | f'(t)| d t.$$ -Is that true? -3) It seems that $R(x) = g(x) / x$ looks like a good measure of how much $f(t)$ "moves" vertically when $t$ goes from $0$ to $x$, i.e. : - -if $g(x) / x$ is close to zero, $f$ has very little variation (nearly constant) on $[0, x]$ -if $g(x) / x$ is big, $f$ has much variation on $[0, x]$ - -Does this ratio have a name? -Example: with the previous example, $R(10) \simeq 6.54 / 10 = 0.654$ -Example: with $f(x) = \sin(x^2)$, we have $R(10) \simeq 63.49 / 10 = 6.349$ - -Note: now having written this whole thing, I thing this is related to length of arc length / rectification. But still, I'd like to know more about these things. - -REPLY [3 votes]: To summarize what Martin R pointed out: - -$g$ is the total variation function, it can be written as $g(x) =V_a^x f$ where $V_a^x$ is the total variation of $f$ restricted to $[a,x]$. -Yes, for $C^1$ smooth functions $V_a^x f= \int_a^x |f'(t)|\,dt$. The same holds more generally: whenever $f$ has finite variation on $[a,x]$, it is differentiable almost everywhere, and the integral of $|f'|$ gives the variation. -The ratio $\frac{1}{x-a}V_a^x f$ does not appear to have a name; it could be called mean variation of $f$ on $[a,x]$, or the running average of $|f'|$. Its distant relative is mean oscillation. - -Note: For a real-valued continuous function f, defined on an interval [a, b] ⊂ ℝ, its total variation on [a, b] is a measure of the one-dimensional arclength of the curve with parametric equation x ↦ f(x), for x ∈ [a, b].<|endoftext|> -TITLE: What is the Diophantine Prime-Representing Polynomial with the Least Variables? 
-QUESTION [7 upvotes]: Recently I was reading Jones et al.'s famous paper "Diophantine Representation of the Set of Prime Numbers."
-They present a Prime-Representing Polynomial in 26 variables, and outline the construction of a 12 variable one; this is the best they say they can do.
-They also prove that the degree of such an expression would increase as the number of variables decreases.
-Since that paper was published (1976), how far along have we come in terms of reducing the number of variables needed?
-What is the "best" result, by that metric?
-
-REPLY [4 votes]: Apparently the record is still $10$ variables (with degree approx $10^{45}$) by Matijasevic (1977).
-Ribenboim (1991) referenced nothing less than that. These entries in the Prime Glossary and Mathworld also don't mention anything smaller, though Wikipedia cites Matijasevic as showing it can be potentially reduced to just $9$ variables.
-P.S. Regarding the Jones et al polynomial in 26 variables whose positive values are all primes, this has the form,
-$$P(a,b,\dots z) = (k+2)(1-x_1^2-x_2^2-\dots-x_{14}^2)$$
-so the only way for $P$ to be positive is if all $x_i = 0$. Thus, it is a set of 14 Diophantine equations in disguise.<|endoftext|>
-TITLE: Is $a^{\ln b} = b^{\ln a}$?
-QUESTION [7 upvotes]: I was struggling with a math problem, namely, a limit with a power to the log of something. While I was struggling with it, I found out that $$a^{\ln b} = b^{\ln a}$$ for all positive values that I've tested. Is it true? And if so, can you provide a proof?
-
-REPLY [17 votes]: Do this:
-$$a^{\ln(b)} = e^{\ln(a)\ln(b)} = b^{\ln(a)}.$$<|endoftext|>
-TITLE: How to get the SVD of $2AA^T-\operatorname{diag}(AA^T)$ given $A$ and its SVD $A=USV^T$?
-QUESTION [13 upvotes]: Given a matrix $A\in R^{n\times d}$ with $n>d$, and we can have some fast ways to (approximately) calculate the SVD (Singular Value Decomposition) of $A$, say $A=USV^T$ and $V\in R^{d\times d}$. It is straightforward to know that the SVD of $2AA^T$ is $U(2SS)V^T$, that is to say the SVD of $2AA^T$ requires $O(nd^2)$ time similar to that of $A$.
-However, to get the SVD of $2AA^T-\operatorname{diag}(AA^T)\in R^{n\times n}$ where $\operatorname{diag}(AA^T)$ is a square diagonal matrix which only has the diagonal elements of $AA^T$ in its diagonal, if running SVD directly on $2AA^T-\operatorname{diag}(AA^T)$, we might need $O(n^3)$ time. My question is, do you have any method or equation to use $A$ and its SVD $USV^T$ to indirectly get the SVD of $2AA^T-\operatorname{diag}(AA^T)$? Many thanks for your help.
-
-REPLY [2 votes]: Let $A,B$ be hermitian matrices. Since we consider the eigenvalues of the matrices, we may assume that $B$ is diagonal; assume that we know $spectrum(A)=(\alpha_i),spectrum(B)=(\beta_i)\subset \mathbb{R}^n$ (in non-increasing order). What can we say about $spectrum(A+B)=(\gamma_i)\subset \mathbb{R}^n$ (in non-increasing order)?
-
-The answer is given by Horn's conjecture (cf. http://math.ucr.edu/~jdolan/schubert2.pdf ), whose proof is dated 1998-1999. The necessary and sufficient condition contains only one equality, the obvious one, $\sum_i\gamma_i=\sum_i\alpha_i+\sum_i\beta_i$, and many linear inequalities linking some $\alpha_i,\beta_i,\gamma_i$. Finally, for generic hermitian matrices $A,B$, the $(\gamma_i)$ satisfy only one equality OR the $(\gamma_i)$ go through a real algebraic subset of dimension $n-1$.
-
-Although $A,B$ are linked by $B=r.diag(A)$, it is of little importance because the spectra of $A,B$ are linked by only one equality, the obvious one.
Indeed one has
-
-Proposition (Schur-Horn): for every $p< n, \sum_{i=1}^p \beta_i\leq\sum_{i=1}^p\alpha_i$ and $\sum_{i=1}^n \beta_i=\sum_{i=1}^n\alpha_i$ IFF there is a hermitian matrix $C$ s.t. $diag(C)=(\beta_i)$ and $spectrum(C)=(\alpha_i)$.
-
-Application to your question. We consider the "thin SVD" $A=USV^T$ where $U^TU=I_d,V^TV=I_d$ and $S\in M_d$ is diagonal. Then $AA^T=US^2U^T$ (your formula is false), that is the standard orthogonal diagonalization of $AA^T$; in particular, I do not see the point of computing the SVD of $A$. The required formula is in the form $AA^T+r.diag(AA^T)=W\Sigma W^T$. We saw that there is no real link between eigenvalues of $S^2$ and $\Sigma$ and, consequently, between their eigenvectors $U,W$.
-
-Conclusion. The answer to your question is NO and do not dream, there is no hope.<|endoftext|>
-TITLE: How fundamental is Euler's identity, really?
-QUESTION [7 upvotes]: Euler's identity, obviously, states that $e^{i \pi} = -1$, deriving from the fact that $e^{ix} = \cos(x) + i \sin(x)$. The trouble I'm having is that that second equation seems to be more of a definition than a result, at least from what I've read. It happens to be convenient. Similarly, the exact nature of using radians as the "pure-number" input to trig functions is a similar question of convenience -- would it be fundamentally wrong to define sine and cosine as behaving the same way as they do now, except over a period of $1$ rather than $2 \pi$? In such a system, $e^{i \pi} = \cos(\pi) + i\sin(\pi) = \cos(\pi - 3) + i\sin(\pi - 3)$, or transforming back into our $2\pi$-period system to get a result, $\cos(\pi\frac{(\pi - 3)}{1}) + i\sin(\pi\frac{(\pi - 3)}{1})$, which is approximately $0.903 + 0.430i$. (Hopefully I did that right.)
-Since there are equally mathematically true systems where $e^{i \pi}$ gives you inelegant results, I'm asking whether the fact that $e^{i \pi} = -1$ really demonstrates some hidden connection between $e$ and $\pi$ and the reals and imaginaries, as it rests largely on what seem to me to be arbitrary definitions of convenience rather than fundamental mathematical truths.
-
-REPLY [6 votes]: One of the more remarkable things about this identity is that it falls out of so many different definitions of the terms. It's not just a convenience or a happenstance, it arises from almost any valid definition of exponentiation.
-So, for instance, consider the definition $$e^x = \lim_{n\to\infty} \left(1 + \frac{x}{n}\right)^n$$ which is historically where $e$ first arose, in the work of Jacob Bernoulli.
-So now we can ask: does this definition lead to Euler's identity? Or, more explicitly, is $$\lim_{n\to\infty} \left(1 + \frac{ix}{n}\right)^n = \cos(x) +i\sin(x)\ ?$$ Of course here, and later, we use the radian version of the trig functions, and $x\in \mathbb{R}$.
-To answer this, let's assume that $|zw| = |z||w|$ and $\arg (zw) = \arg(z) +\arg(w) (\mod 2\pi)$. We can derive these identities using algebra, and results from geometry that are more than 2000 years old. Furthermore, these functions are continuous, which is obvious for $|\cdot|$ and is true for $\arg$ in the correct topology.
-We can now calculate the modulus of the relevant limit.
-$$\begin{align*} |e^{ix}| &= \left| \lim_{n\to\infty} \left(1 + \frac{ix}{n}\right)^n \right|\\
-&= \lim_{n\to\infty} \left| \left(1 + \frac{ix}{n}\right)^n \right| \\
-&= \lim_{n\to\infty} \left| \left(1 + \frac{ix}{n}\right)\right|^n \\
-&= \lim_{n\to\infty} \left( 1 + \frac{x^2}{n^2} \right)^{n/2} \\
-&= \lim_{n\to\infty} \left(\left( 1 + \frac{x^2}{n^2} \right)^{n^2}\right)^{1/(2n)} \\
-&= \lim_{n\to\infty} \left(e^{x^2}\right)^{1/(2n)} \\
-&= 1
-\end{align*}$$
-The first line is our definition, the second is justified by continuity, the third by our modulus identity, the fourth by the definition of modulus, and from there we play with exponents and use our definition of the exponential (there's another way to do it with logs, but this ought to be fine).
-We can also calculate the argument.
-$$\begin{align*} \arg(e^{ix}) &= \arg\left( \lim_{n\to\infty} \left(1 + \frac{ix}{n}\right)^n \right)\\
-&= \lim_{n\to\infty} \arg\left(\left(1 + \frac{ix}{n}\right)^n \right)\\
-&= \lim_{n\to\infty} n \arg\left(1 + \frac{ix}{n}\right)\\
-&= \lim_{n\to\infty} n \arctan\left(\frac{x}{n}\right) \\
-&= \lim_{h\to 0^+} \frac{ \arctan(xh) - \arctan(0) }{ h } \\
-&= \left. \frac{\text{d}}{\text{d}t}\arctan(xt)\right\vert_{t=0} \\
-&= x.
-\end{align*}$$ The justifications here are much the same as before, with a little calculus thrown in at the end (substitute $h = 1/n$ to see the difference quotient).
-Taking our two results together, and using a little more geometry, we have that $$e^{ix} = \lim_{n\to\infty} \left(1 + \frac{ix}{n}\right)^n = \cos(x) + i \sin(x)$$ and by implication $$e^{i\pi} = \lim_{n\to\infty} \left(1 + \frac{i\pi}{n}\right)^n = -1.$$ So, this isn't just some arbitrary thing, it appears with all the definitions of exponentiation that can be easily extended to the complex numbers.
-Anyway, I hope this adds something to your understanding @Why-Seven-Six.<|endoftext|>
-TITLE: Finding Symmetry Group $S_3$ in a function
-QUESTION [11 upvotes]: I was considering functions $f: \Bbb{C} \rightarrow \Bbb{C}$ and I defined the following instrument (I call it the Symmetry Group of a function)
-$$ \text{Sym}(f) = \left< m(x)|f(m(x))=f(x) \right> $$
-An intuitive example is to consider $\text{Sym}(e^x)$ and observe that
-$$m(x) = x + 2i \pi $$
-has the property that
-$$ e^{m(x)} = e^{x+2i\pi}=e^x e^{2i \pi} = e^x $$
-And the group generated by $m(x)$ under composition is the set of functions
-$$ x + 2i\pi k, k \in \Bbb{Z}$$
-which is isomorphic to $\Bbb{Z}$ under function composition. So one can then say that $$\text{Sym}(e^x) \cong \Bbb{Z}$$
-What I was curious about was whether there are any elementary functions such that
-$$ \text{Sym}(g(x)) \cong S_3$$
-In an attempt to build one I considered
-$$ g(x) = x^{\frac{-1 + i \sqrt{3}}{2}} + x^{\left( {\frac{-1 + i \sqrt{3}}{2}}\right)^2} + x + \frac{1}{x} +x^{-\frac{-1 + i \sqrt{3}}{2}}+ x^{-\left({\frac{-1 + i \sqrt{3}}{2}}\right)^2} $$
-$g$ has as generators for its symmetries the functions $L_1 = \frac{1}{x}$ and $L_2 = x^{\frac{-1 + i \sqrt{3}}{2}}$,
-which can be seen from
-$$g(L_1) = g(L_2) = g(x)$$
-But the problem is that $L_1(L_2) = L_2(L_1)$ so clearly this isn't a generating set for $S_3$.
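-(The commuting of $L_1$ and $L_2$ is easy to confirm numerically; a minimal Python sketch using the principal branch of the complex power, with $w = \frac{-1 + i \sqrt{3}}{2}$:)
-
-w = (-1 + 1j * 3 ** 0.5) / 2           # primitive cube root of unity
-L1 = lambda z: 1 / z
-L2 = lambda z: z ** w                  # principal branch
-
-for z in [0.7 + 0.3j, 2 - 1j, -0.5 + 2j]:
-    print(abs(L1(L2(z)) - L2(L1(z))))  # ~1e-16 for points off the branch cut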
It's not obvious at this point how to go about making a function that has $S_3$ as its underlying symmetry group.
-
-Some Examples:
-$\Bbb{Z}_2$ can be realized as $\text{Sym}\left(x + \frac{1}{x}\right)$, as this function is invariant under the substitutions $x \rightarrow x$ and $x\rightarrow \frac{1}{x}$.
-The proof arises from the following: Suppose we wish to find all transformations $T$ of $x$ such that
-$$ x + \frac{1}{x} = T(x) + \frac{1}{T(x)}$$
-We can then derive that
-$$ T(x)^2 - \left(x+ \frac{1}{x}\right)T(x) + 1 = 0$$
-Which yields that
-$$ T(x) = \frac{x + \frac{1}{x} \pm \sqrt{x^2+2+\frac{1}{x^2}-4}}{2}$$
-simplifying to
-$$ T(x) = \frac{x + \frac{1}{x} \pm (x-\frac{1}{x})}{2}$$
-and that gives $T(x) = x$, $T(x) = \frac{1}{x}$; observe that these transformations form a group of order $2$, so they must be isomorphic to $\Bbb{Z}_2$.
-And in general I conjecture that:
-$\Bbb{Z}_n$ can be realized as $$\text{Sym} \left( x + x^{\sqrt[n]{1}_1} + x^{\sqrt[n]{1}_2} + \ldots + x^{\sqrt[n]{1}_{n-1}}\right)$$
-
-REPLY [3 votes]: There's a little ambiguity as to the type of functions you are considering. For example, you seem ok with allowing the function to have some singularities at zero, as your $z+1/z$ example indicates. The group $S_3$ acts naturally on $\mathbf{C} \cup \{\infty\}$ via the following rational functions:
-$$\Sigma = \left\{z, \ 1/z, \ 1-z,\frac{1}{1-z}, \ 1 - \frac{1}{z}, \ \frac{z}{z-1}\right\}$$
-In particular, if $h(z)$ is any function $h: \mathbf{C} \cup \{\infty\} \rightarrow \mathbf{C} \cup \{\infty\}$ then
-$$f(z) = h(z) + h(1/z) + h(1-z) + h\left(\frac{1}{1-z}\right) + h\left(1-\frac{1}{z}\right) + h\left(\frac{z}{z-1}\right)$$
-will be invariant under $\Sigma$. It could be the case that $f(z)$ is invariant under more symmetries, of course. For example, if $h(z) = z$, then $f(z) = 3$.
-On the other hand, if $h(z) = z^2 + c$ for any constant $c$, then $f(z)$ is a non-trivial rational function. Moreover, one finds that (in this case)
-$$f(x) - f(y) = \frac{2(x-y)(x+y-1)(xy - 1)(1-x+xy)(1-y+xy)(-x-y+xy)}{(x-1)^2 x^2 (y-1)^2 y^2}.$$
-Assuming that $y \in \mathrm{Sym}(f)$, the numerator is zero, and so (under very weak continuity hypotheses) one of the six factors in the numerator is zero, leading to $y \in \Sigma$. So it seems that $f(z)$ is a suitable function in your case. A particularly nice choice of constant $c$ is $c = -7/4$, in which case $f(2) = 0$, and so
-$$f(z) = f(z) - f(2) = \frac{(z-2)^2 (z+1)^2 (2z - 1)^2}{2 z^2 (z-1)^2}.$$
-In this case, the square-root of this function is invariant under the even elements of $\Sigma = S_3$ and sent to its negative under the odd elements.
-A slightly more general nice family (but no longer a square) is given (for a parameter $t$) by
-$$f(x) = \frac{2(x-t)(x+t-1)(xt - 1)(1-x+xt)(1-t+xt)(-x-t+xt)}{(x-1)^2 x^2 (t-1)^2 t^2}.$$
-
-I might as well add a complete list of such examples coming from polynomials. Suppose that $f(x)$ is a polynomial, and $y \in \mathrm{Sym}(f)$. Then we must have $f(x) - f(y) = 0$. But $f(x) - f(y)$ is a rational function in $y$, and so has a finite number of algebraic solutions. If we insist that our functions are entire functions on $\mathbf{C} \cup \{\infty\}$, then this forces $y$ to be a rational function (other algebraic functions will not be single valued), and (by degree considerations) a function of the form:
-$$y = \frac{a x + b}{c x + d}.$$
-The choice of constants is only well defined up to scaling.
This gives an injective map: -$$\mathrm{Sym}(f) \rightarrow \mathrm{PGL}_2(\mathbf{C}).$$ -The finite subgroups of the right hand side are well known, and so, in particular, we deduce: -Claim: Let $f$ be a rational function. Then $\mathrm{Sym}(f)$ is either cyclic, dihedral, or one of the exceptional groups $A_4$, $S_4$, and $A_5$. -Cyclic examples are easy to construct. Let $f(x) = x^n$, and then $\mathrm{Sym}(f)$ consists of $y = \zeta x$ for an $n$th root of unity $\zeta$. This corresponds to the map: -$$a \in \mathbf{Z}/n \mathbf{Z} \mapsto \left( \begin{matrix} \zeta^a & 0 -\\ 0 & 1 \end{matrix} \right) \in \mathrm{PGL}_2(\mathbf{C}).$$ -Naturally one can also take $f(x) = h(x^n)$ for a generic rational function $h(x)$. -Note that other examples (such as $f(x) = x + x^{-1}$) can be obtained from -these examples by suitable change of variables, namely, because -$$\left( \begin{matrix} 1 & - 1 \\ 1 & 1 \end{matrix} \right) -\left( \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right) -\left( \begin{matrix} 1 & - 1 \\ 1 & 1 \end{matrix} \right)^{-1} = -\left( \begin{matrix} -1 & 0 \\ 0 & 1 \end{matrix} \right),$$ -and we find that -$$h(x) = x + \frac{1}{x}, \qquad h\left(\frac{x-1}{x+1}\right) = g(x^2), \qquad -g(x) = 2 \cdot \frac{x+1}{x-1}.$$ -Note that the dihedral representation of $D_{2n}$ inside $\mathrm{PGL}_2(\mathbf{C})$ is -given by the image of $\mathbf{Z}/n \mathbf{Z}$ together with the matrix -$$\left( \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right) ,$$ -Hence we can write down the examples -$$f(x) = h\left(x^n + \frac{1}{x^n}\right),$$ -for a generic function $h$ (taking $h(x) = x$ will do). Here $\mathrm{Sym}(f)$ is generated by $x \mapsto \zeta x$ and $x \mapsto 1/x$. -One can construct the other examples in a similar manner. For fun, I computed an example with $\mathrm{Sym}(f) = A_4$. The group $A_4$ has (several) projective representations -$$A_4 \rightarrow \mathrm{PGL}_2(\mathbf{C})$$ -realized by $2$-dimensional representations of the Schur cover $\mathrm{SL}_2(\mathbf{F}_3)$. -One such example maps the non-trivial elements of the - Klein $4$-subgroup $K$ to -$$K \setminus \{e\} = \left\{\left( \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right), \left( \begin{matrix} -1 & 0 \\ 0 & 1 \end{matrix} \right), \left( \begin{matrix} 0 & -1 \\ -1 & 0 \end{matrix} \right)\right\},$$ -and this group is normalized by the order three (in $\mathrm{PGL}_2$) element -$$\left( \begin{matrix}i & -i \\ 1 & 1 \end{matrix} \right)$$ -Writing down the corresponding $12$ elements of $A_4$ and letting -$$f(z) = \sum_{A_4 \subset \mathrm{PGL}_2(\mathbf{C})} h(\gamma z),$$ -doing a calculation -as above with $h(z) = z^2 + c$, one finds that -$$\begin{aligned} -f(x) - f(y) = & \ 2(x - y)(x + y)(-1 + xy)(1 + xy)(-i - ix - y + xy)(i + ix - y + xy) - (-i - x - iy + xy)\\ - \times & \ -\frac{(i + x - iy + xy)(i - x + iy + xy)(-i + x + iy + xy) - (i - ix + y + xy)(-i + ix + y + xy)}{(-1 + x)^2x^2(-i + x)^2(i + x)^2 - (1 + x)^2(-1 + y)^2y^2(-i + y)^2(i + y)^2(1 + y)^2} \end{aligned} -$$ -Since translating $f(x)$ preserves the symmetry group, one can (for example) choose $f(x)$ to vanish at $x = y$ for any fixed $y$, and then $f(x)=f(x) - f(y)$ as above. 
For example, if $y = 2$, then
-$$450 f(x) = \frac{(-2 + x) (2 + x) (-1 + 2 x) (1 + 2 x) (9 + x^2) (5 - 6 x + 5 x^2) (5 + 6 x + 5 x^2) (1 + 9 x^2)}{(-1 + x)^2 x^2 (1 + x)^2 (1 + x^2)^2}.$$<|endoftext|>
-TITLE: Hamel basis for $\mathbb R$ over the field $\mathbb Q$
-QUESTION [9 upvotes]: A set $S\subseteq\mathbb R$ is said to be linearly independent if for distinct $x_1,\ldots,x_k$ ($k\in\mathbb N$) and for integers $n_1,\ldots,n_k$, $$n_1x_1+\ldots+ n_kx_k=0$$ implies that $$n_1=\ldots=n_k=0.$$
-It is not difficult to see that this definition is equivalent to the one in which $n_1,\ldots,n_k$ are allowed to be rationals.
-By Zorn's lemma, there exists a maximal such linearly independent $S$, which is a Hamel basis for $\mathbb R$ over $\mathbb Q$. Now, Problem 14.7 in Billingsley's Probability and Measure (1995) claims that
-
-[E]ach real $x$ can be written uniquely as $x=n_1x_1+\cdots+n_kx_k$ for distinct points $x_i$ in $S$ and integers $n_i$. [emphasis added]
-
-I think this is not right:
-(1) In the above definition of linear independence, integers and rationals are interchangeable, but...
-(2) ...in the Hamel-basis representation, rationals must be allowed. Integer coordinates are not enough to represent all of $\mathbb R$ for any maximal linearly independent $S$.
-I wonder if someone could confirm this is a typo. Thank you.
-
-Here is a proof sketch for why integers are not sufficient. Let $x\in S$. Now, if Billingsley's claim were true, it would be the case that $$\frac{x}{2}=\sum_{i=1}^kn_ix_i$$ for distinct $x_i\in S$ and integers $n_i$. But then $$x-2n_1x_1-\ldots-2n_kx_k=0.$$ Because of linear independence, $x$ must coincide with some other $x_i$. Since the $x_i$'s are distinct, there must be precisely one such $x_i$. Then, by matching the coefficients, it must be the case that $$1-2n_i=0,$$ or $n_i=1/2$, which is not an integer.
-
-REPLY [5 votes]: Yes, the coefficients in the vector representation of a real number $x$ over $\Bbb Q$ must be rational. See: http://mathworld.wolfram.com/HamelBasis.html.<|endoftext|>
-TITLE: Show $a^p \equiv b^p \mod p^2$
-QUESTION [10 upvotes]: I am looking for a hint on this problem:
-
-Suppose $a,b\in\mathbb{N}$ such that $\gcd\{ab,p\}=1$ for a prime $p$. Show that if $a^p\equiv b^p \pmod p$, then we have: $$a^p \equiv b^p \pmod {p^2}.$$
-
-I have noted that $a,b$ are necessarily coprime to $p$ already, and Fermat's little theorem ($x^p\equiv x \pmod p$), but I do not see how I should apply it in this case if at all.
-Any hints are appreciated!
-
-REPLY [4 votes]: You could generalize this further. Here is one of the Lifting the Exponent Lemmas (LTE):
-
-Define $\upsilon_p(a)$ to be the exponent of the largest prime power of $p$ that divides $a$.
-If $p$ is an odd prime, $a,b\in\mathbb Z$, $n\in\mathbb Z^+$, and $a\equiv b\not\equiv 0\pmod{p}$, then $$\upsilon_p\left(a^n-b^n\right)=\upsilon_p(a-b)+\upsilon_p(n)$$ (for $p=2$ the problem at hand is immediate anyway, since odd squares are $\equiv 1 \pmod 8$).
-
-In your case, by Fermat's Little theorem $a^p\equiv b^p\not\equiv 0\pmod{p}\iff a\equiv b\not\equiv 0\pmod{p}$, therefore $$\upsilon_p\left(a^p-b^p\right)=\upsilon_p(a-b)+\upsilon_p(p)=\upsilon_p(a-b)+1$$
-Therefore $p^2\mid a^p-b^p$.<|endoftext|>
-TITLE: How to find where a function is increasing at the greatest rate
-QUESTION [6 upvotes]: Given the function $f(x) = \frac{1000x^2}{11+x^2}$ on the interval $[0, 3]$, how would I calculate where the function is increasing at the greatest rate?
-
Since the slope is the rate of change, the maximum of the function's derivative will indicate where the function is increasing at the greatest rate. -The derivative of $f(x)$ is $\frac{22000x}{(11+x^2)^2}$ -Applying the first derivative test to $f'$, the critical number is $\sqrt{\frac{11}{3}}$. The derivative increases before the critical number and decreases after it, so the critical number gives the maximum of $f'$. $\sqrt{\frac{11}{3}}$ is the answer. - -REPLY [2 votes]: Differentiating the function will give its slope. Since the slope is the rate of change, the maximum of the function's derivative will indicate where the function is increasing at the greatest rate. -The derivative of $f(x)$ is $\frac{22000x}{(11+x^2)^2}$ -Applying the first derivative test to $f'$, the critical number is $\sqrt{\frac{11}{3}}$. The derivative increases before the critical number and decreases after it, so the critical number gives the maximum of $f'$. $\sqrt{\frac{11}{3}}$ is the answer.<|endoftext|> -TITLE: Forcing names, parameters in definitions, and the Iterative Conception of Set -QUESTION [5 upvotes]: So, I've been trying to learn as much as I can about forcing. I know that a model provides its own (trivial) forcing extension. What I'm curious about is whether there is a way to think of the iterative hierarchy in terms of (trivial?) forcing extensions? -The iterative hierarchy for $\mathsf{ZF}$ can be given in the traditional manner: - -The first level, $V_0$, is defined to be the empty set, $\emptyset$ - (if the set theory is impure, this is also where the - urelements, i.e., the non-set individuals, reside). Subsequent - levels are formed by taking the powerset of the previous stage, so - that the level immediately following $V_n$ (for finite ordinals $n$), - the successor level $V_{n+1}$, is defined as - $\mathcal{P}(V_n)$. Once you have formed all of the finite stages, you - form the first limit level $V_\omega$ by taking the infinite union of the preceding levels, $\bigcup_{n<\omega} V_n$. The process then repeats for the - successor levels of $V_\omega$, $V_{\omega + 1}, V_{\omega + - 2},\dots$, with the union of the successor levels of $V_\omega$ forming - the next limit level $V_{\omega \cdot 2}$. The universe of - sets, $V$, is the union of all the levels: $V = \bigcup_{\alpha \in - O} V_\alpha$, where $O$ is the class of all ordinals. - -Is there a way to understand the various levels as forcing extensions of preceding levels? I'm asking because it's common, in talk of the iterative conception, to allow that sets formed at previous levels can be used as parameters in defining new sets. These parameters sort of reminded me of the names that get introduced when you add constants for your forcing language. -Is there any interesting connection between the levels of $\mathsf{ZF}$ and the forcing extensions of a theory? Can we understand the levels as something akin to forcing extensions of preceding levels? - -REPLY [5 votes]: Not really - they are fundamentally different types of extension. One of the crucial properties of forcing is that a forcing extension $V\subset V[G]$ is an end extension - no set "gets new elements" - but not a top extension (forcing adds no new ordinals). A top extension is one in which every new set has higher rank than every old set - that is, exactly as $V_{\alpha+1}$ is to $V_\alpha$. So in a sense, forcing is really "orthogonal" to the cumulative hierarchy - iterating powersets builds "up," forcing builds "sideways."
- -Building off a comment below: note that nontrivial forcing extensions are never elementary extensions. Now, iterating powerset doesn't yield elementary extensions level-by-level either - that is, $V_\alpha$ is never an elementary substructure of $V_{\alpha+1}$ (since $V_{\alpha+1}$ satisfies "there is a set $X$ of maximal rank," so must $V_\alpha$, so $\alpha=\beta+1$; but then $V_\alpha$ and $V_{\alpha+1}$ disagree on what the largest ordinal is!) - but we can have $V_\alpha\prec V_\beta$ for certain $\alpha<\beta$. -To find such $\alpha$ and $\beta$: Fix $\alpha_0$. Now by iteratively closing under Skolem functions, we can find an $\alpha>\alpha_0$ such that $V_\alpha\prec V$. Fix $\beta_0>\alpha$, and similarly find $\beta>\beta_0$ such that $V_\beta\prec V$. Then we use the general fact from model theory that $$A\preccurlyeq C, B\preccurlyeq C, A\subseteq B\implies A\preccurlyeq B.$$ -(This argument actually takes more than ZFC - in particular, as written it implies the consistency of ZFC! - but not too much more.)<|endoftext|> -TITLE: Phase portrait of system of nonlinear ODEs -QUESTION [5 upvotes]: How can we sketch by hand the phase portrait of a system of nonlinear ODEs like the following? -$$\begin{align} \dot{x} &= 2 - 8x^2-2y^2\\ \dot{y} &= 6xy\end{align}$$ -I can easily find the equilibria, which are -$$\left\{ (0, \pm 1), \left(\pm \frac{1}{2}, 0\right) \right\}$$ -The corresponding stable subspace for $\left(\pm \frac{1}{2}, 0\right)$ is -$$\mbox{span} \left\{ \left(\frac{2i}{\sqrt{6}}, 1 \right), \left(-\frac{2i}{\sqrt{6}}, 1 \right) \right\}$$ -and the unstable subspace for $(0, \pm 1)$ is -$$\mbox{span} \left\{ (0, 1), (1, 0) \right\}$$ -respectively. But I can't see how to use these pieces of information to sketch the phase portrait. Any help would really be appreciated! - -REPLY [3 votes]: The basic process is to find the critical points, evaluate each critical point by finding eigenvalues/eigenvectors using the Jacobian, determine and plot the $x$- and $y$-nullclines, plot some direction fields, and use all of this information to draw the phase portrait. -You can see two different views of this process at this website and notes. -For your particular problem -$$x' = 2 - 8x^2-2y^2 \\ y' = 6xy$$ -We find the critical points by simultaneously solving $x' = 0, y' = 0$, so -$$(x, y) = (0, -1), (0, 1), \left(-\dfrac{1}{2}, 0\right), \left(\dfrac{1}{2}, 0\right)$$ -The Jacobian is -$$J(x, y) = \begin{bmatrix}\dfrac{\partial x'}{\partial x} & \dfrac{\partial x'}{\partial y}\\\dfrac{\partial y'}{\partial x} & \dfrac{\partial y'}{\partial y}\end{bmatrix} = \begin{bmatrix}-16 x & -4y\\6y & 6x\end{bmatrix}$$ -Evaluating the eigenvalues/eigenvectors at each critical point: -$J(0, -1) \implies \lambda_{1,2} = \pm 2 i \sqrt{6}, v_{1,2} = \left(\mp i \sqrt{\frac{2}{3}}, 1\right) \implies$ centre/spiral behavior (purely imaginary eigenvalues) -$J(0, 1) \implies \lambda_{1,2} = \pm 2 i \sqrt{6}, v_{1,2} = \left(\pm i \sqrt{\frac{2}{3}}, 1\right) \implies$ centre/spiral behavior (purely imaginary eigenvalues) -$J(-\frac{1}{2}, 0) \implies \lambda_{1,2} = (8, -3), v_{1} = (1,0), v_2 = (0, 1) \implies$ saddle -$J(\frac{1}{2}, 0) \implies \lambda_{1,2} = (-8, 3), v_{1} = (1,0), v_2 = (0, 1) \implies$ saddle -Using all of the above (critical points, eigenvalues/eigenvectors, the $x$- and $y$-nullclines, direction fields, etc.), you can now sketch the phase portrait. Exercise - make sure to add direction fields from the two sets of notes linked above so you understand how to do that.
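-As a quick cross-check of the eigenvalue computations, here is a minimal Python sketch (it assumes numpy is available):
-import numpy as np
-def jacobian(x, y):
-    return np.array([[-16*x, -4*y],
-                     [  6*y,  6*x]])
-for (x, y) in [(0, -1), (0, 1), (-0.5, 0), (0.5, 0)]:
-    vals, vecs = np.linalg.eig(jacobian(x, y))
-    print((x, y), np.round(vals, 4))
-#(0, +-1) give +-2*sqrt(6)*i (purely imaginary); (+-1/2, 0) give {8, -3} and {-8, 3}.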
The phase portrait will look like a combination of these local pictures around the four equilibria (figure omitted).<|endoftext|> -TITLE: Convergent subsequence of $\sin(n)$ -QUESTION [7 upvotes]: According to the Bolzano-Weierstrass theorem, every bounded sequence in $\mathbb R$ has a convergent subsequence. Can anybody construct a convergent subsequence of $\sin{n}$? - -REPLY [3 votes]: Here's a semi-explicit one. The convergents of the continued fraction of $\pi$ are a sequence of rational approximations $p_n/q_n$ with $p_n$, $q_n$ positive integers tending to $\infty$, -$$ \left| \pi - \dfrac{p_n}{q_n}\right| < \dfrac{1}{q_n^2}$$ -and thus $$|\sin(p_n)| = |\sin(p_n - \pi q_n)| \le |p_n - \pi q_n| < 1/q_n \to 0 \ \text{as}\ n \to \infty$$<|endoftext|> -TITLE: Why do we care about two subgroups being conjugate? -QUESTION [40 upvotes]: In classifications of the subgroups of a given group, results are often stated up to conjugacy. I would like to know why this is. -More generally, I don't understand why "conjugacy" is an equivalence relation we care about, beyond the fact that it is stronger than "abstractly isomorphic." -My vague understanding is that while "abstractly isomorphic" is the correct "intrinsic" notion of isomorphism, "conjugate" is the correct "extrinsic" notion. But why have we designated this notion of equivalence, and not some other one? -To receive a satisfactory answer, let me be slightly more precise: - -Question: Given two subgroups $H_1, H_2$ of a given group $G$, what properties are preserved under conjugacy that may break under general abstract isomorphism? - -For example, is it true that $G/H_1 \cong G/H_2$ iff $H_1$ is conjugate to $H_2$? Or, is it true that two subgroups $H_1, H_2 \leq \text{GL}(V)$ are conjugate iff their representations are isomorphic? I'm sure these are easy questions to answer -- admittedly, I haven't thought fully about either -- but I raise them by way of example. What are other such equivalent characterizations? - -REPLY [2 votes]: This has already been mentioned by arctic tern, mainly through examples, and is implicit in the answers of others, but just to make it more explicit: there is a close connection between the action of a group $G$ on a set $\Omega$ and its point stabilizers $G_{\alpha} = \{g \in G: \alpha^g = \alpha\}$ for $\alpha \in \Omega$. The action on each orbit is equivalent to the action of $G$ on the cosets of a point stabilizer, and the point stabilizers on each orbit are conjugate, with $G_{\alpha}^g = G_{\alpha^g}$. If $G$ acts transitively, this is key to the observation that for a point $\alpha$ the points fixed by $G_{\alpha}$ are $\alpha^{N_G(G_{\alpha})}$; by orbit-stabilizer we have $|N_G(G_{\alpha}) : G_{\alpha}|$ points that are fixed by $G_{\alpha}$, and each element that acts on the fixed points of $G_{\alpha}$ normalizes $G_{\alpha}$, hence conjugation and fixed points are closely related. Taking this further, if we have a single $g \in G_{\alpha}$, the total number of points fixed by $g$ is $|N_G(G_{\alpha}) : G_{\alpha}| \cdot k$, where $k$ denotes the number of distinct conjugates of $G_{\alpha}$ that contain $g$. If for example different point stabilizers intersect trivially (a situation that is not that uncommon), $\alpha^{N_G(G_{\alpha})}$ is precisely the set fixed by every single $g \in G_{\alpha}$. This is sometimes useful if you look at groups on which several restrictions on the fixed points of nontrivial elements are imposed. -Also, as mentioned by Qiaochu Yuan, two subgroups are conjugate iff the actions on their cosets are equivalent (a small computational illustration of the stabilizer identity follows).
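-A tiny concrete illustration of the stabilizer identity, as a sketch with $S_3$ acting on $\{0,1,2\}$ (here permutations act on the left, so the identity reads $g G_a g^{-1} = G_{g(a)}$):
-from itertools import permutations
-G = list(permutations(range(3)))
-def compose(p, q):
-    #(p*q)(i) = p(q(i))
-    return tuple(p[q[i]] for i in range(3))
-def inverse(p):
-    q = [0, 0, 0]
-    for i in range(3):
-        q[p[i]] = i
-    return tuple(q)
-def stab(a):
-    return {p for p in G if p[a] == a}
-for g in G:
-    for a in range(3):
-        assert {compose(g, compose(s, inverse(g))) for s in stab(a)} == stab(g[a])
-print("g G_a g^(-1) = G_(g a) holds throughout S_3")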
Further, two transitive actions of $G$ on two sets $\Omega, \Gamma$ are equivalent if every point stabilizer of the action on $\Omega$ is also a point stabilizer of the action on $\Gamma$, and vice versa; this is just another way of saying the same thing, by the fundamental relation between conjugate point stabilizers and the points fixed by elements of $G$.<|endoftext|> -TITLE: Convergence of Martingale: Exercise (Durrett 5.5.7) -QUESTION [6 upvotes]: Exercise 5.5.7 in Durrett's "Probability Theory and Examples 4ed" states: -Let $X_n \in [0 , 1] $ be adapted to $\mathcal{F_n}$. Let $\alpha$, $\beta>0$ with $\alpha+\beta=1$ and suppose: $$P(X_{n+1}=\alpha+\beta X_n \mid\mathcal{F_n})=X_n \qquad\qquad P(X_{n+1}=\beta X_n \mid\mathcal{F_n})=1-X_n$$ Show $P\left(\lim_{n}X_n=0 \: \text{or}\: 1\right)=1 $ and if $X_0=\theta$ then $P\left(\lim_{n}X_n=1\right)=\theta $. -I know that, first of all, I should show that $X_n$ is a martingale with respect to $\mathcal{F_n}$, i.e.: -$$E[X_{n+1}\mid \mathcal{F_n}]=X_n \qquad \forall n \in \mathbb{N}$$ -\begin{align}E[X_{n+1}\mid \mathcal{F_n}]&=E[X_{n+1}[I(X_{n+1}=\alpha+\beta X_n)+I(X_{n+1}=\beta X_n)]\:\mid \mathcal{F_n}]\\&=E[(\alpha+\beta X_n)I(X_{n+1}=\alpha+\beta X_n)+\beta X_nI(X_{n+1}=\beta X_n)\:\mid \mathcal{F_n}]\\&=(\alpha+\beta X_n)E[I(X_{n+1}=\alpha+\beta X_n)\:\mid \mathcal{F_n}]+\beta X_nE[I(X_{n+1}=\beta X_n)\:\mid \mathcal{F_n}]\\&=(\alpha+\beta X_n)P[X_{n+1}=\alpha+\beta X_n\:\mid \mathcal{F_n}]+\beta X_nP[X_{n+1}=\beta X_n\:\mid \mathcal{F_n}]\\&=(\alpha+\beta X_n)X_n+\beta X_n(1-X_n)=\alpha X_n+\beta X_n^2+\beta X_n-\beta X_n^2=X_n(\alpha+\beta)\\&=X_n\end{align} So $X_n$ is a martingale. Now I am confused about which theorem to use, and why I can use it, to reach what the problem requests. I would be thankful if anyone could explain the rest of the solution in detail. Thanks in advance. - -REPLY [3 votes]: As you already stated, the sequence $(X_n)_n$ is a positive martingale, thus it converges almost surely. Moreover, it is uniformly integrable, thus it also converges in $L_1$. So, denote by $X$ its limit and write -$$ -B_n = \{ X_n = \alpha + \beta X_{n-1} \}; -$$ -$$ -B = \limsup_n B_n = \{ B_n \text{ occurs i.o.}\} -$$ -Take $\omega \in B$ and suppose $(X_n(\omega))_n$ converges to $X(\omega)$. Now, extract a subsequence $(X_{n_k}(\omega))_k$ from $(X_n(\omega))_n$ such that $X_{n_k}(\omega) = \alpha + \beta X_{n_k-1}(\omega)$ for every $k$. -Well, this subsequence must converge to $X(\omega)$, right? So, it is a Cauchy sequence, i.e., there exists $k_0(\omega)$ such that, for all $k_1,k_2 \ge k_0$ we have -$$|X_{n_{k_1}}(\omega) - X_{n_{k_2}}(\omega)| \le \varepsilon.$$ -But, choosing $k_1$ large enough, $X_{n_{k_1}}(\omega) = \alpha + \beta X_{n_{k_1}-1}(\omega) $, thus -$$ -|X_{n_{k_1}}(\omega) - X_{n_{k_1} - 1}(\omega)| = |\alpha + \beta X_{n_{k_1}-1}(\omega) - X_{n_{k_1}-1}(\omega)| \ge \alpha - \alpha X_{n_{k_1}-1}(\omega) -$$ -since $\alpha + \beta = 1$. Thus, our subsequence should converge to $1$. A similar argument for the occurrence of $B_n^c$ i.o. implies that $X_n \longrightarrow 0 $ if $\omega \in \limsup B_n^c$. -Thus, $X = 1$ on $\limsup_n B_n$ and $X = 0$ on $\limsup_n B_n^c$; since these two events together cover the whole space, this proves $X \in \{ 0,1 \}$ almost surely. -The last part is @Stef 's comment combined with convergence in $L_1$: -$$ -E[X] = 1\cdot P(X=1) = \lim_nE[X_n] = \theta. -$$ -Honestly, I would love to see a more probabilistic argument instead of the analytic one I just gave. Maybe using Levy's 0-1 law? If you have another approach, please share it with me (a small simulation sketch of the dynamics follows).
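-A quick Monte Carlo sketch of these dynamics (the parameters $\alpha=0.4$, $\theta=0.3$ and the run lengths are arbitrary choices):
-import random
-def run(theta, alpha=0.4, n=2000):
-    beta = 1.0 - alpha
-    x = theta
-    for _ in range(n):
-        x = alpha + beta*x if random.random() < x else beta*x
-    return x
-theta, trials = 0.3, 20000
-ones = sum(run(theta) > 0.5 for _ in range(trials))
-print(ones / float(trials))   #empirical P(X_n -> 1), should be close to theta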
-Hope this helps!<|endoftext|> -TITLE: Why is the polynomial $2x^2 + 4$ reducible over $\mathbb{Z}$ -QUESTION [8 upvotes]: I was going through the book of Gallian, Contemporary Abstract Algebra, and got the following result: - -The polynomial $f(x) = 2x^2 + 4$ is irreducible over $\mathbb{Q}$ but reducible over $\mathbb{Z}$, since $2x^2 + 4 = 2(x^2 + 2)$ and neither $2$ nor $x^2 + 2$ is a unit in $\mathbb{Z}[x]$. - -I do not see how the polynomial is reducible over $\mathbb Z$. Can anyone explain this point? Thanks for the help. - -REPLY [11 votes]: The element $2$ is a unit in $\mathbb Q[x]$ (it has inverse $\frac 12$) but not in $\mathbb Z[x]$ (since $\frac 12\notin \mathbb Z[x]$). We can write $$f(x) = 2(x^2+2).$$ Since a polynomial $f$ is reducible iff it can be written as the product of two non-units, this means that $f$ is reducible in $\mathbb Z[x]$. However, this factorisation does not show that $f$ is reducible in $\mathbb Q[x]$ since $2$ is a unit. One can show in other ways (e.g. because $f$ has no rational roots) that $f$ is irreducible over $\mathbb Q$. -This condition is not arbitrary: it means that the ideal $(2x^2+4)$ is prime in $\mathbb Q[x]$ (and is equal to the maximal ideal $(x^2+2)$), but it is not prime in $\mathbb Z[x]$, since $2x^2+4\in(2x^2+4)$, but $$2,x^2+2\notin (2x^2+4).$$<|endoftext|> -TITLE: How do I prove that the trace of a matrix to its $k$th power is equal to the sum of its eigenvalues raised to the $k$th power? -QUESTION [14 upvotes]: Let $A$ be an $n \times n$ matrix with eigenvalues $\lambda_{1},\ldots,\lambda_{n}$. How do I prove that tr$(A^k) = \sum_{i=1}^{n}\lambda_{i}^{k}$? - -REPLY [10 votes]: $tr(A)=\sum \lambda_i$ -If $\lambda_i$ is an eigenvalue of $A$, then $\lambda_i^k$ is an eigenvalue of $A^k$. This mapping preserves multiplicities. - -The first one is a classic result, easily deduced from the characteristic polynomial. The second one is a little trickier (if there are repeated eigenvalues).<|endoftext|> -TITLE: Tiling of a $9\times 7$ rectangle -QUESTION [11 upvotes]: Can a rectangle $9\times 7$ be tiled by "L-blocks" (an L-block consists of $3$ unit squares)? -Although the problem seems to be easy, coloring didn't help me. The general theory is interesting, but I'm looking for an elementary and relatively simple solution (suitable for a high school olympiad). - -REPLY [6 votes]: Here's Python code to find the solutions for this puzzle for any grid size. It outputs the solutions both as text and as PPM files. This code was tested on Python 2.6.6 but it should run correctly on Python 2.5 or any later version. -#! /usr/bin/env python - -''' Knuth's Algorithm X for the exact cover problem, -using dicts instead of doubly linked circular lists.
-Written by Ali Assaf - -From http://www.cs.mcgill.ca/~aassaf9/python/algorithm_x.html -and http://www.cs.mcgill.ca/~aassaf9/python/sudoku.txt - -Converted to Python 2.5+ syntax by PM 2Ring 2013.01.27 - -Trominoes version -Fill a rectangular grid with L-trominoes -22656 solutions for 9 x 7 - -See http://math.stackexchange.com/q/1580934/207316 - -Now with PPM output in 4 colours; graph colouring also done via Algorithm X -''' - -from __future__ import print_function -import sys -from itertools import product -from operator import itemgetter - -#Algorithm X functions -def solve(X, Y, solution): - if not X: - yield list(solution) - else: - c = min(X, key=lambda c: len(X[c])) - for r in list(X[c]): - solution.append(r) - cols = select(X, Y, r) - for s in solve(X, Y, solution): - yield s - deselect(X, Y, r, cols) - solution.pop() - -def select(X, Y, r): - cols = [] - for j in Y[r]: - for i in X[j]: - for k in Y[i]: - if k != j: - X[k].remove(i) - cols.append(X.pop(j)) - return cols - -def deselect(X, Y, r, cols): - for j in reversed(Y[r]): - X[j] = cols.pop() - for i in X[j]: - for k in Y[i]: - if k != j: - X[k].add(i) - -#Invert subset collection -def exact_cover(X, Y): - newX = dict((j, set()) for j in X) - for i, row in Y.items(): - for j in row: - newX[j].add(i) - return newX - -#---------------------------------------------------------------------- - -#Solve tromino puzzle -def fill_grid(width, height): - #A 2x2 block of grid cells - cells =((0,0), (1,0), (0,1), (1,1)) - - #Set to cover - X = product(range(width), range(height)) - - #Subsets to cover X with. All possible L-block at each grid location - Y = {} - for x, y, i in product(range(width - 1), range(height - 1), range(4)): - #Turn the 2x2 block into an L-block by dropping the cell at j==i - Y[(x, y, i)] = [(x+u, y+v) for j,(u,v) in enumerate(cells) if j != i] - - #Invert subset collection - X = exact_cover(X, Y) - - #An empty grid to hold solutions - empty = [[0] * width for _ in range(height)] - - keyfunc = itemgetter(1, 0, 2) - for s in solve(X, Y, []): - #Convert cell tuple list into grid form - s.sort(key=keyfunc) - grid = empty[:] - for k, (x, y, i) in enumerate(s): - for j, (u,v) in enumerate(cells): - if j != i: - grid[y+v][x+u] = k - yield grid - -#---------------------------------------------------------------------- - -#Colour a graph given its nodes and edges -def colour_map(nodes, edges, ncolours=4): - colours = range(ncolours) - - #The edges that meet each node - node_edges = dict((n, set()) for n in nodes) - for e in edges: - n0, n1 = e - node_edges[n0].add(e) - node_edges[n1].add(e) - - for n in nodes: - node_edges[n] = list(node_edges[n]) - - #Set to cover - coloured_edges = list(product(colours, edges)) - X = nodes + coloured_edges - - #Subsets to cover X with - Y = {} - #Primary rows - for n in nodes: - ne = node_edges[n] - for c in colours: - Y[(n, c)] = [n] + [(c, e) for e in ne] - - #Dummy rows - for i, ce in enumerate(coloured_edges): - Y[i] = [ce] - - X = exact_cover(X, Y) - - #Set first two nodes - partial = [(nodes[0], 0), (nodes[1], 1)] - for s in partial: - select(X, Y, s) - - for s in solve(X, Y, []): - s = partial + [u for u in s if not isinstance(u, int)] - s.sort() - yield s - -#Extract the nodes and edges from a grid -def gridtograph(grid): - gridheight = len(grid) - gridwidth = len(grid[0]) - - #Find regions. 
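-    #Each distinct tromino index appearing in the solved grid becomes a node of the adjacency graph; edges will join indices of trominoes that share a grid edge.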
- nodes = list(set(c for row in grid for c in row)) - nodes.sort() - #print 'nodes =', nodes - - #Find neighbours - #Verticals - edges = set() - for y in range(gridheight): - for x in range(gridwidth - 1): - c0, c1 = grid[y][x], grid[y][x+1] - if c0 != c1 and (c1, c0) not in edges: - edges.add((c0, c1)) - - #Horizontals - for y in range(gridheight - 1): - for x in range(gridwidth): - c0, c1 = grid[y][x], grid[y+1][x] - if c0 != c1 and (c1, c0) not in edges: - edges.add((c0, c1)) - - edges = list(edges) - edges.sort() - #print 'edges =', edges - return nodes, edges - -#---------------------------------------------------------------------- - -def show_grid(grid, strwidth): - for row in grid: - print(' '.join([str(k).zfill(strwidth) for k in row])) - print() - -pal = ( - b'\xff\x00\x00', - b'\x00\xff\x00', - b'\x00\x00\xff', - b'\xff\xff\x00', -) - -#---------------------------------------------------------------------- - -def main(): - if len(sys.argv) < 3: - print ("Solve tromino grid puzzle\nUsage:\n" - "%s width height [max_solutions]" % sys.argv[0]) - exit() - - width = int(sys.argv[1]) - height = int(sys.argv[2]) - maxcount = int(sys.argv[3]) if len(sys.argv) > 3 else 0 - ncolours = 4 - - strwidth = len(str(width * height // 3 - 1)) - - #Constants used for PPM output - fnamebase = 'grid_%dx%d' % (width, height) - scale = 16 - scalerange = range(scale) - imgwidth, imgheight = scale * width, scale * height - - print('\nSolutions:') - count = 1 - try: - for grid in fill_grid(width, height): - print('\n%2d:' % count) - show_grid(grid, strwidth) - nodes, edges = gridtograph(grid) - - #Find a colouring for this grid - gen = colour_map(nodes, edges, ncolours=ncolours) - solution = next(gen) - colourmap = dict(solution) - #print colourmap - grid = [[colourmap[u] for u in row] for row in grid] - #show_grid(grid, 1) - - #Convert to PPM - data = [] - for row in grid: - row = [u for u in row for _ in scalerange] - data.extend(row * scale) - ppmstr = b''.join([pal[u] for u in data]) - - fname = '%s_%03d.ppm' % (fnamebase, count) - with open(fname, 'wb') as f: - f.write(b'P6\n%d %d\n255\n%s' % (imgwidth, imgheight, ppmstr)) - print('Saved to', fname) - - count += 1 - if maxcount and count > maxcount: - break - print(count - 1) - except KeyboardInterrupt: - print('\nAborted') - - -if __name__ == '__main__': - main() - -And here are the first 100 solutions for the 9 x 7 grid as an animated GIF. - -These images all use 4 colours, however, it is possible to colour some of them with 3 colours. -(If the image doesn't animate for you, try your browser's "View Image Info" context menu item).<|endoftext|> -TITLE: Why do we take the closure of the support? -QUESTION [8 upvotes]: In topology and analysis we define the support of a continuous real function $f:X\rightarrow \mathbb R$ to be $ \left\{ x\in X:f(x)\neq 0\right\}$. This is the complement of the fiber $f^{-1} \left\{0 \right\}$. So it looks like the support is always an open set. Why then do we take its closure? -In algebraic geometry, if we look at elements of a ring as regular functions, then it's tempting to define their support the same way, which yields $\operatorname{supp}f= \left\{\mathfrak p\in \operatorname{Spec}R:f\notin \mathfrak p \right\}$. But these are exactly the basic open sets of the Zariski topology. I'm just trying to understand whether this is not a healthy way to see things because I've been told "supports should be closed". 
- -REPLY [5 votes]: Given a scheme $X$ there are two notions of support for $f\in \mathcal O(X)$: -1) The first definition is the set of points $$\operatorname {supp }(f)= \left\{ x\in X:f_x\neq 0_x\in \mathcal O_{X,x}\right\}$$ where the germ of $f$ at $x$ is not zero. -This support is automatically closed: no need to take a closure. -2) The second definition is the good old zero set of $f$ defined by $$V(f)=\{ x\in X:f[x]=\operatorname {class}(f_x)= 0\in \kappa (x)=\mathcal O_{X,x}/ \mathfrak m_x\}$$ It is also automatically closed. -3) The relation between these closed subsets is $$ V(f)\subset \operatorname {supp }(f)$$ with strict inclusion in general: -For a simple example, take $X=\mathbb A^1_\mathbb C=\operatorname {Spec}\mathbb C[T],\: f=T-17$. -Then for $a\in \mathbb C$ and $x_a=(T-a)$ we have $f[x_a]=a-17\in \kappa(x_a)=\mathbb C$, and for the generic point $\eta=(0)$ we have $f[\eta]=T-17\in \kappa(\eta)=\operatorname {Frac}(\frac {\mathbb C[T]}{(0)})=\mathbb C(T)$. -Thus $f[x_{17}]=0$ and $f[P]\neq 0$ for all other $P\in \mathbb A^1_\mathbb C$, so that $$V(f)=\{x_{17}\}\subsetneq \operatorname {supp }(f)=\mathbb A^1_\mathbb C$$<|endoftext|> -TITLE: Exponential integral: something is wrong. -QUESTION [12 upvotes]: Consider the function -$$ -E(z)=\int_{-\infty}^z\frac{e^t}{t}dt.\quad (1) -$$ -Substituting $t\mapsto -u$ one obtains -$$ -E(z)=-\int_{-z}^{\infty}\frac{e^{-u}}{u}du\equiv Ei(z).\quad (2) -$$ -It is already surprising that the ugly definition (2) and not (1) is usually used for $Ei(z)$. Much worse is the fact that both definitions lead to different results upon expanding the functions. For this we use the usual trick (a standard branch cut along the negative real semi-axis is assumed, if necessary): -$$ -Ei(z)=-\int_{-z}^{\infty}\frac{e^{-u}}{u}du -+\int_0^{-z}\frac{1-e^{-u}}{u}du-\int_0^{-z}\frac{1-e^{-u}}{u}du\\ -=\left[\left.-e^{-u}\ln u\right|_{-z}^\infty-\int_{-z}^\infty e^{-u}\ln u du\right]+\left[\left.(1-e^{-u})\ln u\right|_0^{-z}-\int_0^{-z}e^{-u}\ln u du\right]+\int_0^z\frac{e^t-1}{t}dt\\ -=\ln(-z)+\gamma+\sum_{n=1}^\infty\frac{z^n}{n!n}. -$$ -Applying the same trick for (1) one obtains: -$$ -E(z)=\int_{-\infty}^z\frac{e^{u}}{u}du- -\int_0^{z}\frac{e^{u}-1}{u}du+\int_0^{z}\frac{e^{u}-1}{u}du\\ -=\left[\left.e^{u}\ln u\right|_{-\infty}^z-\int_{-\infty}^z e^{u}\ln u du\right]-\left[\left.(e^{u}-1)\ln u\right|_0^{z}-\int_0^{z}e^{u}\ln u du\right]+\int_0^z\frac{e^u-1}{u}du\\ -=\ln(z)-\int_{-\infty}^0e^u\ln u du+\sum_{n=1}^\infty\frac{z^n}{n!n}= -\ln(z)-\int_0^{\infty}e^{-t}\ln(-t)dt+\sum_{n=1}^\infty\frac{z^n}{n!n}\\ -=\ln(z)-\int_0^{\infty}e^{-t}(\ln t+i\pi)dt+\sum_{n=1}^\infty\frac{z^n}{n!n} -=\ln(z)-i\pi+\gamma+\sum_{n=1}^\infty\frac{z^n}{n!n}. -$$ -The problem is that the equality $\ln(-z)=\ln(z)-i\pi$ with the usual restriction $-\pi<\arg(z)\le \pi$ is valid only in the upper complex half-plane (including the negative real semi-axis). In the lower complex half-plane (including the positive real semi-axis) the two values differ by $2\pi i$. -Which result is correct? Where is the error hidden that results in the contradiction? - -REPLY [2 votes]: It seems like a substitution error. Although (1) is equal to (2), when you replace $t$ with $u$ in the first line of the last derivation, $u=-t$, so they are not equal.
This results in a false statement when you try to set the end products equal to each other, because the variables are used inconsistently.<|endoftext|> -TITLE: Find the maximum of the $S=|a_1-b_1|+|a_2-b_2|+\cdots+|a_{31}-b_{31}|$ -QUESTION [13 upvotes]: Let $a_1,a_2,\cdots, a_{31};b_1,b_2, \cdots, b_{31}$ be positive integers such that -$a_1< a_2<\cdots< a_{31}\leq2015$, $b_1< b_2<\cdots< b_{31}\leq 2015$ and $a_1+a_2+\cdots+a_{31}=b_1+b_2+\cdots+b_{31}$. Find the maximum of $S$. - -REPLY: Since the two sums are equal, $$\sum_{a_i>b_i}(a_i-b_i)=\sum_{a_i>b_i}(a_i-b_i)-\sum a_i+\sum b_i=\sum_{a_i\leq b_i}(b_i-a_i)$$ -The original sum is now: -$$S=\sum_{a_i>b_i}(a_i-b_i)+\sum_{a_i\leq b_i}(b_i-a_i)=\sum_{c_i>d_i}(c_i-d_i)+\sum_{c_i\leq d_i}(d_i-c_i)$$ -Since both sums are the same, we can take $(2-\lambda)$ of the first sum and $\lambda$ of the second sum and the sum will still be the same. Let $k$ be the number of terms in the second sum. $c_i$ has the nice property such that $c_1$ to $c_k$ are in the second sum. Here, choose $\lambda=\frac{2(31-k)}{31}$. The motivation for this is that we want to take them in a way such that terms cancel nicely later, so we take the sums in the ratio $k:31-k$. -$$S=\frac{2k}{31}\sum_{k<|endoftext|> -TITLE: Sobolev embedding into $L^\infty$ -QUESTION [6 upvotes]: I heard that $W^{n,1}(\mathbb R^n)\hookrightarrow L^\infty(\mathbb R^n)$. -I can only prove that $W^{1,1}(\mathbb R)\hookrightarrow L^\infty(\mathbb R)$ by the Newton-Leibniz formula; how does one prove this for general $n$? -Thanks! - -REPLY [5 votes]: If $u\in C_0^\infty(\mathbb{R}^n)$, then $$u(x_1,\cdots,x_n)=\int_{-\infty}^{x_1}\cdots \int_{-\infty}^{x_n} \frac{\partial ^n u(y_1,\cdots,y_n)}{\partial y_n\cdots \partial y_1}dy_n\cdots dy_1,$$ -which implies that -$$\tag{1}\|u\|_\infty\le \|u\|_{n,1}.$$ -Now, for any $u\in W^{n,1}(\mathbb{R}^n)$, take a sequence $u_i\in C_0^\infty(\mathbb{R}^n)$ which converges to $u$ in $W^{n,1}(\mathbb{R}^n)$. From $(1)$, we have that $$\|u_i-u_j\|_\infty\le \|u_i-u_j\|_{n,1},$$ -hence the $u_i$ converge to $u$ in $L^\infty(\mathbb{R}^n)$ and $$\|u\|_\infty\le \|u\|_{n,1}.$$<|endoftext|> -TITLE: Irreducibility of $f(x)=x^4+3x^3-9x^2+7x+27$ -QUESTION [13 upvotes]: The question at hand is: - -Is $x^4+3x^3-9x^2+7x+27$ irreducible in $\Bbb Q$ and/or $\Bbb Z$. - -This is for an exam: the reasoning should be simple, and no calculators are at hand. Clearly, if there are rational roots, they are integers by the rational root theorem, since $f$ is monic. -I am aware of - -The rational root theorem, which narrows down the options to $\pm1,\pm3,\pm9,\pm27$, and clearly none of these is a root. -Eisenstein's irreducibility criterion, which does not help here, thanks to the coefficient $7$ of $x$. -Cohn's irreducibility test: $12197$ is a prime, but too large a number to prove that it's a prime by hand. -Descartes' rule of signs: at most 2 (or 0) positive/negative roots. Close enough. - -None of these are helping me in any way, since I can't use a calculator. -These are the solutions I tried: - -Alpha says all roots are complex. This made me search for some way to determine whether all roots are complex, but I got nowhere. -I checked whether there are any easy prime-generating functions like Euler's, hoping that, if lucky, 12197 falls in such a list; the best I got is Euler's $n^2+n+41$, $1\le n<40$, whose biggest value is $1601$, which does not help. - -Are there any better ways to determine if this polynomial is irreducible over $\Bbb Q$, without using calculators? - -REPLY [2 votes]: Irreducibility over $\mathbb Q$ is the same as over $\mathbb Z$ (Gauss lemma). There is no root (real or complex) not exceeding $1$ in absolute value ($27>1+3+9+7$).
Thus, if there is a factorization into two polynomials, the free terms must be $\pm 3$ and $\pm 9$ (otherwise, by Vieta, the factor with free term $\pm 1$ would have a root with absolute value $\le 1$). However, then the coefficient of $x$ would have to be divisible by $3$, and it is not.<|endoftext|> -TITLE: Is there standard terminology to describe the not-quite-a-limit behavior of ${\tan( \log x) \over x}$ as $x$ approaches infinity? -QUESTION [6 upvotes]: Suppose I want to describe the long term behavior of ${\tan(\log x) \over x}$ as $x$ increases towards positive real infinity. -Now, -$$\lim_{x \rightarrow \infty}{\tan(\log x) \over x}$$ -obviously doesn't exist. So it would be wrong to say its limit is 0. -But in some very slightly looser sense, the term obviously approaches 0 aside from the very occasional vertical asymptote. If you were to pick a point at "random" far, far down the number line (I'm being very imprecise here, I know), it would be within $\epsilon$ of 0 with a probability approaching 1 as the random range you were pulling from got larger. Various summability methods would also make this fact clear. -Is there either standard terminology for getting this idea across, or standard notation for expressing it? - -REPLY [6 votes]: Following Michael Burr's suggestion in the comments, if we let -$$ -f(x) = \frac{\tan(\log x)}{x} -$$ -then we can define a running measure of the $x$'s for which $|f(x)| > a$ for some fixed $a > 0$: -$$ -m_a(x) = \mu\Bigl( \{ t \in \mathbb R : 1 < t < x \text{ and } |f(t)| > a \} \Bigr). -$$ - -Claim. - $$ -m_a(x) \sim \frac{2}{\pi a} \log x -$$ - as $x \to \infty$. - -This gives us a notion of the density of the "bad" intervals (the intervals where $f(x)$ is not small) in the real line. For instance, if we had had -$$ -\lim_{x \to \infty} \frac{m_a(x)}{x} = \frac{1}{2} -$$ -we would interpret this to mean that the bad intervals take up roughly half of the real line. In our case $\lim_{x \to \infty} m_a(x)/x = 0$, and it tends to zero rather quickly too, so we can interpret this to mean that, proportionally, the bad intervals are pretty insignificant. -Proof sketch. To calculate the width of the intervals where $|f(x)| > a$ we can start by calculating the points at which $\tan(\log x) = \pm ax$. Away from its poles $f(x)$ will be very close to zero for large $x$, so we only need to investigate neighborhoods of the poles. -Following this idea, the method in this answer can be used to show that, for large $x$, the graph of $y = \tan(\log x)$ - -intersects the graph of $y=ax$ at $x = e^{(2n+1)\pi/2} - \frac{1}{a} + o(1)$ and -intersects the graph of $y=-ax$ at $x = e^{(2n+1)\pi/2} + \frac{1}{a} + o(1)$ - -for $n \in \mathbb N$ with $n \to \infty$. We observe that for large $x$ the length of the intervals for which $|f(x)| > a$ approaches $2/a$, and so for -$$ -e^{(2n+1)\pi/2} + \frac{1}{a} + \epsilon < x < e^{(2n+3)\pi/2} - \frac{1}{a} - \epsilon \tag{1} -$$ -and $n$ large we have -$$ -m_a(x) \approx \sum_{k=0}^{n} \frac{2}{a} \approx \frac{2n}{a}. \tag{2} -$$ -For $x$ in the range in $(1)$ we have $n = \frac{\log x}{\pi} + O(1)$, and thus $(2)$ becomes -$$ -m_a(x) \approx \frac{2}{\pi a} \log x. -$$ -The claim follows from this estimate.<|endoftext|> -TITLE: What is the logical idea of self orthogonality? -QUESTION [6 upvotes]: I know how to calculate the orthogonal trajectory of a given family of curves. And it is said in my textbook that if the orthogonal trajectory of a family of curves is the family itself, then we say that the family of curves is self-orthogonal.
As an example, the orthogonal trajectory of the family of parabolas $$y^2=4a\left(x+a\right)$$ is the same family. But how is this possible? An orthogonal trajectory cuts every member of the given family normally. I cannot get an intuitive idea of self-orthogonality. I tried to graph the above equation with suitable $a$, but I cannot see how the same family is its own orthogonal trajectory. I also tried an algebraic approach. If $m$ is the slope of such a function at a point $x$, then $\frac{-1}{m}$ is also its slope at $x$. This gives $$m^2=-1\\ -\implies {m=i},$$ -thus the slope I get is a complex value for such functions. But I cannot accept it. Where is my mistake? Can anyone help me? - -REPLY [3 votes]: Note that if $m$ is the slope of a function in the given family, then $\frac{-1}{m}$ is the slope, at the same $x$, of another function of the same family (this is your mistake, I suppose). -See the figure (omitted here), where this is illustrated for the values $a=1$ and $a=-1$ for the functions of your family (the semiplane $y<0$ is symmetric). The two functions are: -$$ -a=1 \rightarrow y^2=4(x+1) \rightarrow y=2\sqrt{x+1} -$$ -$$ -a=-1 \rightarrow y^2=4(1-x) \rightarrow y=2\sqrt{1-x} -$$ -and they have orthogonal tangents at $x=0$. -For an analytic approach: -you can see that $m=\frac{a}{\sqrt{a(x+a)}}$, so the condition of orthogonality for two curves of the family with parameters $a$ and $b$ is: -$$ -\frac{a}{\sqrt{a(x+a)}}=-\frac{\sqrt{b(x+b)}}{b} -$$ -Squaring, with the condition $ab<0$, this becomes: -$$ -x^2+(a+b)x=0 -$$ -and this means that if the two curves have a common point at $x=0$, at this point they are orthogonal, or they can be orthogonal at $x=-(a+b)$.<|endoftext|> -TITLE: Expected value for the number of tries to draw the black ball from the bag -QUESTION [5 upvotes]: We have a bag with $4$ white balls and $1$ black ball. We are drawing balls without replacement. Find the expected value of the number of tries needed to draw the black ball from the bag. - -Progress. The probability of drawing the black ball on the first trial is $1/5$. The problem is how to find the probability of drawing the black ball on the $2$nd, $3$rd, $ \ldots, 5$th trial. When I know all these probabilities I can find the expected value as $1\cdot(1/5) + 2 p_2 + \dots + 5 p_5$. - -REPLY [5 votes]: It is as if you will create a word with $4$ W's and $1$ B. For example $BWWWW$ or $WWWBW$ etc. How many such words can you create? Answer: $5$, and any such word is equally likely. -In other words: the probability that the black ball will be drawn at any place - not only the first - is equal to $1/5$. Not conditional probability, but probability. Do not get confused by the fact that if you have drawn $4$ white balls, then the probability of drawing the black ball in the fifth draw is $1$; this is a conditional probability. "A priori" it is equally likely that the black ball will be drawn at any given point from $1$ to $5$. So, $$E[X]=\frac{1}{5}\cdot 1+ \frac{1}{5}\cdot2+\ldots+\frac15\cdot 5=\frac15(1+2+3+4+5)=3 $$ (where $X$ denotes the number of trials).<|endoftext|> -TITLE: Why is this the equation of the tangent plane? -QUESTION [5 upvotes]: I want to find the equation of the tangent plane of the surface patch $\sigma (r, \theta)=(r\cosh \theta , r\sinh \theta , r^2)$ at the point $(1,0,1)$. -I have done the following: -The point $(1,0,1)$ corresponds to $\sigma (1,0)$.
-We have that $$\sigma_r=(\cosh \theta , \sinh \theta , 2r) \rightarrow \sigma_r(1,0)=(1,0,2) \\ \sigma_{\theta}=(r \sinh \theta , r \cosh \theta , 0) \rightarrow \sigma_{\theta}(1,0)=(0,1,0)$$ -$$\sigma_r (1,0) \times \sigma_{\theta} (1,0)=(-2,0,1)$$ -The equation of the tangent plane is given by the formula $$(-2, 0, 1) \cdot (x-1, y-0, z-1)=0 \\ \Rightarrow -2x+z+1=0$$ -In the solution of the book, the answer is $-2x - 2y + z =0$. -Where is the mistake in my calculations? - -REPLY [3 votes]: Your answer is correct. I guess it's a typo in the book.<|endoftext|> -TITLE: Reference on the history of ergodic theory -QUESTION [7 upvotes]: I'm looking for some good books on the history of ergodic theory. I'm a Ph.D student in the field, and I am taking Steven Weinberg's advice to learn about the history of my field: -http://math.stanford.edu/~vakil/files/nature.pdf -My background is in physics as an undergraduate, so any references that are heavy on thermodynamics and statistical physics are welcome. A good reference on ergodic theory must include these subjects, in my opinion. -Thanks for your suggestions! - -REPLY [3 votes]: I believe it is hard to come by a historical account of ergodic theory on its own; instead, any such account will be on dynamics (or "dynamics & ergodic theory", depending on your personal convictions). Keeping this in mind, there are, I believe, a few nice resources (as a disclaimer, I will also include some popular mathematics/science books): - -Chaos: Making a New Science by James Gleick -Philosophy and the Foundations of Dynamics by Lawrence Sklar -The Recursive Universe: Cosmic Complexity and the Limits of Scientific Knowledge by William Poundstone -The documentary Chaos by Jos Leys, Étienne Ghys and Aurélien Alvarez -Historical accounts of Anatole Katok -First section of John Milnor's lecture notes on dynamics, and the references therein.<|endoftext|> -TITLE: Can a function be analytic and satisfy $f\left(\frac 1 n\right) =\frac{1}{\log{n}}.$? -QUESTION [8 upvotes]: Let $\Omega = \{z\in\mathbb{C}:\,|z|<2\}$. Prove or disprove that there exists an analytic function $f:\Omega\rightarrow\mathbb{C}$ such that for $n\geq2$: -$$f\left(\frac 1 n\right) =\frac{1}{\log{n}}.$$ -Usually this question would fall under the uniqueness theorem for analytic functions; an example where the uniqueness theorem works is in disproving the existence of an analytic $f$ satisfying $$f\left(\frac 1 n\right) =\frac{(-1)^n}{n^2}.$$ -But in the case of the $\log$, I think the issue of existence is more "substantial". -Is it true that if such $f$ were to exist, then $f(z) = -\frac{1}{\log z}$? And if so, what is the main reason this function cannot be analytic in $\Omega$? -Thanks in advance! - -REPLY [13 votes]: Assume that such a function exists. Then (by continuity) $f(0) = 0$. Thus we can write -$$ -f(z) = zg(z) -$$ -for some holomorphic $g$. Plug in $z=1/n$ to get -$$ -g(1/n) = nf(1/n) = \frac{n}{\log n} -$$ -but if we let $n\to\infty$, it follows that $g$ is unbounded near $0$, which is a contradiction. - -REPLY [2 votes]: If we require $f(z)$ to be analytic then $$f(z)=-\frac{1}{\log z}\tag1$$ -is the only solution, because having a function defined at an accumulation point of its domain, we can extend it to its whole domain. The accumulation point in this case is $z=0$.
The function $f$ cannot be analytic on $\Omega$, as $(1)$ isn't.<|endoftext|> -TITLE: Differentiable function satisfying $f(x+a) = bf(x)$ for all $x$ -QUESTION [6 upvotes]: This is an exercise from Apostol's Calculus (Exercise 10 on page 269). - -What can you conclude about a function which has a derivative everywhere and satisfies an equation of the form - $$ f(x+a) = bf(x) $$ - for all $x$, where $a$ and $b$ are positive constants? - -The answer in the back of the book suggests that we should conclude $f(x) = b^{x/a} g(x)$ where $g(x)$ is a periodic function with period $a$. I'm not sure how to arrive at this. -One initial step is to say, by induction, -$$ f(x+a) = bf(x) \implies f(x+na) = b^n f(x)$$ -for all $x$. I'm not sure what to do with this though. I'm also not clear how to use the differentiability of $f$. (If I write down the limit definition of the derivative then I end up with a term $f(x+h)$, but I cannot use the functional equation on that since the functional equation is for a fixed constant $a$.) - -REPLY [5 votes]: One trivial solution that doesn't use the differentiability of $ f(x) $: -From $ f(x+na)=b^nf(x) $, letting $ y=x+na $, and requiring that $ x \in [0,a) $ and $ n = \left\lfloor \frac{y}{a} \right\rfloor $, we get the following equivalent definition of $ f $: -$$ f(y)=b^{\frac{y-\left(y-\left\lfloor \frac{y}{a} \right\rfloor a\right)}{a}}f\left (y- \left\lfloor \frac{y}{a} \right\rfloor a \right) $$ -Letting $ g(y)=b^{-\frac{\left( y-\left\lfloor \frac{y}{a} \right\rfloor a\right)}{a}}f\left(y-\left\lfloor \frac{y}{a} \right\rfloor a \right) $ and noting that $ g $ is periodic with period $ a $: -$$ f(y)=b^{\frac{y}{a}}g(y) $$<|endoftext|> -TITLE: If $B(x+y)-B(x)-B(y)\in\mathbb Z$ can we add an integer function to $B$ to make it additive? -QUESTION [11 upvotes]: Given a function $B:\mathbb R\to\mathbb R$ satisfying $B(x+y)-B(x)-B(y)\in\mathbb Z$ for all real numbers $x$ and $y$, is there a function $Z:\mathbb R\to\mathbb Z$ such that $B+Z$ is an additive function? In other words, is there a function $A:\mathbb R\to\mathbb R$ satisfying $A(x+y)=A(x)+A(y)$ for all real numbers $x$ and $y$, such that $A(x)-B(x)\in\mathbb Z$ for every real number $x$? - -My motivation: -I was thinking about real solutions of the d'Alembert functional equation, $f(x+y)+f(x-y)=2f(x)f(y)$, without assuming continuity. There was a case where I could show that for some real function $B$, $f(x)=\cos\big(2\pi B(x)\big)$ and $\cos\Big(2\pi\big(B(x+y)-B(x)-B(y)\big)\Big)=1$, so for every $x$ and $y$ we have $B(x+y)-B(x)-B(y)\in\mathbb Z$. I was wondering if there's an additive function $A$ such that $A(x)-B(x)\in\mathbb Z$ for every $x$. In that case, I could show that the solution is of the form $f(x)=\cos(2\pi A(x))$ where $A$ is additive. It's easy to verify that every function of this form is indeed a solution to the functional equation. (In other cases I could show that $f$ is the constant zero function or is of the form $f(x)=\cosh\big(A(x)\big)$ for some additive function $A$. But they're not related to my question here.) -My attempt: -I defined $n(x,y)=B(x+y)-B(x)-B(y)$. So $n(x,y)=n(y,x)$ and $n(x,0)=-B(0)$. Without loss of generality, we can assume that $B(0)=0$ (otherwise we can subtract $B(0)$ from $B(x)$ and continue). Because $x+(y+z)=(x+y)+z$, I could conclude that $n(x,y+z)+n(y,z)=n(x+y,z)+n(x,y)$. But this doesn't seem to help much.
- -REPLY [2 votes]: There is a similar problem here: USA January TST 2015 -Firstly, we show that there is a function $B:\mathbb Q\to\mathbb R$ such that for every rational $x$ and $y$, $B(x+y)-B(x)-B(y)\in\mathbb Z$, but there's no additive function $A$ such that $B(x)-A(x)\in\mathbb Z$ for every rational $x$. -It's well known that if $A$ is an additive function then there is a constant $c$ such that $A(x)=cx$ for every rational $x$. See here for a proof. -Now construct $B\left(\frac pq\right)=\frac pqK(q)$ where $\gcd(p,q)=1$, $q>0$ and $K(q)=\sum\limits_{i=0}^{q-1} i!$. -I claim that we have $B(x+y)-B(x)-B(y) \in \mathbb{Z}$ for every rational $x$ and $y$. -Let $x=\frac{a}{b}$, $y=\frac{c}{d}$ and $\frac{p}{q}=\frac{a}{b}+\frac{c}{d}$ where $\gcd(a,b)=\gcd(c,d)=\gcd(p,q)=1$. Then, in mod $1$, we have: -$$B\left(\frac pq\right)-B\left(\frac ab\right)-B\left(\frac cd\right)=\frac pqK(q)-\frac abK(b)-\frac cdK(d)\equiv\left(\frac pq-\frac ab-\frac cd\right)K(bd)=0$$ -Notice that for $m\ge n$, we have $m!\equiv0\pmod n$, so $K(q+m)\equiv K(q)\pmod q$ for all $m\ge0$. -Now I claim that there is no $c$ such that $B(x)-cx\in\mathbb{Z}$ for all $x\in\mathbb{Q}$. -Suppose that there is such a $c$. If $q$ is a positive integer, then $B(\frac 1q)-\frac cq=\frac1q(K(q)-c)\in\mathbb Z$. So $c$ is an integer such that for every positive integer $q$, we have $K(q)\equiv c\pmod q$. For instance, we have $K(q!)\equiv c\pmod{q!}$. But by the definition of $K$, we know that $K(q!)\equiv K(q)\pmod{q!}$, which leads to $K(q)\equiv c\pmod{q!}$. Hence there is a sequence of integers $(k_q)_{q\in\mathbb Z^+}$ such that $c=k_q\cdot q!+K(q)$. Now for every positive integer $q$: -$$0=c-c=k_{q+1}\cdot (q+1)!+K(q+1)-k_q\cdot q!-K(q)=\left((q+1)k_{q+1}-k_q+1\right)q!$$ -$$\therefore\quad k_{q+1}=\dfrac{k_q-1}{q+1}$$ -Now we show that for every natural number $n$, we must have $|k_q|\ge q^n$. For the base case, we note that if $k_q=0$, then $k_{q+1}$ can't be an integer, so $|k_q|\ge1=q^0$. For the induction step, we have: -$$\dfrac{|k_q|+1}{q+1}\ge\dfrac{|k_q-1|}{q+1}=|k_{q+1}|\ge (q+1)^n$$ -$$\therefore\quad|k_q|\ge(q+1)^{n+1}-1\ge q^{n+1}$$ -But this leads to an obvious contradiction. So $c$ doesn't exist. -Finally, we show that there is a function $B:\mathbb R\to\mathbb R$ such that for every real $x$ and $y$, $B(x+y)-B(x)-B(y)\in\mathbb Z$, but there's no additive function $A$ such that $B(x)-A(x)\in\mathbb Z$ for every real $x$. So the answer to the original question is negative. -Let $({\bf e}_i)_{i\in I}$ be a Hamel basis. So for every real number $x$, there is a finite set of indices $I_x\subseteq I$ and rational numbers $\left(\frac{p_i}{q_i}\right)_{i\in I_x}$ such that $\gcd(p_i,q_i)=1$, $q_i>0$ and $x=\sum\limits_{i\in I_x}\dfrac{p_i}{q_i}{\bf e}_i$. Define: -$$B(x)=\sum_{i\in I_x}\frac{p_i}{q_i}K(q_i)$$ -Note that $I_x$ and $\left(\frac{p_i}{q_i}\right)_{i\in I_x}$ are uniquely determined and thus $B$ is well defined. We can check that $B$ satisfies the desired conditions, similar to what we did before.<|endoftext|> -TITLE: Mysterious identity -QUESTION [25 upvotes]: Playing around with Maple I found this identity -$$\sum_{k=0}^{n-1}\frac{2k+1}{1-z^{2k+1}}=n\sum_{k=0}^{n-1}\frac{1}{1+z^{k}}$$ -where $n$ is a positive integer, $z=\exp(\pi i/n)$. -I was able to verify it only numerically. Does anyone know how to prove it? - -REPLY [3 votes]: Your identity is true. I give here the method for you to prove it for general $n$ (a quick numerical spot check first, then the algebra).
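-A minimal numerical spot check of the identity in Python:
-import cmath
-for n in range(1, 9):
-    z = cmath.exp(1j*cmath.pi/n)
-    lhs = sum((2*k + 1)/(1 - z**(2*k + 1)) for k in range(n))
-    rhs = n*sum(1/(1 + z**k) for k in range(n))
-    print(n, abs(lhs - rhs))   #all differences are of order 1e-13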
-One has as the main preliminary remark $$\color{red}{z^n=-1\iff z^{n+k}=-z^k\iff \frac{1}{1-z^k}=\frac{z^{n-k}}{1+z^{n-k}}}$$ We set $$A=\sum_{k=0}^{n-1}\frac{2k+1}{1-z^{2k+1}}$$ $$B=n\sum_{k=0}^{n-1}\frac{1}{1+z^k}$$ -Example of algebraic verification: $n=6$, even. -The odd exponents in $A$ are $\begin{cases}1\\3\\5\\7=6+1\\9=6+3\\11=6+5\end{cases}$ -therefore -$$A=\left({1\over 1-z}+{3\over 1-z^3}+{5\over 1-z^5}\right)+\left({6+1\over 1+z}+{6+3\over 1+z^3}+{6+5\over 1+z^5}\right)$$ -$$A=B\Rightarrow \left({1\over 1-z}+{3\over 1-z^3}+{5\over 1-z^5}\right)+\left({1\over 1+z}+{3\over 1+z^3}+{5\over 1+z^5}\right)= 6\left({1\over 1+1}+{1\over 1+z^2}+{1\over 1+z^4}\right)$$ Hence $${2\over 1-z^2}+{6\over 1+1}+{10\over 1-z^{10}}= {6\over 1+1}+{6\over 1+z^2}+{6\over 1+z^4}$$ $${2\over 1-z^2}+{10\over 1+z^4}= {6\over 1+z^2}+{6\over 1+z^4}\iff {1\over 1+z^4}={1-2z^2\over 1-z^4}\iff z^4-z^2+1=0 $$ Since $z^6+1=(z^2+1)(z^4-z^2+1)=0$ the algebraic verification is complete. -Example of algebraic verification: $n=7$, odd. -$$A={1\over 1-z}+{3\over 1-z^3}+{5\over 1-z^5}+{7\over 1+1}+{7+2\over 1+z^2}+{7+4\over 1+z^4}+{7+6\over 1+z^6}=B$$ -$$\left({1-6z\over 1-z}-{7\over 1+z}+{7z^2\over 1-z^2}+{2+5z^2\over 1+z^2}+{4\over 1+z^4}\right)={7\over 1+z^3}-{3\over 1-z^3}$$ -$$\left({2z(2z^7-z^6+z^5-z^4+6z^3-z^2+z-1)\over z^8-1}\right)={7\over 1+z^3}-{3\over 1-z^3}$$ but the parenthesis equals -$${2z(5z^3-2)\over -z-1}$$ because $$\left({2z(2z^7-(\color{red}{z^6-z^5+z^4-z^3+z^2-z+1})+5z^3)\over z^8-1}\right)={2z(5z^3-2)\over -z-1}$$ -where the red polynomial is null, being a factor of $z^7+1=0$. Therefore $${2z(5z^3-2)\over -z-1}={7\over 1+z^3}-{3\over 1-z^3}= {2(5z^3-2)\over z^6-1}$$ i.e. $${z\over -z-1}={1\over z^6-1}\iff z^7-z=-z-1$$ which ends the proof for $n=7$.<|endoftext|> -TITLE: Simpler proof of an integral representation of Bessel function of the first kind $J_n(x)$ -QUESTION [7 upvotes]: While doing research in electrical engineering, I derived the following integral representation of the Bessel function of the first kind: -$$J_n(x)=\frac{e^{in\pi/2}}{2\pi}\int_0^{2\pi}e^{i(n\tau-x\cos\tau)}\mathrm{d}\tau\tag{1}$$ -My derivation, which I include below, is long and ugly. I am wondering if there is a more elegant proof of (1) using basic facts about other integral representations of the Bessel function, trig identities, and, perhaps, clever integration techniques. The integral representation for the Bessel function (found on the Wikipedia page) that looks similar to mine is: -$$J_n(x)=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{i(n\tau-x\sin\tau)}\mathrm{d}\tau. \tag{2}$$ -Gradshteyn and Ryzhik (G&R) give the same expression in a slightly different form as formula 8.411.1 in the 7th edition. However, I failed to convert (2) into (1) using simple substitution. Does anyone have any other ideas? - -My LONG proof of (1) -First, I use Euler's formula to break the integrand in (1) into in-phase and quadrature components, and apply the angle sum identities: -$$\begin{align}e^{i(n\tau-x\cos\tau)}&=\cos(n\tau-x\cos\tau)+i\sin(n\tau-x\cos\tau)\\ -&=\cos(n\tau)\cos(x\cos\tau)\tag{a}\\ -&\phantom{=}+\sin(n\tau)\sin(x\cos\tau)\tag{b}\\ -&\phantom{=}+i\sin(n\tau)\cos(x\cos\tau)\tag{c}\\ -&\phantom{=}-i\cos(n\tau)\sin(x\cos\tau)\tag{d} -\end{align}$$ -Now let's integrate (a)-(d) in turn.
First, for (a), note that: -$$\begin{align}\int_{\pi}^{2\pi}\cos(n\tau)\cos(x\cos\tau)\mathrm{d}\tau&=\int_0^{\pi}\cos(n(\tau+\pi))\cos(x\cos(\tau+\pi))\mathrm{d}\tau\\ -&=(-1)^n\int_0^{\pi}\cos(n\tau)\cos(x\cos\tau)\mathrm{d}\tau\tag{a1}, -\end{align}$$ -where (a1) is due to the negation of the cosine (and sine) from the shift by odd multiples of $\pi$, or, formally, $\cos(\theta+n\pi)=(-1)^n\cos\theta$. By formula 3.715.18 in G&R 7th ed: -$$\int_0^{\pi}\cos(n\tau)\cos(x\cos\tau)\mathrm{d}\tau=\pi\cos\left(\frac{n\pi}{2}\right)J_n(x).$$ -Thus, -$$\begin{align}\int_0^{2\pi}\cos(n\tau)\cos(x\cos\tau)\mathrm{d}\tau&=(1+(-1)^n)\pi\cos\left(\frac{n\pi}{2}\right)J_n(x)\\ -&=2\pi\cos\left(\frac{n\pi}{2}\right)J_n(x),\tag{a2} -\end{align}$$ -where (a2) is because when $n$ is odd, $\cos\left(\frac{n\pi}{2}\right)=0$, making the double-multiplication by zero in this case unnecessary. -Now let's integrate (b). First consider odd $n$: -$$\int_0^{2\pi}\sin(n\tau)\sin(x\cos\tau)\mathrm{d}\tau=2\int_0^{\pi}\sin(n\tau)\sin(x\cos\tau)\mathrm{d}\tau,$$ -because $\sin(n(\tau+\pi))\sin(x\cos(\tau+\pi))=\sin(n\tau)\sin(x\cos\tau)$ due to the negation of the sine (and cosine) from the shift by odd multiples of $\pi$, or, formally, $\sin(\theta+n\pi)=(-1)^n\sin\theta$ and the fact that $\sin(-\theta)=-\sin\theta$. Furthermore, -$$\begin{align}\int_0^{\pi}\sin(n\tau)\sin(x\cos\tau)\mathrm{d}\tau&=\int_0^{\pi/2}\sin(n\tau)\sin(x\cos\tau)\mathrm{d}\tau+\int_{\pi/2}^{\pi}\sin(n\tau)\sin(x\cos\tau)\mathrm{d}\tau\\ -&=\int_0^{\pi/2}\sin(n\tau)\sin(x\cos\tau)\mathrm{d}\tau\\ -&\phantom{=}+\int_0^{\pi/2}\sin(n(\pi-\tau))\sin(x\cos(\pi-\tau))\mathrm{d}\tau\tag{b1}\\ -&=\int_0^{\pi/2}\sin(n\tau)\sin(x\cos\tau)\mathrm{d}\tau-\int_0^{\pi/2}\sin(n\tau)\sin(x\cos\tau)\mathrm{d}\tau=0\tag{b2}, -\end{align}$$ -where (b1) is due to the substitution of $\tau=\pi-\tau'$ (the prime is dropped after substitution is made) and (b2) is since $\sin(n\pi-\theta)=\sin(\theta)$ for odd $n$ and $\cos(\pi-\theta)=-\cos(\theta)$. -Now let's integrate (b) with even $n$: -$$\begin{align}\int_0^{2\pi}\sin(n\tau)\sin(x\cos\tau)\mathrm{d}\tau&=\int_0^{\pi}\sin(n\tau)\sin(x\cos\tau)\mathrm{d}\tau+\int_{\pi}^{2\pi}\sin(n\tau)\sin(x\cos\tau)\mathrm{d}\tau\\ -&=\int_0^{\pi}\sin(n\tau)\sin(x\cos\tau)\mathrm{d}\tau\\ -&\phantom{=}+\int_0^{\pi}\sin(n(2\pi-\tau))\sin(x\cos(2\pi-\tau))\mathrm{d}\tau\tag{b3}\\ -&=\int_0^{\pi}\sin(n\tau)\sin(x\cos\tau)\mathrm{d}\tau-\int_0^{\pi}\sin(n\tau)\sin(x\cos\tau)\mathrm{d}\tau=0\tag{b4}, -\end{align}$$ -where (b3) is due to the substitution of $\tau=2\pi-\tau'$ (again, the prime is dropped after substitution is made) and (b4) is since $\sin(2n\pi-\theta)=\sin(-\theta)=-\sin(\theta)$ for an integer $n$ and $\cos(2\pi-\theta)=\cos(\theta)$. -Now let's integrate (c). Consider odd $n$ (let's omit the imaginary unit): -$$\begin{align}\int_0^{2\pi}\sin(n\tau)\cos(x\cos\tau)\mathrm{d}\tau&=\int_0^{\pi}\sin(n\tau)\cos(x\cos\tau)\mathrm{d}\tau+\int_{\pi}^{2\pi}\sin(n\tau)\cos(x\cos\tau)\mathrm{d}\tau\\ -&=\int_0^{\pi}\sin(n\tau)\cos(x\cos\tau)\mathrm{d}\tau\\ -&\phantom{=}+\int_0^{\pi}\sin(n(2\pi-\tau))\cos(x\cos(2\pi-\tau))\mathrm{d}\tau\tag{c1}\\ -&=\int_0^{\pi}\sin(n\tau)\cos(x\cos\tau)\mathrm{d}\tau-\int_0^{\pi}\sin(n\tau)\cos(x\cos\tau)\mathrm{d}\tau=0,\tag{c2}\\ -\end{align}$$ -where (c1) is due to the substitution of $\tau=2\pi-\tau'$ (the prime is dropped after substitution is made) and (c2) is since $\sin(2n\pi-\theta)=\sin(-\theta)=-\sin(\theta)$ for integer $n$ and $\cos(2\pi-\theta)=\cos(\theta)$. 
-Now integrate (c) with even $n$ (again, let's omit the imaginary unit): -$$\int_0^{2\pi}\sin(n\tau)\cos(x\cos\tau)\mathrm{d}\tau=2\int_0^{\pi}\sin(n\tau)\cos(x\cos\tau)\mathrm{d}\tau,$$ -because $\sin(n(\tau+\pi))\cos(x\cos(\tau+\pi))=\sin(n\tau)\cos(x\cos\tau)$ due to $\sin(\theta+n\pi)=\sin\theta$ for even $n$, and $\cos(-\theta)=\cos\theta$. Furthermore, -$$\begin{align}\int_0^{\pi}\sin(n\tau)\cos(x\cos\tau)\mathrm{d}\tau&=\int_0^{\pi/2}\sin(n\tau)\cos(x\cos\tau)\mathrm{d}\tau+\int_{\pi/2}^{\pi}\sin(n\tau)\cos(x\cos\tau)\mathrm{d}\tau\\ -&=\int_0^{\pi/2}\sin(n\tau)\cos(x\cos\tau)\mathrm{d}\tau\\ -&\phantom{=}+\int_0^{\pi/2}\sin(n(\pi-\tau))\cos(x\cos(\pi-\tau))\mathrm{d}\tau\tag{c3}\\ -&=\int_0^{\pi/2}\sin(n\tau)\cos(x\cos\tau)\mathrm{d}\tau-\int_0^{\pi/2}\sin(n\tau)\cos(x\cos\tau)\mathrm{d}\tau=0\tag{c4}, -\end{align}$$ -where (c3) is due to the substitution of $\tau=\pi-\tau'$ (the prime is dropped after substitution is made) and (c4) is since $\sin(n\pi-\theta)=\sin(-\theta)=-\sin(\theta)$ for even $n$ and $\cos(-\theta)=\cos(\theta)$. -Finally, we integrate (d), omitting the negative imaginary unit for now. First, note that -$$\begin{align}\int_{\pi}^{2\pi}\cos(n\tau)\sin(x\cos\tau)\mathrm{d}\tau&=\int_0^{\pi}\cos(n(\tau+\pi))\sin(x\cos(\tau+\pi))\mathrm{d}\tau\\ -&=(-1)^{n+1}\int_0^{\pi}\cos(n\tau)\sin(x\cos\tau)\mathrm{d}\tau\tag{d1}, -\end{align}$$ -where (d1) is due to the negation of the cosine (and sine) from the shift by odd multiples of $\pi$, or, formally, $\cos(\theta+n\pi)=(-1)^n\cos\theta$ and by the fact that $\sin(-\theta)=-\sin\theta$. By formula 3.715.13 in G&R 7th ed: -$$\int_0^{\pi}\cos(n\tau)\sin(x\cos\tau)\mathrm{d}\tau=\pi\sin\left(\frac{n\pi}{2}\right)J_n(x).$$ -Thus, -$$\begin{align}\int_0^{2\pi}\cos(n\tau)\sin(x\cos\tau)\mathrm{d}\tau&=(1+(-1)^{n+1})\pi\sin\left(\frac{n\pi}{2}\right)J_n(x)\\ -&=2\pi\sin\left(\frac{n\pi}{2}\right)J_n(x),\tag{d2} -\end{align}$$ -where (d2) is because when $n$ is even, $\sin\left(\frac{n\pi}{2}\right)=0$, making the double-multiplication by zero in this case unnecessary. -Combining all the terms, using Euler's formula, and solving for $J_n(x)$, we arrive at (1). Surely there is a better way... - -REPLY [4 votes]: In Equation $(1)$ of the OP, enforce the substitution $\tau\to \tau -\pi/2$. Then, we obtain -$$\begin{align} -\frac{e^{in\pi/2}}{2\pi}\int_0^{2\pi}e^{i(n\tau-x\cos \tau)}\,d\tau&=\frac1{2\pi}\int_{-\pi/2}^{3\pi/2}e^{i(n\tau-x\sin \tau)}\,d\tau\\\\ -&=\frac1{2\pi}\int_{-\pi/2}^\pi e^{i(n\tau-x\sin \tau)}\,d\tau+\frac1{2\pi}\int_\pi^{3\pi/2}e^{i(n\tau-x\sin \tau)}\,d\tau \tag{A} -\end{align}$$ -Now, enforce the substitution $\tau\to \tau +2\pi$ in the second integral on the right-hand side of $(A)$. Then, -$$\begin{align} -\frac{e^{in\pi/2}}{2\pi}\int_0^{2\pi}e^{i(n\tau-x\cos \tau)}\,d\tau&=\frac1{2\pi}\int_{-\pi/2}^\pi e^{i(n\tau-x\sin \tau)}\,d\tau+\frac1{2\pi}\int_{-\pi}^{-\pi/2}e^{i(n\tau-x\sin \tau)}\,d\tau \\\\ -&=\frac1{2\pi}\int_{-\pi}^{\pi}e^{i(n\tau-x\sin \tau)}\,d\tau \tag{B} -\end{align}$$ -Comparing $(B)$ to Equation $(2)$ in the OP, we find that -$$J_n(x)=\frac{e^{in\pi/2}}{2\pi}\int_0^{2\pi}e^{i(n\tau-x\cos \tau)}\,d\tau$$<|endoftext|> -TITLE: General audience books that teach deep mathematics -QUESTION [13 upvotes]: When I was younger some of my favorite books were Raymond Smullyan's puzzle books.
He strove to write books that covered deeper topics than standard children's puzzles; To Mock a Mockingbird gets into serious thought about combinatory logic, and Forever Undecided is an introduction to Gödel's incompleteness theorem, phrasing everything in terms of the classic children's puzzles about people who always lie and people who always tell the truth. (For a much shorter but higher-brow sample of what the writing looks like, he has a brief note about roughly the same subject.) I recently looked at them again, and they still hold up (or maybe are even made better) looking back at them with an actual math education.
-What I would like are books that are accessible to a general audience - especially kids! - that nonetheless are able to introduce one to nontrivial, deep mathematics, like Smullyan's books above. I would prefer books that actually teach the mathematics (however disguised) as opposed to something like Simon Singh's expository book on Fermat's Last Theorem (which tells you the story of FLT, but nothing about the mathematics behind it.)
-
-REPLY [4 votes]: I recommended Conway et al.'s Symmetries of Things in a recent answer, and I'll do it again!
-
-Essentially, the first third of the book is devoted to two-dimensional symmetry, and is generally very layperson-friendly, with lots of good pictures and things that utilize intuition. Generally the first 100 pages should be doable for anybody with a genuine interest. The next 100 pages are a bit more difficult, and then abstract groups are encountered, at which point all bets are off (but to be fair, it's still a pretty gentle treatment!). So, maybe it's not a very good fit for young children, but I'd say ages 10-12+ should be able to get plenty out of it. It doesn't hide the fact that it's a book about math, whatever that means.
-
-And while I realize this isn't a book, Satyan Devadoss has a series of video lectures about quality mathematics for a general audience, called The Shape of Nature. It's not cheap, but I managed to borrow a copy through an interlibrary loan and it was well worth it.<|endoftext|>
-TITLE: Equivalent definitions of locally convex topological vector space
-QUESTION [5 upvotes]: This Wikipedia article gives two equivalent definitions of locally convex space (l.c.s.). I don't clearly see the equivalence and I'd like to make it crystal clear.
-
-Definition 1 Let $(V,\tau)$ be a TVS. It is called a l.c.s. if the origin has a local base of convex balanced absorbing sets.
-Definition 2 Let $(V,\tau)$ be a TVS. It is called a l.c.s. if $\tau$ is generated by a family of seminorms on $V$.
-
-Suppose $(V,\tau)$ satisfies Definition 1. Let $\mathcal{B}$ be a local base at $0$ such that for every $C\in\mathcal{B}$, $C$ is convex, balanced and absorbing. A known result shows that the Minkowski functional $\mu_C$ is a seminorm on $V$ for $C\in\mathcal{B}$.
-
-Question 1: Why can we conclude that $\tau$ is generated by $\{\mu_C\}_{C\in\mathcal{B}}$ so that Definition 1 implies Definition 2?
-
-Suppose $(V,\tau)$ satisfies Definition 2 and $\tau$ is generated by a family of seminorms $\{p_\alpha\}_{\alpha\in A}$. For every finite subset $F\subset A$ and $r>0$, define
-$$
-S_{F,r}=\bigcap_{\alpha\in F}\{x\in V:p_\alpha(x)<r\}.
-$$
-For a balanced set $A$, note first that $s \geqslant t > 0$ means $\bigl\lvert \frac{t}{s}\bigr\rvert \leqslant 1$, and hence $s^{-1}x = \frac{t}{s} t^{-1} x \in A$ follows from $t^{-1}x \in A$, which is the same as $x \in tA$.
-Coming to question 1, we note that the $C \in \mathcal{B}$ are all neighbourhoods of $0$, so the seminorms $\mu_C$ are continuous (with respect to $\tau$). Hence the topology induced by $\{ \mu_C : C \in \mathcal{B}\}$ is coarser than $\tau$, i.e. $\tau' \subset \tau$. But, since $\mathcal{B}$ is a neighbourhood basis of $0$ (for $\tau$), given any $\tau$-neighbourhood $U$ of $0$, there is a $C\in \mathcal{B}$ with $C \subset U$. Since $C$ is by definition a $\tau'$-neighbourhood of $0$, it follows that $U$ is a $\tau'$-neighbourhood of $0$. Since vector space topologies are determined by the neighbourhood filter of $0$ it follows that $\tau'$ is finer than $\tau$. Being both coarser and finer than $\tau$ means that $\tau' = \tau$, so the topology is indeed induced by the family $\{\mu_C : C \in \mathcal{B}\}$ of seminorms.
-Regarding question 2, note that each $S_{F,r}$ is a neighbourhood of $0$. And every neighbourhood of $0$ is absorbing by the continuity of scalar multiplication. For every $x\in V$, the map $s_x \colon t \mapsto tx$ is continuous, and we have $s_x(0) = 0$, so by continuity there is a $\delta > 0$ such that $\lvert t\rvert < \delta \implies tx \in S_{F,r}$. But that means $x \in u\cdot S_{F,r}$ for all $u$ with $\lvert u\rvert > \delta^{-1}$.<|endoftext|>
-TITLE: On what sets can we define a group operation?
-QUESTION [9 upvotes]: A question came to mind.
-Prove that for every nonempty set $X$ an operation can be suggested such that $X$ would be a group with that operation. For example, it is obvious for finite and countable sets: $(\mathbb{Z}_n,+), (\mathbb{Q}, +)$. Also, it can be done for all sets of the form $X=2^L$, as $(X, \Delta)$, where $\Delta$ is the symmetric difference on subsets of $L$.
-So it seems that the question is reduced to (1) sets which are high in the hierarchy of cardinals (not of the form $2^L$) and (2) sets which do not exist assuming the continuum hypothesis (between $\mathbb{R}$ and $2^\mathbb{R}$ for instance).
-Axiom of choice is given.
-Or maybe the intuition is wrong and for some sets it cannot be done; then a proof of existence of such a set, or a single example, would be nice. Thank you in advance.
-
-REPLY [9 votes]: The free group (or the free Abelian group, etc.) generated by an infinite set $X$ will have the same cardinality as $X.$ Therefore, every nonempty set is the underlying set of a group.<|endoftext|>
-TITLE: Integral $\int_0^1\arctan(x)\arctan\left(x\sqrt3\right)\ln(x)dx$
-QUESTION [8 upvotes]: I need to evaluate this integral:
-$$\int_0^1\arctan(x)\arctan\left(x\sqrt3\right)\ln(x)dx$$
-Apparently, Maple and Mathematica cannot do anything with it, but I have seen similar integrals evaluated in terms of polylogarithms (unfortunately, I have not yet mastered them enough to do it myself). Could anybody please help me with it?
- -REPLY [6 votes]: Following the method outlined in another answer, and simplifying the resulting expression, we get the following closed form: -$$\frac{5 G}{6 \sqrt{3}}-\frac{\Im\operatorname{Li}_3(1+i)}{\sqrt{3}}+\Im\operatorname{Li}_3\left(i - \sqrt{3}\right)-\frac{\Im\operatorname{Li}_3\left(i \sqrt{3}\right)}{4 \sqrt{3}}-\frac{1}{2} - \Im\operatorname{Li}_3\left(1+i \sqrt{3}\right)\\ --3 \Im\operatorname{Li}_3\left(\left(-\frac{1}{2}+\frac{i}{2}\right) - \left(-1+\sqrt{3}\right)\right)+\sqrt{3} \Im\operatorname{Li}_3\left(\left(-\frac{1}{2}+\frac{i}{2}\right) - \left(-1+\sqrt{3}\right)\right)\\ -+\frac{1}{\sqrt{3}}\Im\operatorname{Li}_3\left(\tfrac{(1+i) \sqrt{3}}{1+\sqrt{3}}\right)-3 - \Im\operatorname{Li}_3\left(\left(\frac{1}{2}+\frac{i}{2}\right) \left(1+\sqrt{3}\right)\right)+\frac{2}{\sqrt{3}}\Im\operatorname{Li}_3\left(\left(\frac{1}{2}+\frac{i}{2}\right) \left(1+\sqrt{3}\right)\right)\\ --\frac{1}{288} \pi - \left[-2 \left\{\vphantom{\Large|}3 \left(4+\sqrt{3}\right) \cdot \operatorname{Li}_2\left(\tfrac{1}{3}\right)+6 \ln ^23-6 \left(7 \sqrt{3}-24\right) \cdot \ln - ^2\left(1+\sqrt{3}\right)\\+24 \ln 3 --4 \left(9+4 \sqrt{3}\right) \cdot \ln \left(1+\sqrt{3}\right)\right\}+3 \left(5 \sqrt{3}-36\right) - \cdot \ln ^22\\ --4 \left\{\vphantom{\Large|}9+7 \sqrt{3}-6 \ln 3+3 \left(7 \sqrt{3}-24\right) \cdot \ln - \left(1+\sqrt{3}\right)\right\}\cdot\ln 2\right]\\ --\frac{1}{216} \left(18+5 \sqrt{3}\right) \pi ^2+\left(\frac{5}{36}-\frac{31}{384 - \sqrt{3}}\right) \pi ^3+\frac{5 \psi ^{(1)}\left(\frac{1}{3}\right)}{48 \sqrt{3}},$$ -that might be possible to simplify further. -Mathematica expression is here.<|endoftext|> -TITLE: Can $\mathbb C P^4$ be smoothly embedded in $\mathbb R^{12}$? -QUESTION [13 upvotes]: In Bott and Tu's Differential Forms in Algebraic Topology, the authors show using Pontrjagin classes that $\mathbb CP^4$ cannot be smoothly embedded in $\mathbb R^k$ when $k\le 11$. The obvious question arises: can $\mathbb CP^4$ be embedded in $\mathbb R^{12}$? -The only result I know in this direction is the Whitney embedding theorem, which says that a smooth $m$-dimensional manifold can be embedded in $\mathbb R^{2m}$. That is clearly not good enough here, as $\mathbb CP^4$ has dimension $8$. - -REPLY [14 votes]: No, see theorem 1.3 here http://www.lehigh.edu/~dmd1/CPcrabb4.pdf, and the reference given there. Here $\alpha (n)$ denotes the number of $1$'s in the binary expression of $n$. -In this case $4$ has binary expansion $100$, so the first case of theorem 1.3 implies $\Bbb CP^4$ cannot even immerse into $\Bbb R^{14}$. The Whitney immersion theorem implies that this is optimal. In fact any compact orientable $n$-manifold embeds in $\Bbb R^{2n-1}$ (according to wikipedia this is due to Haefliger and Hirsch (for $n>4$) but I do not know the specific references off hand).<|endoftext|> -TITLE: Can you give me a concrete example of a sphere being defined without reference to an ambient space? -QUESTION [9 upvotes]: The strength of topology seems to lie in the ability to consider geometric objects without having to deal with something as obnoxious as an ambient space. However, despite reading many books and articles on the subject, I am unable to think of a simple concrete example regarding the subject; every embedding of a sphere that I devise always references R^3. - -REPLY [12 votes]: The 2-sphere is the unique (up to homeomorphism) compact, connected, simply connected 2-manifold. 
-
-REPLY [7 votes]: No topologist am I, but the two-sphere is the universal cover of the real projective plane, and the latter certainly has an abstract definition. If you don't like that, I guess you can take the stereographic projections from two points of the sphere to get an atlas of two charts, with the gluing functions cooked up from the two stereographies. Best of all might be to call in basic Complex Variable, and cover the sphere by two planes, each a copy of $\Bbb C$, and identify a nonzero point $z$ of the one to the point $1/z$ of the other.<|endoftext|>
-TITLE: Asymptotics for a series of products
-QUESTION [12 upvotes]: I am trying to solve the following problem:
-
-Define the following functions for $x>0$:
- $$f_n(x):=\prod_{k=0}^{n}\frac{1}{x+k}$$
-
-Show that the function
- $$f(x):=\sum_{n=0}^{+\infty}f_n(x)$$
- is well defined for $x>0$. Calculate its value at $1$.
-Study the function $f(x)$ and give asymptotic estimates for $x \to 0^+$ and $x\to +\infty$.
-Prove that the following equality holds:
- $$f(x)=e \sum_{n=0}^{+\infty}\frac{(-1)^n}{(x+n)n!}$$
-
-
-I am having a hard time proving the equality in the third point. What I have done for now:
-$\textbf{Part 1}$
-Using the ratio test,
-$$\lim_{n\to +\infty}\frac{\prod_{k=0}^{n+1}\frac{1}{x+k}}{\prod_{k=0}^{n}\frac{1}{x+k}}=\lim_{n\to +\infty}\frac{1}{x+n+1}=0$$
- the series converges for $x>0$. The value of the function at $1$ is
-$$f(1)=\sum_{n=0}^{+\infty}\prod_{k=0}^{n}\frac{1}{k+1}=\sum_{n=0}^{+\infty}\frac{1}{(n+1)!}=e-1$$
-$\textbf{Part 2}$
-First of all, $f$ is positive for every $x>0$. Its monotonicity is immediate: if $x_2>x_1$,
-$$\begin{align} \quad \qquad \frac{1}{x_2+k}<\frac{1}{x_1+k} \end{align} \\
- \implies f(x_2)=\sum_{n=0}^{+\infty}\prod_{k=0}^{n}\frac{1}{x_2+k}\leq\sum_{n=0}^{+\infty}\prod_{k=0}^{n}\frac{1}{x_1+k}=f(x_1)$$
-Each $f_n$ is decreasing on $(0,+\infty)$; hence on an interval $[M,+\infty)$ with $M>0$
-$$\|f_n \|_{\infty}=\prod_{k=0}^{n}\frac{1}{M+k}$$
-$$\implies \sum_{n=0}^{+\infty}\|f_n\|_{\infty} \text{ is convergent}$$
-so the series is uniformly convergent on every interval of the type $[M,+\infty)$.
-$f$ is asymptotic to $\frac 1x$ for $x\to +\infty$: in fact
-$$\lim_{x\to \infty}\frac{f(x)}{\frac{1}{x}}= \lim_{x\to \infty} x\left (\frac{1}{x}+ \sum_{n=1}^{+\infty}\prod_{k=0}^{n}\frac{1}{x+k}\right )= 1 $$
-because the series converges in a neighbourhood of $+\infty$.
-In a neighbourhood of $0$, the function acts similarly: we can notice that
-$$\lim_{x\to 0^+}\frac{f(x)}{\frac{1}{x}}=\lim_{x\to 0^+} x\sum_{n=0}^{+\infty}\prod_{k=0}^{n}\frac{1}{x+k}=\lim_{x \to 0^+} x\left (\frac{1}{x}+ \sum_{n=1}^{+\infty}\prod_{k=0}^{n}\frac{1}{x+k}\right )= \lim_{x\to 0^+} 1 + \sum_{n=1}^{+\infty}\prod_{k=1}^{n}\frac{1}{x+k}$$
-but $\sum_{n=1}^{+\infty}\prod_{k=1}^{n}\frac{1}{x+k}$ converges at $x=0$ and is continuous, so the limit is
-$$\lim_{x\to 0^+}\frac{f(x)}{\frac{1}{x}} = 1+ \sum_{n=1}^{+\infty}\prod_{k=1}^{n}\frac{1}{k}=e$$
-hence $f \sim \frac{e}{x}$
-Monotonicity and limits of this function imply that $f$ is a bijection of $(0,+\infty)$ onto itself.
-
-$\textbf{Part 3}$
-I have tried to manipulate the sums: writing a single fraction instead of the product does not seem to work: it leads to
-$$\sum_{n=0}^{+\infty}\prod_{k=0}^{n}\frac{1}{x+k}=\frac{1}{x}+\frac{1}{x(x+1)}+\dots=\lim_{n\to +\infty}\frac{\sum_{h=0}^{n}\prod_{k=h+1}^{n}(x+k)}{\prod_{k=0}^{n}(x+k)}$$
-It does not seem very familiar, even dividing it by $e=\sum_{n=0}^{+\infty}\frac{1}{n!}=f(1)+1$. Another idea that came to mind was to use the Cauchy product on the RHS: it leads to
-$$\sum_{i=0}^{+\infty}\frac{1}{i!}\sum_{j=0}^{+\infty}\frac{(-1)^j}{(x+j)j!}=\sum_{k=0}^{+\infty}\sum_{l=0}^{k}\frac{(-1)^{k-l}}{(x+k-l)l!(k-l)!}$$
-Things seem as complicated as before. Integrating or differentiating $f(x)$ term by term would require knowing a general form for the integral/derivative of $f_n(x)=\prod_{k=0}^{n}\frac{1}{x+k}$: it does not appear impossible to find it, but I think it would not be of great practical use; moreover, the series does not converge uniformly on the whole interval $(0,+\infty)$. The same goes for the series on the RHS. Working backwards, I thought of finding the integral of the series on the interval $[M,+\infty)$: I obtained
-$$\int \left (e\sum_{n=0}^{+\infty}\frac{(-1)^n}{(x+n)n!} \right ) dx =e\sum_{n=0}^{+\infty} \int \frac{(-1)^n}{(x+n)n!} dx=e\sum_{n=0}^{+\infty} \frac{(-1)^n}{n!}\log(x+n)+C $$
-I can't get far from here, and I am not even sure if what I have done is correct.
-
-Question: Are the first two parts correct? What could be a good way of proving the equality in the third part?
-
-REPLY [3 votes]: For the remaining third point, I would use the formula
-$$f_n(x)=\frac1{n!}\int_0^1 t^{x-1}(1-t)^{n}dt.$$
-Exchanging the order of summation and integration, we get
-\begin{align*}
-f(x)&=\int_0^1 t^{x-1} \left(\sum_{n=0}^{\infty}\frac{(1-t)^n}{n!}\right)dt
-=\int_0^1 t^{x-1}e^{1-t}dt=\\&=e\sum_{k=0}^{\infty}\int_0^1\frac{(-1)^k t^{x-1+k}}{k!}dt=e\sum_{k=0}^{\infty}\frac{(-1)^k}{(x+k)k!}.
-\end{align*}<|endoftext|>
-TITLE: Is a function that maps every compact set to a compact set continuous?
-QUESTION [5 upvotes]: A continuous function maps a compact set to a compact set. Is the converse of this true? That is, is a function that maps every compact set to a compact set necessarily continuous?
-
-REPLY [8 votes]: The characteristic function of the rationals maps every set, in particular every compact set, to a compact set. But it is discontinuous at every point.<|endoftext|>
-TITLE: What does the false infinite sum of a series mean?
-QUESTION [9 upvotes]: For any geometric series with $|r| < 1$, I know that
-$$\sum_{k=1}^{∞} ar^{k-1} =\frac{a}{1-r}$$
-But if $|r| > 1$ and you try to use the formula, you'll get a weird answer. For instance:
-$$4+8+16+32+64+128+... =\sum_{k=1}^{∞} (4)2^{k-1}= \frac{4}{1-2} = -4$$
-That answer obviously doesn't make sense; the series diverges. So what does -4 mean? Where did it come from, and how is it related to the series? It must be significant somehow.
-
-REPLY [3 votes]: Let's look at the formula for a finite geometric series first. If $a$ is the first term, $r$ is the common ratio, and $n$ is the number of terms, then your sum is equal to $a\frac{1-r^n}{1-r}$. Now, if $|r|<1$, we can see that as $n$ goes to infinity the $r^n$ term disappears, leaving the familiar formula of $a\frac{1}{1-r}$. So, the infinite geometric series sum formula makes the assumption that $|r|<1$.
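-As a quick numerical illustration of that assumption (a hedged sketch in Python; the ratio $r=0.5$ is an arbitrary choice for contrast, while $a=4$, $r=2$ are taken from the question):
-
-    # Partial sums of a geometric series versus the closed form a/(1-r).
-    def partial_sum(a, r, n):
-        return sum(a * r**k for k in range(n))
-
-    a = 4
-    for r in (0.5, 2.0):
-        print(r, [partial_sum(a, r, n) for n in (5, 10, 20)], "formula:", a / (1 - r))
-
-For $r=0.5$ the partial sums approach $a/(1-r)=8$, while for $r=2$ they blow up even though the formula still returns $-4$; that is exactly the mismatch explained below.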
-With the finite geometric series sum formula, we can rewrite as follows:
-$$a\frac{1-r^n}{1-r}$$
-$$=a\frac{r^n-1}{r-1}$$
-$$=\frac{a}{r-1}r^n - \frac{a}{r-1}$$
-Remember, an infinite sum is just the limit as $n\to\infty$ of a finite sum.
-Now, we know that the infinite sum assumes that $r^n$ goes to 0. If we assume that the first term goes to 0 even though it doesn't, we end up with the second term of that last line: $$\frac{-a}{r-1}$$
-If you look at the terms individually, you can see that as $n\to\infty$ the first term goes to infinity. In a sense, you can think of that number you get as "ignoring infinity's contribution" to the sum (though this idea is very informal). It's what the sum would have been if the first term disappeared like it does when $|r|\lt1$.<|endoftext|>
-TITLE: How to Prove the Chain Rule for Limits Using an $\varepsilon$-$\delta$ Argument?
-QUESTION [8 upvotes]: I came across the chain rule for limits the other day and it interested me quite a bit and surprisingly I couldn't find the proof on the internet anywhere. From what I understand the chain rule for limits states that if:
-$$ \lim_{x\to c} g(x)=M$$ and $$\lim_{x\to M} f(x)=L$$ then $$\lim_{x\to c} \ f(g(x))=L$$
-1. Under what conditions does this hold true?
-2. What is the epsilon-delta proof for the rule?
-
-REPLY [8 votes]: Claim: Suppose $f$ and $g$ are functions such that $\lim_{x \rightarrow a} f(x) = L_1$ and $\lim_{y \rightarrow L_1} g(y) = L_2$ then $\lim_{x \rightarrow a} g ( f(x)) = L_2$.
-Proof: Let $\epsilon > 0$ and choose $\delta >0$ such that if $0<|x-a|<\delta$ then $|f(x)-L_1|< \delta_2$ where $\delta_2 >0$ is small enough to force $|g(y)-L_2| < \epsilon$ for all $y \in \mathbb{R}$ such that $0 < |y-L_1| < \delta_2$.
-We can choose $\delta_2 > 0$ as above because we were given that $\lim_{y \rightarrow L_1} g(y) = L_2$. Further, we can also choose $\delta >0$ to force $|f(x)-L_1| < \delta_2$ because we were also given that $\lim_{x \rightarrow a} f(x) = L_1$.
-Suppose that $x \in \mathbb{R}$ such that $0 < |x-a| < \delta$ and observe that $|g(f(x))-L_2 | < \epsilon$. Therefore, by the definition of the limit, $\lim_{x \rightarrow a} g ( f(x)) = L_2$.
-$\Box$
-Thanks to Vim for the correcting comment. Let me attempt a modified proof; it may be useful to locate the error in the logic above.
-Modified Claim: Suppose $f$ and $g$ are continuous functions such that $\lim_{x \rightarrow a} f(x) =L_1$ and $\lim_{y \rightarrow L_1} g(y) = L_2 = g(L_1)$ then $\lim_{x \rightarrow a} g ( f(x)) = L_2$.
-Modified Proof: Since $\lim_{x \rightarrow a} f(x) = f(a)=L_1$ it follows that for each $\delta_2 >0$ there exists $\delta>0$ such that $0 < |x-a|< \delta$ implies $|f(x)-L_1| < \delta_2$.
-Let $\epsilon>0$ and pick $\delta_2>0$ such that $0< |y-L_1|< \delta_2$ implies $|g(y)-L_2| < \epsilon$. This choice of $\delta_2$ is possible since we are given that $\lim_{y \rightarrow L_1} g(y) = L_2$.
-Suppose $x \in \mathbb{R}$ is such that $0 < |x-a| < \delta$; then $|f(x)-L_1| < \delta_2$. Thus, for $y=f(x)$, we have $|y-L_1|< \delta_2$. We don't quite have what is needed to conclude just yet since we need $0<|y-L_1| < \delta_2$ in order to conclude $|g(y)-L_2|=|g(f(x))-L_2 | < \epsilon$. Consider two cases:
-
-$0=|y-L_1|$ in which case $y=L_1$ hence $|g(L_1)-L_2| = |g(L_1)-g(L_1)| = 0 < \epsilon$
-$0< |y-L_1|<\delta_2$ in which case we have $|g(y)-L_2|=|g(f(x))-L_2 | < \epsilon$
-
-Hence, in all cases possible, $0 < |x-a| < \delta$ implies $|g(f(x))-L_2|< \epsilon$.
Therefore, by the definition of the limit, $\lim_{x \rightarrow a} g ( f(x)) = L_2$.
-$\Box$
-Of course, we can also state this result as it is often applied: the continuity of the outer function allows us to pull the limit inside:
-$$ \lim_{x \rightarrow a} g(f(x)) = g \left( \lim_{x \rightarrow a} f(x) \right)$$
-where once again I should emphasize, the continuity of $g$ at $\lim_{x \rightarrow a} f(x)$ is assumed.<|endoftext|>
-TITLE: Find an element of largest order in the symmetric group $S_{10}$.
-QUESTION [9 upvotes]: Find an element of largest order in the symmetric group $S_{10}$.
-
-I know that given any element in $S_{10}$ there is a cycle decomposition and the order of it is the lcm of lengths of the cycles. So, we have to maximize $n_1.n_2....n_k$ so that $gcd(n_i, n_j)=1$ for $i\neq j$ and $n_1+n_2+...+n_k=10$. Moreover, $(n_1.n_2....n_k)|10!$. So, $n_1,...n_k$ must be a combination of some numbers of the set $\{1,2,...,10\}$. So how do I find that combination rigorously? Do I need to write all possibilities and decide? (I think there are many.) Intuitively, I think the solution must be $2\times 3\times 5=30$. But how?
-
-REPLY [7 votes]: As you know, the order is the least common multiple of the lengths of the cycles in its unique factorization into disjoint cycles.
-We can break into cases depending on the largest length in the cycle decomposition:
-If largest length is $10$ then max is $10$.
-If largest length is $9$, then max is $9$.
-If largest length is $8$ then max is $8$.
-If largest length is $7$ then max is $21$ (since there is at most one other length greater than $1$, which can be $2$ or $3$)
-If largest length is $6$ then max is $12$. If we have two other lengths larger than $1$ they must be $2$ and $2$. The other option is having just one other cycle of length greater than $1$, and the length can be $2,3$ or $4$. The max is reached with $4$ and is $12$.
-If largest length is $5$ then max is $30$. Notice there can be at most two more cycles of length greater than $1$. And the options in this case are $2,3$ and $2,2$ (out of these options max is $30$). The other option is having only one more cycle of length greater than $1$. And the options for this length are $2,3$ and $4$ (out of these options max is $20$).
-If largest length is smaller than $5$ then the least common multiple is at most $4\times3=12$.
-Hence max is $30$.<|endoftext|>
-TITLE: Relations in Group Presentation
-QUESTION [5 upvotes]: In an introduction to abstract algebra, I was recently introduced to the idea of presenting a group - minimally, a group is just a set of generators along with a set of relations amongst the generators. I believe that I have, at least, a rather basic understanding of this idea. On the other hand, I don't quite understand when one knows that they have a sufficient number of relations to uniquely characterize the group at hand. For example, a common example for generators and relations is the Dihedral group $ D_n = \{ \rho, \tau : \;\rho^n = 1, \tau^2 =1, \tau\rho\tau^{-1}=\rho^{-1} \}$. Clearly there are two generators here: a rotation $ \rho $ by an angle $ 2\pi/n$ and a reflection $ \tau $. What I don't understand is exactly how one knows that these three relations as listed are sufficient to characterize the group. When listing the relations, I see that each of these properties is true, but how does one know that they cannot stop with just $ \rho^n = 1$ and $ \tau^2= 1 $, the most basic properties of $D_n$?
A small bit of clarification here would be greatly appreciated as I feel as though I am missing something obvious.
-
-REPLY [5 votes]: This won't address your specific question here, but more of my general feeling about presentations.
-Key idea: Presentations make it easy to communicate the particular group you're working with, but are generally hard to come up with, or work with!
-
-For example, there are lots and lots of groups of order $96$ -- 231 of them, to be precise. But if you've found an interesting one (say, this guy), how in the world would you describe it to someone, especially if it doesn't belong to a fairly well-known family, or have a nice description as (semi)direct products?
-That's where a presentation comes into play. Supposing you have such a presentation, you just write it down, tell your friend, and that's that. Your job is done!
-This is ignoring the fact that it's really nontrivial to determine a set of relations that pins down your group. I've never even thought of doing this, but I'd wager it's not a pleasant task. Why would I be willing to wager that?
-Let's go back to your friend, when she receives the compact presentation you sent earlier. She has her work cut out for her! See this answer of mine for an idea of the kind of work required just to list elements, for a group of order only $8$. Long story short, it's completely nontrivial to actually unpack a presentation, in general. This is without even mentioning the word problem, which in a sense makes precise how difficult it is.
-So in summation, group presentations are nice as exactly that -- presentations. If you have any other description of the group to work with, chances are, it'll be easier than working with the presentation.<|endoftext|>
-TITLE: Irreducible polynomials over $\mathbb Q$ and $\mathbb Z $
-QUESTION [10 upvotes]: When I read "Contemporary Abstract Algebra" by Joseph Gallian, under the topic irreducible polynomials, his first example is the
-polynomial $$2x^2+4,$$ which is reducible over $\mathbb Z$ but irreducible over
-$ \mathbb Q$. I don't know how this is possible. Since it is of degree 2, we can look at the roots of the polynomial and where they lie; if they lay in $\mathbb Z$ they would also lie in $\mathbb Q$. So how can it be reducible over $\mathbb Z$ when the roots are complex numbers? Please explain.
-
-REPLY [14 votes]: It's not the roots, it's the "$2$"!
-A polynomial is irreducible over a ring if it cannot be written as a product of two non-invertible polynomials. In $\mathbb{Z}$, "$2$" is noninvertible, so $(x^2+2)2$ is an appropriately "nontrivial" factorization.
-Meanwhile, over in $\mathbb{Q}$, the polynomial "$2$" is invertible, since ${1\over 2}$ is rational (proof: exercise :P). So the factorization $(x^2+2)2$ is "trivial" in the context of $\mathbb{Q}$, since we can always extract a factor of $2$ from any polynomial.
-EDIT: Think of it this way: saying that a polynomial is irreducible over a ring means it has no "nontrivial" factorizations. Now, when we make the ring bigger (e.g. pass from $\mathbb{Z}$ to $\mathbb{Q}$) two things happen:
-
-More factorizations become possible.
-More factorizations become trivial.
-
-So even though your first instinct might be "polynomials will only go from "irreducible" to "reducible" as the ring gets bigger," actually the opposite can happen!
-In fact, here's a good exercise:
-
-Can you find a polynomial $p\in\mathbb{Z}[x]$ which is irreducible over $\mathbb{Z}$ but reducible over $\mathbb{Q}$?
-
-
-Note that the definition of reducibility over a field may sound different:
-
-For $F$ a field, a polynomial $p\in F[x]$ is irreducible if $p$ cannot be written as the product of two nonconstant polynomials.
-
-But this is actually equivalent to the definition I gave above, in case we're over a field: the noninvertible elements of $F[x]$ are precisely the nonconstant polynomials!<|endoftext|>
-TITLE: Construction of new ellipse
-QUESTION [6 upvotes]: Using a pencil, a thread was pulled taut around the ellipse. Then the pencil was rotated around the ellipse. How does one prove that the new geometric figure which the pencil drew is also an ellipse (with the same foci as the first ellipse)?
-
-REPLY [2 votes]: Partial answer
-Given the two ellipses, we'll check if the thread around the inner ellipse is constant from any point on the outer ellipse.
-We can describe the inner and outer ellipses as follows:
-$$
-\begin{align}
-x &= a \cos(\alpha), & y &= b \sin(\alpha), && 0 < \alpha \leq 2\pi
-\end{align}
-$$<|endoftext|>
-TITLE: Is $\int_{M_{n}(\mathbb{R})} e^{-A^{2}}d\mu$ a convergent integral?(2)
-QUESTION [7 upvotes]: We identify $M_{n}(\mathbb{R})$ with $\mathbb{R}^{n^{2}}$
-We put $\int_{M_{n}(\mathbb{R})} e^{-A^{2}}d\mu=\lim_{r\to \infty} \int_{D_{r}} e^{-A^{2}}$ where the latter is counted as a Riemann integral not a Lebesgue integral. Here $D_{r}$ is the disc of radius $r$ with respect to the Euclidean norm of $\mathbb{R}^{n^{2}}$. Is the above integral a convergent improper integral? What about if we consider $D_{r}$ with respect to the matrix norm?
-The following post shows that this integral is not convergent in the Lebesgue sense. It also shows that if it is Riemann convergent, then the value of the integral is a scalar matrix.
-Is $\int_{M_{n}(\mathbb{R})} e^{-A^{2}}d\mu$ a convergent integral?
-
-REPLY [4 votes]: In dimension $n = 2$, we have
- $$ \int_{D_r} e^{-A^2} \, \mu(dA) = \frac{\pi^2}{2}\left( e^{-r^2} - 1 + r^2 + \frac{r^4}{2} \right) I_2, \tag{*} $$
- where $I_2$ is the identity matrix in $M_2(\Bbb{R})$.
-
-In particular, the improper integral does not converge.
-I have summarized my solution in my blog posting. But here is my idea: Evaluate the integral of the series expansion
-$$ \int_{D_r} e^{-A^2} \, \mu(dA) = \sum_{m=0}^{\infty} \frac{(-1)^m}{m!} \int_{D_r} A^{2m} \, \mu(dA). $$
-term by term. Writing $A$ as
-$$ A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, $$
-notice that each entry of $A^{2m}$ is a homogeneous polynomial of degree $2m$ in variables $a, b, c, d$. With a little bit of observation, we find that
-
-Lemma 1. Let $D$ be any symmetric domain (in the sense that $-D = D$). Then for any $m \geq 0$,
- $$ \int_{D} A^{2m} \, \mu(dA) = I_2 \sum_{i+j+2k=m} \binom{2k+2i}{2i}\binom{2k+2j-1}{2j} \int_{D} a^{2i}d^{2j}(bc)^{2k} \, \mu(dA). \tag{1} $$
- Here we exploited the convention that $\binom{-1}{0} = 1$.
-
-Sketch of proof. Each entry of $A^{2m}$ admits a combinatorial interpretation in terms of transition paths on a 2-state space:
-
-(For example, the path 1→1→1→2→2→2→1→2→1 contributes to the $(1,1)$-entry of $A^8$ with the term $aacddbcb = a^2b^2c^2d^2$.)
-Also, by the symmetry of $D$, we only need to consider terms with even exponents, i.e., of the form
-$$\text{(coef.)}\cdot a^{2i}b^{2j}c^{2k}d^{2l}.$$
-These terms correspond to paths in which every transition occurs an even number of times. A moment of thought shows that such terms correspond to closed paths. Thus these terms only appear in the $(1,1)$-entry and $(2,2)$-entry of $A^{2m}$, and the exponents of $b$ and $c$ are equal.
Then $\text{(1)}$ follows by counting the number of such paths. ////
-Now, each term in $\text{(1)}$ can be easily computed when $D = D_r$:
-
-Lemma 2. For any $i+j+2k = m$ we have
- $$ \int_{D_r} a^{2i}d^{2j}(bc)^{2k} \, \mu(dA) = \frac{(i-\frac{1}{2})!(j-\frac{1}{2})!(k-\frac{1}{2})!^2}{(m+2)!} r^{2m+4}. \tag{2} $$
-
-Of course we are using the convention that $(l-\frac{1}{2})! = \Gamma(l+\frac{1}{2})$ in $\text{(2)}$. We skip the proof since it is easy.
-Somewhat surprisingly, the egregious formula $\text{(1)}$ yields simple values:
-
-Lemma 3. For $m \geq 1$ we have
- $$ \int_{D_r} A^{2m} \, \mu(dA) = \frac{\pi^2}{2(m+1)(m+2)} r^{2m+4} \, I_2. $$
- (When $m = 0$ we have a different value.)
-
-This amounts to proving that
-$$ \sum_{i+j+2k=m} \binom{2k+2i}{2i}\binom{2k+2j-1}{2j} (i-\tfrac{1}{2})!(j-\tfrac{1}{2})!(k-\tfrac{1}{2})!^2 = \frac{m! \pi^2}{2}. $$
-Proof of this fact can be found in my blog posting. Now assuming that this is true, $\text{(*)}$ easily follows.<|endoftext|>
-TITLE: Olympiad problem on the prime numbers
-QUESTION [16 upvotes]: Let $P=\{2,3,5,7,11,...\}$ denote the set of all prime numbers less than $2^{100}$.
-Prove that $\sum_{p\in P} \frac{1}{p} < 8$.
-
-I don't understand how to progress in the problem. Any help would be appreciated. Thank you.
-
-REPLY [3 votes]: The problem can be easily solved if you can use the following bound on the prime counting function $\pi(n)$ (taken from here):
-$$\pi(n) / \frac{n}{\ln{n}} \le C = 1.25506$$
-Rewrite your sum via $\pi(n)$:
-$$
- \sum_{p \in P} \frac{1}{p} = \sum_{k=1}^{2^{100}}\frac{\pi(k) - \pi(k-1)}{k}
-$$
-Now use summation by parts, with $k$ starting from $a > 1$:
-$$
- \sum_{k=a}^{2^{100}}\frac{\pi(k) - \pi(k-1)}{k} =
- \left[ \frac{\pi(2^{100})}{2^{100} + 1} - \frac{\pi(a-1)}{a} \right] - \sum_{k=a}^{2^{100}}\pi(k)\left(\frac{1}{k+1} - \frac{1}{k}\right)
-$$
-Forget about the part in brackets; it should be easy to bound efficiently.
-The sum in the main part can be bounded:
-$$
- -\sum_{k=a}^{2^{100}}\pi(k)\left(\frac{1}{k+1} - \frac{1}{k}\right) =
- \sum_{k=a}^{2^{100}}\frac{\pi(k)}{k(k+1)} \le
- \sum_{k=a}^{2^{100}}\frac{C \; k}{k(k+1)\ln{k}} \le
- C \sum_{k=a}^{2^{100}}\frac{1}{k\ln{k}}
-$$
-We can bound this sum via integration. Define the function $f(x) = (x \ln x)^{-1}$. It monotonically decreases for $x > 1$, so $(k \ln k)^{-1} \le \int_{k-1}^{k} f(x) dx$. Hence:
-$$
- \sum_{k=a}^{2^{100}}\frac{1}{k\ln{k}} \le
- \int\limits_{a-1}^{2^{100}} \frac{dx}{x \ln x} =
- \ln \ln x \bigg\rvert_{a-1}^{2^{100}} \le \ln \ln 2^{100} = 4.23865\ldots
-$$
-Multiplying this value by $C$ given above, we get a bound $5.32 < 8$. However, we also have to add the part in the brackets and the original sum for $k < a$.<|endoftext|>
-TITLE: A Lie group that has an immersion in $\mathrm{GL}(n,\Bbb R)$ but no embedding?
-QUESTION [6 upvotes]: Question: Is there a Lie group $G$ that admits a smooth immersion
- $$i:G\longrightarrow\mathrm{GL}(n,\Bbb R)$$
- for some $n\in\Bbb N$, but no smooth embedding
- $$j:G\longrightarrow\mathrm{GL}(m,\Bbb R)$$
- for any $m\in\Bbb N$? (Here, $i$ and $j$ are also required to be group homomorphisms.)
-
-The usual example of a Lie group which is immersed but not embedded is the group $\Bbb R$ with the immersion
-$$i:\Bbb R\longrightarrow\mathrm{GL}(2,\Bbb C),\quad t\longmapsto\begin{pmatrix}e^{it} & 0 \\ 0 & e^{i\alpha t}\end{pmatrix}$$
-for $\alpha$ irrational. The so-called dense curve on the torus.
However, here $\Bbb R$ does embed in $\mathrm{GL}(n,\Bbb R)$, just in a different way:
-$$j:\Bbb R\longrightarrow \mathrm{GL}(2,\Bbb R),\quad t\longmapsto\begin{pmatrix}1 & t \\ 0 & 1\end{pmatrix}.$$
-There are also well-known groups that have no injective homomorphism into $\mathrm{GL}(n,\Bbb R)$, but then they do not immerse either.
-
-REPLY [2 votes]: There are countable groups (hence, Lie groups) $G$ such that there exists a faithful linear representation of $G$ but there is no faithful linear embedding of $G$. For instance, the 2-step solvable group
-$$
-G=BS(2,1)=\langle a,b: aba^{-1}=b^2\rangle
-$$
-admits an injective homomorphism to $SL(2,R)$ but every injective homomorphism $f: G\to GL(n,R)$ will have non-closed image and, hence, will not be an embedding. The reason for the latter is that a discrete embedding $f: G\to GL(n,R)$ will (up to conjugation) land in the group $B$ of upper triangular matrices, but every discrete subgroup of $B$ is polycyclic (G.D. Mostow, On the fundamental group of a homogeneous space, Ann. of Math. (2) 66 (1957), 249–255), while $BS(2,1)$ is not polycyclic.<|endoftext|>
-TITLE: Errors in math research papers
-QUESTION [19 upvotes]: Have there been cases of errors in math papers that went undetected for so long that they caused subsequent errors in research citing those papers, i.e. errors getting propagated along? My impression is that this type of thing is extremely rare.
-What was the worst case of such a scenario? Thanks.
-
-REPLY [3 votes]: One egregious case recently analyzed in detail by Adrian Mathias is Bourbaki's text Theory of sets and a couple of sequels published by Godement and others. Mathias' paper is:
-Mathias, A. R. D. Hilbert, Bourbaki and the scorning of logic. Infinity and truth, 47–156, Lect. Notes Ser. Inst. Math. Sci. Natl. Univ. Singap., 25, World Sci. Publ., Hackensack, NJ, 2014.
-Mathias analyzes several ubiquitous errors in the book, such as choosing an inappropriate foundation in Hilbert's pre-Goedel epsilon (or tau) operator, confusion of language and metalanguage, missing hypotheses that make certain statements incorrect, and even more serious "editorial comments" suggesting to the reader that certain issues in logic are too complicated to be clarified completely.
-The result was not merely perpetuation of errors in other papers, but the stagnation of logic in France for several generations that only recently has begun to be corrected.<|endoftext|>
-TITLE: I want to calculate the limit of: $\lim_{x \to 0} \left(\frac{2^x+8^x}{2} \right)^\frac{1}{x} $
-QUESTION [8 upvotes]: I want to calculate the limit of: $$\lim_{x \to 0} \left(\frac{2^x+8^x}{2} \right)^\frac{1}{x} $$
-or prove that it does not exist. Now I know the result is $4$, but I am having trouble getting to it. Any ideas would be greatly appreciated.
-
-REPLY [3 votes]: More generally, for $a>0$ and $b>0$,
-$$
-\lim_{x\to0}\frac{1}{x}\log\frac{a^x+b^x}{2}=
-\lim_{x\to0}\frac{\log(a^x+b^x)-\log 2}{x}
-$$
-is the derivative at $0$ of the function $f(x)=\log(a^x+b^x)$.
Since
-$$
-f'(x)=\frac{a^x\log a+b^x\log b}{a^x+b^x}
-$$
-we have
-$$
-f'(0)=\frac{\log a+\log b}{2}=\log\sqrt{ab}
-$$
-Thus
-$$
-\lim_{x\to0}\left(\frac{a^x+b^x}{2}\right)^{1/x}=
-e^{\log\sqrt{ab}}=\sqrt{ab}
-$$
-You might enjoy proving that
-$$
-\lim_{x\to\infty}\left(\frac{a^x+b^x}{2}\right)^{1/x}=\max(a,b)
-$$<|endoftext|>
-TITLE: Find the Range of $f(a,b)=2a+b-3ab$
-QUESTION [6 upvotes]: Let $a,b>0$ be such that $$a^2+b^2-ab=4.$$ Find the range of
-$$f(a,b)=2a+b-3ab$$
-I tried letting $a=x+y$, $b=x-y$; then
-$$a^2+b^2-ab=4\Longrightarrow x^2+3y^2=16,x>y,x>-y$$
-so we let
-$$\begin{align}
-x &=4\cos{t}, \qquad y=\dfrac{4}{\sqrt{3}}\sin{t} \\
-f(a,b) &= 2a+b-3ab \\
-&= \dfrac{3x}{2}+\dfrac{y}{2}-\frac{3}{4}x^2+\dfrac{3}{4}y^2\\
-&=6\cos{t}-\dfrac{2}{\sqrt{3}}\sin{t}-12\cos^2{t}+4\sin^2{t}
-\end{align}$$
-Then I got stuck.
-
-REPLY [2 votes]: I think you have made a mistake in your computations as I did the arithmetic with my CAS. However, if we make the substitutions
-$$\begin{align}
-a &= 2 \cos t + \frac{2}{\sqrt{3}} \sin t = \frac{4}{\sqrt{3}} \sin(t+\frac{\pi}{3}) \\
-b &= 2 \cos t - \frac{2}{\sqrt{3}} \sin t = \frac{4}{\sqrt{3}} \sin(t+\frac{2\pi}{3})
-\end{align}$$
-then the constraint equation
-$$a^2+b^2-ab=4$$
-will be satisfied identically $4=4$. Also, to satisfy $a \gt 0$ and $b \gt 0$ we require that
-$$\begin{cases}
-0 \lt t+\frac{\pi}{3} \lt \pi \\
-0 \lt t+\frac{2\pi}{3} \lt \pi
-\end{cases}
-\to
-\begin{cases}
--\frac{\pi}{3} \lt t \lt \frac{2\pi}{3} \\
--\frac{2\pi}{3} \lt t \lt \frac{\pi}{3}
-\end{cases}
-\to
--\frac{\pi}{3} \lt t \lt \frac{\pi}{3}$$
-So, it remains to work on $f(a,b)=2a+b- 3ab$. Now, if we do the substitution in $f(a,b)$ we will get
-$$\begin{align}g(t) &= f(a(t),b(t)) \\
-&=\frac{2}{\sqrt{3}}\sin t + 6 \cos t - 16 \cos^2 t +4 \\
-\end{align}$$
-Finally, we can use any method in calculus to find the Maximum and Minimum of this continuous function on the interval $(-\frac{\pi}{3},\frac{\pi}{3})$. So the range of $f(a,b)$ with the desired constraints on $a$ and $b$ will be obtained. In fact, we just reduced a constrained multi-variable optimization problem to a usual single variable optimization problem in calculus.<|endoftext|>
-TITLE: Intuition for Kuratowski-Mrówka characterization of compactness
-QUESTION [6 upvotes]: Fact. A space $X$ is compact iff for every space $Y$, the projection $X\times Y\rightarrow Y$ is a closed map.
-The finite subcover definition of compactness seems reasonably intuitive: finite covers mean it can't be too "spread out". This fits in with the characterization by nets, which are just there because sequences may have too few points to measure problems.
-I have no intuition whatsoever for this characterization. How should one visualize it, why should one expect it, etc?
-
-REPLY [6 votes]: What does the closedness of the projection $\pi : X \times Y \to Y$ really mean?
-That the compactness of $X$ implies that every projection $\pi : X \times Y \to Y$ is closed really stems from the Tube Lemma, and a characterisation of closed (continuous) maps.
-
-Tube Lemma. (See Lemma 26.8 from Munkres's Topology.) If $X$ is compact, $Y$ is any space, $y_0 \in Y$, and $W \subseteq X \times Y$ is open such that $X \times \{ y_0 \} \subseteq W$, then there is an open neighbourhood $V$ of $y_0$ such that $X \times V \subseteq W$.
-A continuous function $f : X \to Y$ is closed iff for each $y \in Y$ and each open $U \subseteq X$ with $f^{-1} [\{ y \}] \subseteq U$ there is an open neighbourhood $V$ of $y$ such that $f^{-1} [V] \subseteq U$.
-
-Restating the latter in the special case of the projection $\pi : X \times Y \to Y$ we get
-
-$\pi : X \times Y \to Y$ is closed iff for each $y \in Y$ and each open $W \subseteq X \times Y$ with $X \times \{ y \} = \pi^{-1} [\{y\}] \subseteq W$ there is an open neighbourhood $V$ of $y$ such that $X \times V = \pi^{-1} [ V ] \subseteq W$.
-
-So the statement that the projection $\pi : X \times Y \to Y$ is closed for every $Y$ is really just a restatement of the Tube Lemma for $X$.
-"Intuition" for the sufficiency of the statement
-The Tube Lemma is probably easier to gain an intuition for. It basically says that if $W \subseteq X \times Y$ is open and contains a segment $X \times \{ y_0 \}$ (which is homeomorphic to $X$), then it can't be too "erratic" or become arbitrarily "thin" around that segment.
-
-For each $x \in X$, since $\langle x,y_0 \rangle \in W$ there are open $U_x \subseteq X$ and $V_x \subseteq Y$ such that $\langle x,y_0 \rangle \in U_x \times V_x \subseteq W$. If $X$ is compact, we only need finitely many of the $U_x$s to cover $X$, and the intersection of the corresponding $V_x$s will yield $V$.
-If $X$ is not compact, you can imagine that the $U_x$s form an open cover with no finite subcover, and so when you intersect the corresponding $V_x$s for a family of the $U_x$s which do cover $X$, you might not get an open set as desired.
-
-The trouble then is picking out an appropriate space to witness the latter for each non-compact space.
-Encoding information about an open cover in an auxiliary space
-That the closedness of each projection $\pi : X \times Y \to Y$ implies compactness rests on constructing an auxiliary topological space $Y$ for a given family $\mathcal{U}$ of open subsets of $X$ in such a way that properties of space $Y$ and the projection $\pi : X \times Y \to Y$ encode facts about the family $\mathcal{U}$. In particular, $\pi$ being closed will imply that either $\mathcal{U}$ does not cover $X$, or that a finite subfamily covers $X$.
-The actual construction of $Y$ is a bit synthetic, and may not yield to intuition. But we can try to pick apart some ideas.
-Consider the general case of a topological space $X$, and a family $\mathcal{U}$ of open subsets of $X$. We consider the set $Y = X \cup \{ \mathord{*} \}$ where $\mathord{*} \notin X$ with the topology generated by the basis consisting of
-
-$\{ x \}$ for each $x \in X$; and
-all sets of the form $\{ \mathord{*} \} \cup ( X \setminus ( U_1 \cup \cdots \cup U_n ) )$ where $U_1 , \ldots , U_n \in \mathcal{U}$.
-
-Central Fact. For $A \subseteq Y$, $\mathord{*} \in \overline{A}$ iff $A$ cannot be covered by finitely many sets from $\mathcal{U}$.
-If the projection $\pi : X \times Y \to Y$ is closed, then in particular the implication $$\mathord{*} \in \overline{ \pi[\Delta] } \Rightarrow \mathord{*} \in \pi[ \overline{\Delta} ]$$ holds, where $\Delta = \{ \langle x,x \rangle : x \in X \} \subseteq X \times Y$.
-
-As $\pi[\Delta] = X$, we have that $\mathord{*} \in \overline{\pi[\Delta]}$ iff $X$ cannot be covered by finitely many sets from $\mathcal{U}$.
-Now $\mathord{*} \in \pi [ \overline{\Delta} ]$ iff there is an $x \in X$ such that $\langle x , \mathord{*} \rangle \in \overline{\Delta}$.
-For $x \in X$, if $x \in \bigcup \mathcal{U}$, then take $U \in \mathcal{U}$ containing $x$.
Clearly $U$ is an open neighbourhood of $x$ in $X$, and $\{ \mathord{*} \} \cup ( X \setminus U )$ is an open neighbourhood of $\mathord{*}$ in $Y$, however $( U \times ( \{ \mathord{*} \} \cup ( X \setminus U ) ) ) \cap \Delta = \varnothing$, so $\langle x , \mathord{*} \rangle \notin \overline{\Delta}$. On the other hand, if $x \notin \bigcup \mathcal{U}$, then as every open neighbourhood of $\mathord{*}$ contains $x$ it follows that $\langle x , \mathord{*} \rangle \in \overline{\Delta}$.
-So for $x \in X$ we have that $\langle x,\mathord{*} \rangle \in \overline{\Delta}$ iff $x \notin \bigcup \mathcal{U}$, and so $\mathord{*} \in \pi[\overline{\Delta}]$ iff $\bigcup \mathcal{U} \neq X$.
-
-Putting this together, in order for $\pi$ to be closed it must be the case that either $X$ can be covered by finitely many sets from $\mathcal{U}$, or that $\mathcal{U}$ does not cover $X$.
-We can use the same space $Y$ to investigate how the Tube Lemma for $X$ implies the compactness of $X$. Note that $W = ( X \times Y ) \setminus \overline{ \Delta }$ is an open subset of $X \times Y$. For the Tube Lemma for $X$ to hold, it must be that the implication
-
-$X \times \{ \mathord{*} \} \subseteq W$ ⇒ $X \times V \subseteq W$ for some open neighbourhood $V$ of $\mathord{*}$
-
-holds.
-
-One can show that $X \times \{ \mathord{*} \} \subseteq W$ iff $\bigcup \mathcal{U} = X$. (In particular, as above, $\langle x, \mathord{*} \rangle \in \overline{\Delta}$ iff $x \notin \bigcup \mathcal{U}$.)
-If $V$ is an open neighbourhood of $\mathord{*}$ such that $X \times V \subseteq W$, then without loss of generality there are $U_1 , \ldots , U_n \in \mathcal{U}$ such that $V = \{ \mathord{*} \} \cup ( X \setminus ( U_1 \cup \cdots \cup U_n ) )$. It then follows that $U_1 \cup \cdots \cup U_n = X$.<|endoftext|>
-TITLE: Simple example of "Maximum A Posteriori"
-QUESTION [6 upvotes]: I've been immersing myself into Bayesian statistics in school and I'm having a very difficult time grasping argmax and maximum a posteriori. A quick explanation of this can be found here: https://www.cs.utah.edu/~suyash/Dissertation_html/node8.html
-Basically theta is a set of parameters and x is the data, P of theta given x (posterior) equals P of x given theta (likelihood term) multiplied by P of theta (prior term) and all of that divided by P of x (to normalize). Not sure exactly how dividing by P(x) normalizes this but that's not my main question.
-
-Then, you maximize the posterior with argmax
-
-I believe you're maximizing over the set of parameters to get the most likely posterior ...
-Can someone please give a simple example of this so I can visualize what is happening?
-
-REPLY [4 votes]: Consider flipping fair coins. The outcome of a flip is described by
-the random variable $C$ that can take on the values heads (h) and
-tails (t). The probability of heads is $P(C=h) = 0.5$ and the
-probability of tails is $P(C=t) = 1 - P(C=h)$. Consider also
-flipping a biased coin whose probability of turning up heads is
-$P(C=h) = 0.7$ and tails $P(C=t) = 1 - P(C=h) = 0.3$. We can easily
-solve problems such as calculating the probability of getting three
-tails in a row.
-We can make the model more general by introducing a parameter k to
-describe the probability of getting heads. We write that as
-$P(C=h)=k$ and $P(C=t)=1-k$. The model becomes more complicated, but
-we can now describe coins with any bias!
-We can now ask questions like "Given a coin with bias $k$, what is the
-probability of getting 2 heads and 3 tails?"
The answer is:
-$$
-P(C=h)P(C=h)P(C=t)P(C=t)P(C=t)
-$$
-which simplifies to
-$$
-P(C=h)^2P(C=t)^3 = k^2(1-k)^3
-$$
-Using this we can calculate the maximum likelihood estimate (MLE)
-of the parameter $k$ given the flips we have observed. How do we do
-that? Remember calculus and the method for finding stationary points?
-Yes! We differentiate the expression and set it equal to 0:
-$$
-D\, k^2(1-k)^3 = 2k(1-k)^3 - 3k^2(1-k)^2 = 0
-$$
-Solving for $k$ yields $k = 0.4$. So according to the MLE the
-probability of getting heads is 40%.
-That's it for the MLE. Now let's tackle the maximum a posteriori
-estimate (MAP). To compute it we need to think about the parameter
-$k$. The idea behind MAP is that we have some rough idea about how
-biased our coin is. Then we flip it a few times and update our rough
-idea.
-Our "rough idea" is called the prior, the coin flips the
-observations, and our "rough idea," after considering the
-observations, the posterior.
-The big epiphany that brings us from MLE to MAP is that the parameter
-$k$ should be thought of as a random variable! At first it seems
-strange to think of a probability as a random variable, but it makes
-perfect sense after a while. A priori, we don't know how biased the
-coin is but our hunch could be that it is biased in favor of heads.
-We therefore introduce the random variable $K$ and say that its values
-are drawn from the Beta distribution: $P(K=k) =
-\mathrm{Beta_K}[a,b]$. This is our prior. We won't explain how the
-Beta distribution works because it is beyond the scope of this
-answer. It suffices to know that it is perfect for modeling the bias
-of coins. For the distribution's parameters we choose $a=6$ and $b=2$
-which corresponds to a coin that is heavily biased in favor of
-heads. So $P(K=k) = \mathrm{Beta_K}[6,2]$.
-To get the posterior from the prior, we simply multiply it by the
-likelihood of the observations (the normalizing constant is irrelevant
-for the argmax):
-$$
-P(K=k|C=\{3t, 2h\}) \propto P(C=h,C=h,C=t,C=t,C=t|K=k)P(K=k)
-$$
-We simplify the right hand side the same way as we did for the
-expression for the MLE
-$$
-k^2(1-k)^3\mathrm{Beta_K}[6,2]
-$$
-Wikipedia
-tells us how to expand out the Beta distribution:
-$$
-k^2(1-k)^3\frac{1}{B(6,2)}k^{6-1}(1-k)^{2-1} = \frac{1}{B(6,2)}k^7(1-k)^4
-$$
-Notice how similar the posterior is to the prior... Perhaps it can be
-turned into a Beta distribution itself?! But to get the MAP we don't
-need that. All that is left is differentiating the above expression and
-setting it to 0, exactly as we did when computing the MLE:
-$$
-D\,k^7(1-k)^4 = 7k^6(1-k)^4 - 4k^7(1-k)^3 = 0
-$$
-The $\frac{1}{B(6,2)}$ factor is omitted because it is non-zero and
-constant so it won't affect the maximum.
-Solving for $k$ yields $k = 7/11$. So according to the MAP the
-probability of getting heads is about 64%.
-TL;DR I think two things confused you. First, the argmax
-syntax. My method of differentiating to find the maximum over the parameter
-analytically works in this example, but many times it doesn't. Then you
-have to use other methods to find or approximate it.
-Second, not only events but the parameters themselves can be thought
-of as random variables drawn from fitting distributions. You are
-uncertain of the outcome, whether a coin will land heads or tails, but
-you are also uncertain of the coin's fairness. That is a "higher level
-of uncertainty" which is hard to grasp in the beginning.<|endoftext|>
-TITLE: Prove that $e>2$ geometrically.
-QUESTION [18 upvotes]: Q: Prove that $e>2$ geometrically.
-
-Attempt: I only know one formal definition of $e$, which is $\lim\limits_{n\to\infty} (1+\frac{1}{n})^n=e$. I understand that this is somehow related to rotation in the complex plane.
-$$e^{i\theta}=\cos \theta + i \sin \theta$$
-Hence we have $$e^{i\pi}=-1$$
-But how can I bring out the value of $e$ when I am showing this rotation in a geometrical figure?
-Any hints are appreciated.
-EDIT: As per the comments, I am making a small addition to the question which will not affect the existing answers. It is that, as a definition of $e$, one can use any definition which does not use the fact $2<e<3$.<|endoftext|>
-TITLE: I would like to calculate $\lim_{n \to \infty} {\frac{n+\lfloor \sqrt{n} \rfloor^2}{n-\lfloor \sqrt{n} \rfloor}}$
-QUESTION [6 upvotes]: I would like to calculate the following limit: $$\lim_{n \to \infty} {\frac{n+\lfloor \sqrt{n} \rfloor^2}{n-\lfloor \sqrt{n} \rfloor}}$$
-where $\lfloor x \rfloor$ is the floor of $x$ and $x \in \mathbb{R}$.
-Now I know the result is $2$, but I am having trouble getting to it. Any ideas would be greatly appreciated.
-
-REPLY [5 votes]: You may observe that, as $n \to \infty$,
-$$
-\begin{align}
- {\frac{n+\lfloor \sqrt{n} \rfloor^2}{n-\lfloor \sqrt{n} \rfloor}}&={\frac{2n+(\lfloor \sqrt{n} \rfloor-\sqrt{n})(\lfloor \sqrt{n} \rfloor+\sqrt{n})}{n-\lfloor \sqrt{n} \rfloor}}\\\\
-&={\frac{2+(\lfloor \sqrt{n} \rfloor-\sqrt{n})(\lfloor \sqrt{n} \rfloor+\sqrt{n})/n}{1-\lfloor \sqrt{n} \rfloor/n}}
-\\\\& \to 2
-\end{align}
-$$ since, as $n \to \infty$,
-$$
-\left|\frac{\lfloor \sqrt{n} \rfloor}{n}\right|\leq\frac{\sqrt{n}}{n} \to 0
-$$ and
-$$
-\left|\frac{(\lfloor \sqrt{n} \rfloor-\sqrt{n})(\lfloor \sqrt{n} \rfloor+\sqrt{n})}{n}\right|\leq\frac{2\sqrt{n}}{n} \to 0.
-$$<|endoftext|>
-TITLE: Logistic regression - Prove That the Cost Function Is Convex
-QUESTION [15 upvotes]: I'm reading about Hole House (HoleHouse) - Stanford Machine Learning Notes - Logistic Regression.
-You can do a find on "convex" to see the part that relates to my question.
-Background:
-$h_\theta(X) = sigmoid(\theta^T X)$ --- hypothesis/prediction function
-$y \in \{0,1\}$
-Normally, we would have the cost function for one sample $(X,y)$ as:
-$y(1 - h_\theta(X))^2 + (1-y)(h_\theta(X))^2$
-It's just the squared distance from 1 or 0 depending on y.
-However, the lecture notes mention that this is a non-convex function so it's bad for gradient descent (our optimisation algorithm).
-So, we come up with one that is supposedly convex:
-$y * -log(h_\theta(X)) + (1 - y) * -log(1 - h_\theta(X))$
-You can see why this makes sense if we plot -log(x) from 0 to 1:
-i.e. if y = 1 then the cost goes from $\infty$ to 0 as the hypothesis/prediction moves from 0 to 1.
-My question is:
-How do we know that this new cost function is convex?
-Here is an example of a hypothesis function that will lead to a non-convex cost function:
-$h_\theta(X) = sigmoid(1 + x^2 + x^3)$
-leading to cost function (for y = 1):
-$-log(sigmoid(1 + x^2 + x^3))$
-which is a non-convex function as we can see when we graph it:
-
-REPLY [16 votes]: Here I will prove the below loss function is a convex function.
-\begin{equation}
-L(\theta, \theta_0) = \sum_{i=1}^N \left( - y^i \log(\sigma(\theta^T x^i + \theta_0))
-- (1-y^i) \log(1-\sigma(\theta^T x^i + \theta_0))
-\right)
-\end{equation}
-Then I will show that the loss function below, which the questioner proposed, is NOT a convex function.
-
-\begin{equation}
-L(\theta, \theta_0) = \sum_{i=1}^N \left( y^i (1-\sigma(\theta^T x^i + \theta_0))^2
-+ (1-y^i) \sigma(\theta^T x^i + \theta_0)^2
-\right)
-\end{equation}
-To prove that solving a logistic regression using the first loss function is solving a convex optimization problem, we need two facts (proved below).
-$
-\newcommand{\reals}{{\mathbf{R}}}
-\newcommand{\preals}{{\reals_+}}
-\newcommand{\ppreals}{{\reals_{++}}}
-$
-Suppose that $\sigma: \reals \to \ppreals$ is the sigmoid function defined by
-\begin{equation}
-\sigma(z) = 1/(1+\exp(-z))
-\end{equation}
-
-The functions $f_1:\reals\to\reals$ and $f_2:\reals\to\reals$ defined by $f_1(z) = -\log(\sigma(z))$ and $f_2(z) = -\log(1-\sigma(z))$ respectively are convex functions.
-A (twice-differentiable) convex function of an affine function is a convex function.
-
-Proof) First, we show that $f_1$ and $f_2$ are convex functions. Since
-\begin{eqnarray}
-f_1(z) = -\log(1/(1+\exp(-z))) = \log(1+\exp(-z)),
-\end{eqnarray}
-\begin{eqnarray}
-\frac{d}{dz} f_1(z) = -\exp(-z)/(1+\exp(-z)) = -1 + 1/(1+\exp(-z)) = -1 + \sigma(z),
-\end{eqnarray}
-the derivative of $f_1$ is a monotonically increasing function and $f_1$ is a (strictly) convex function (Wiki page for convex function).
-Likewise, since
-\begin{eqnarray}
-f_2(z) = -\log(\exp(-z)/(1+\exp(-z))) = \log(1+\exp(-z)) +z = f_1(z) + z
-\end{eqnarray}
-\begin{eqnarray}
-\frac{d}{dz} f_2(z) = \frac{d}{dz} f_1(z) + 1.
-\end{eqnarray}
-Since the derivative of $f_1$ is a monotonically increasing function, that of $f_2$ is also a monotonically increasing function, hence $f_2$ is a (strictly) convex function, hence the proof.
-Now we prove the second claim. Let $f:\reals^m\to\reals$ be a twice-differentiable convex function, $A\in\reals^{m\times n}$, and $b\in\reals^m$. And let $g:\reals^n\to\reals$ be such that $g(y) = f(Ay + b)$. Then the gradient of $g$ with respect to $y$ is
-\begin{equation}
-\nabla_y g(y) = A^T \nabla_x f(Ay+b) \in \reals^n,
-\end{equation}
-and the Hessian of $g$ with respect to $y$ is
-\begin{equation}
-\nabla_y^2 g(y) = A^T \nabla_x^2 f(Ay+b) A \in \reals^{n \times n}.
-\end{equation}
-Since $f$ is a convex function, $\nabla_x^2 f(x) \succeq 0$, i.e., it is a positive semidefinite matrix for all $x\in\reals^m$. Then for any $z\in\reals^n$,
-\begin{equation}
-z^T \nabla_y^2 g(y) z = z^T A^T \nabla_x^2 f(Ay+b) A z
-= (Az)^T \nabla_x^2 f(Ay+b) (A z) \geq 0,
-\end{equation}
-hence $\nabla_y^2 g(y)$ is also a positive semidefinite matrix for all $y\in\reals^n$ (Wiki page for convex function).
-Now the objective function to be minimized for logistic regression is
-\begin{equation}
-\begin{array}{ll}
-\mbox{minimize} &
-L(\theta) = \sum_{i=1}^N \left( - y^i \log(\sigma(\theta^T x^i + \theta_0))
-- (1-y^i) \log(1-\sigma(\theta^T x^i + \theta_0))
-\right)
-\end{array}
-\end{equation}
-where $(x^i, y^i)$ for $i=1,\ldots, N$ are $N$ training data. Now this is the sum of convex functions of linear (hence, affine) functions in $(\theta, \theta_0)$. Since the sum of convex functions is a convex function, this problem is a convex optimization problem.
-Note that if we maximized the loss function instead, it would NOT be a convex optimization problem. So the direction is critical!
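-As a numerical sanity check of the two convexity claims (a hedged sketch, not part of the proof; the grid on $[-10,10]$ and the step size are arbitrary choices):
-
-    import math
-
-    def f1(z):  # -log(sigmoid(z)) = log(1 + exp(-z)); proved convex above
-        return math.log(1.0 + math.exp(-z))
-
-    def g(z):   # sigmoid(z)^2, the building block of the questioner's loss
-        return (1.0 / (1.0 + math.exp(-z))) ** 2
-
-    h = 1e-3
-    grid = [x / 10 for x in range(-100, 101)]
-    # Second differences approximate the second derivative; convexity
-    # requires them to be nonnegative everywhere.
-    print(min(f1(z - h) - 2 * f1(z) + f1(z + h) for z in grid))  # positive
-    print(min(g(z - h) - 2 * g(z) + g(z + h) for z in grid))     # negative
-
-On this grid the first minimum comes out positive and the second negative, matching the claim that the log loss is convex in $z$ while $\sigma(z)^2$ is not.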
-Note also that, whether the algorithm we use is stochastic gradient descent, just gradient descent, or any other optimization algorithm, it solves a convex optimization problem, and that even if we use nonconvex nonlinear kernels for feature transformation, it is still a convex optimization problem since the loss function is still a convex function in $(\theta, \theta_0)$.
-Now the new loss function proposed by the questioner is
-\begin{equation}
-L(\theta, \theta_0) = \sum_{i=1}^N \left( y^i (1-\sigma(\theta^T x^i + \theta_0))^2
-+ (1-y^i) \sigma(\theta^T x^i + \theta_0)^2
-\right)
-\end{equation}
-First we show that $f(z) = \sigma(z)^2$ is not a convex function in $z$. If we differentiate this function, we have
-\begin{equation}
-f'(z) = \frac{d}{dz} \sigma(z)^2 = 2 \sigma(z) \frac{d}{dz} \sigma(z)
-= 2 \exp(-z) / (1+\exp(-z))^3.
-\end{equation}
-Since $f'(0)=1/4>0$ and $\lim_{z\to\infty} f'(z) = 0$ (and $f'(z)$ is differentiable), the mean value theorem implies that there exists $z_0\geq0$ such that $f''(z_0) < 0$. Since a twice-differentiable convex function must have a nonnegative second derivative everywhere, $f(z)$ is NOT a convex function.
-Now if we let $N=1$, $x^1 = 1$, $y^1 = 0$, $\theta_0=0$, and $\theta\in\reals$, then $L(\theta, 0) = \sigma(\theta)^2$, hence $L(\theta,0)$ is not a convex function, which completes the proof!
-However, solving a non-convex optimization problem using gradient descent is not necessarily a bad idea. (Almost) all deep learning problems are solved by stochastic gradient descent because it's essentially the only way to solve them (other than evolutionary algorithms).
-I hope this is a self-contained (strict) proof for the argument. Please leave feedback if anything is unclear or I made mistakes.
-Thank you.
- Sunghee<|endoftext|>
-TITLE: Why do we need "span" in linear algebra?
-QUESTION [11 upvotes]: In my linear algebra course in university we started learning about span, and I was curious what it is good for, and, if someone knows, how does it relate to 3D graphics?
-Thank you.
-
-REPLY [4 votes]: Linear algebra is not a theory about vectors, it's a theory about spaces.
-In set theory, you can reason about a set $S$ by representing elements of it symbolically, i.e. with variables $x, y, z,$ whathaveyou. If we're reasoning about three elements at a time, we can think of this as probing $S$ with a function from a three-element set $\{ x,y,z \}$ to $S$ (i.e., an arrow $\{ x, y, z \} \rightarrow S$).
-In linear algebra, analogously, you use the free vector space $F(\{ x,y,z \})$ over $\{ x,y,z \}$, which is like the standard 3D space over the field of scalars, to probe a vector space $V$, by way of a linear map $F(\{ x,y,z \}) \rightarrow V$. Instead of getting a subset of at most three elements of $S$, you get a subspace of $V$ of dimension at most 3. This is the image of the linear map, and is the same as the span of the values of $x, y,$ and $z$ under the map.
-The "dimension at most 3" part is relevant for linear maps in graphics. If you want to make sure you project the outline of a cube into 2D so that you can see every face, what you're saying is you want no two edges drawn to overlap in a line segment. So, for every point of the cube, you want it so that if you take that point as your origin for a 2D vector space, every two points connected to it by an edge span the plane. Embedding the 2D space into 3D as a plane offset from the origin, you can use affine span instead of linear span and a reference to each origin: Try taking the affine span of (1,0) and (0,1) in 2D, and then its convex hull.
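-(To spell that last exercise out, under the usual definitions: the affine span of the two points is the whole line through them, while their convex hull is only the segment between them,
-$$\operatorname{aff}\{(1,0),(0,1)\}=\{(t,\,1-t):t\in\mathbb{R}\},\qquad \operatorname{conv}\{(1,0),(0,1)\}=\{(t,\,1-t):t\in[0,1]\},$$
-and their linear span is all of $\mathbb{R}^2$, since the two vectors are linearly independent.)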
All three of these spans are closure operators, and they are generally helpful for solving problems that have to do with arranging things a certain way.
-But something I've just had to draw is this 3D solid which has a special point at the center and spokes coming out of it, and where the points at the end of the spokes are all distributed around the unit sphere around the special point. Then, since you can see all the points from the center, you only need to check that if you take the center to be the origin, you can distinguish all the points as you see them on the sphere from the origin. Hence, to have a 3D projection like this that keeps all the points and edges separate, it suffices for the linear span of every two spoke terminals to be 2D and not 1D. Why say it that way? Because you sometimes have to look at the shape before you draw it, and if you can't tell from the drawing whether the points are the same, you need a way to test for that. So for this problem, the linear span is a tool for designing a projection, and for describing a constraint on the program in a way that the same libraries you're probably writing the project code in can deal with.<|endoftext|>
-TITLE: Show that $P(T \le n + N \mid \mathscr F_n) > \epsilon$ where $T$ is a stopping time
-QUESTION [5 upvotes]: Given random variables $Y_1, Y_2, \ldots \stackrel{iid}{\sim} P(Y_i = 1) = p = 1 - q = 1 - P(Y_i = -1)$ where $p > q$ in a filtered probability space $(\Omega, \mathscr F, \{\mathscr F_n\}_{n \in \mathbb N}, \mathbb P)$ where $\mathscr F_n = \mathscr F_n^Y$,
-define $X = (X_n)_{n \ge 0}$ where $X_n = a + \sum_{i=1}^{n} Y_i$ where $0 < a$.
-It can be shown that the stochastic process $M = (M_n)_{n \ge 0}$ where $M_n = X_n - n(p-q)$ is a $(\{\mathscr F_n\}_{n \in \mathbb N}, \mathbb P)$-martingale.
-Let $b > a$ be a positive integer and $T:= \inf\{n: X_n = 0 \text{ or } X_n = b\}$.
-It can be shown that $T$ is a $\{\mathscr F_n\}_{n \in \mathbb N}$-stopping time.
-Show that $\exists N \in \mathbb N, \epsilon > 0$ s.t. $\forall n \in \mathbb N$,
-$$P(T \le n + N \mid \mathscr F_n) > \epsilon \ a.s.$$
-
-What I tried:
-By using induction on $n$, we have for the base case:
-Suppose $P(T \le N) > \epsilon$. Show that $P(T \le N+1 | \mathscr F_1) > \epsilon$.
-I tried considering $P(T \le N+1 | \mathscr F_1)$ and then hopefully I could use the assumption somewhere:
-$$P(T \le N+1 | \mathscr F_1) = E[1_{T \le N+1} | \mathscr F_1]$$
-$$= E[1_{T \le N} 1_{T = N+1} | \mathscr F_1]$$
-$$= E[E[1_{T \le N} 1_{T = N+1} | \mathscr F_N] | \mathscr F_1]$$
-$$= E[1_{T \le N} E[ 1_{T = N+1} | \mathscr F_N] | \mathscr F_1]$$
-Now what is $E[ 1_{T = N+1} | \mathscr F_N]$ exactly?
-Well, up to time $N$ we have already hit $X_n = b$, or we haven't. If we have, then $E[ 1_{T = N+1} | \mathscr F_N] = 0$. Otherwise, $E[ 1_{T = N+1} | \mathscr F_N] = p1_{X_{N} = b-1}$. Continuing:
-$$= E[1_{T \le N} p1_{X_{N} = b-1} | \mathscr F_1]$$
-$$= pE[1_{T \le N} 1_{X_{N} = b-1} | \mathscr F_1]$$
-$$= pE[1_{T \le N-1} 1_{X_{N} = b-1} | \mathscr F_1]$$
-Similarly, I got
-$$= p^2 E[1_{T \le N-2} 1_{X_{N-1} = b-2} | \mathscr F_1]$$
-$$= p^2 E[1_{T \le N-2} E[1_{X_{N-1} = b-2} | \mathscr F_{n-2}] | \mathscr F_1]$$
-However, I'm not quite sure that
-$$E[1_{X_{N-1} = b-2} | \mathscr F_{n-2}] = p1_{X_{N-2} = b-3}$$
-I think we have that
-$$E[1_{X_{N-1} = b-2} | \mathscr F_{n-2}] = p1_{X_{N-2} = b-3} + q1_{X_{N-2} = b-1}$$
-Um, am I on the right track? Did I make a mistake somewhere?
-
-REPLY [3 votes]: Step 1.
Define $T_x$ by -$$ T_x = \inf\{n \geq 0 : x+Y_1+\cdots+Y_n \in \{0, b\} \}. $$ -For any given $x$, we know that $x+Y_1+\cdots+Y_n \to \infty$ almost surely as $n\to\infty$. (For instance, the SLLN is enough to justify this.) So we have $\Bbb{P}(T_x > N) < 1$ for any sufficiently large $N$. Then we can choose $N$ such that -$$ c := \max_{0 < x < b} \Bbb{P}(T_x > N) < 1. $$ -Step 2. We claim that the inequality holds with this $N$ and $\epsilon = 1-c$. To this end, write -\begin{align*} -\Bbb{P}(T > n+N \mid \mathscr{F}_n) -&= \Bbb{E}[ \mathbf{1}_{\{T > n+N\}} \mid \mathscr{F}_n] \\ -&= \Bbb{E}[ \mathbf{1}_{\{T-n > N\}}\mathbf{1}_{\{T > n\}} \mid \mathscr{F}_n] \\ -&= \sum_{x : 0 < x < b } \Bbb{E}[ \mathbf{1}_{\{T-n > N\}} \mathbf{1}_{\{T > n, X_n = x \}} \mid \mathscr{F}_n] -\end{align*} -Now define $\tilde{T}_x$ by -$$ \tilde{T}_x := \inf\{k \geq 0 : x + Y_{n+1} + \cdots + Y_{n+k} \in \{0, b\}\}. $$ -Then given $\{T > n, X_n = x\}$, we have $T-n = \tilde{T}_x$. Also it is clear that $\tilde{T}_x$ is independent of $\mathscr{F}_n$ and has the same distribution as $T_x$. So $\Bbb{P}$-a.s., we have -\begin{align*} -\Bbb{P}(T > n+N \mid \mathscr{F}_n) -&= \sum_{x : 0 < x < b } \Bbb{P}(T_x > N) \mathbf{1}_{\{T > n, X_n = x \}} \\ -&\leq \sum_{x : 0 < x < b } (1-\epsilon) \mathbf{1}_{\{T > n, X_n = x \}} \\ -&\leq 1-\epsilon. -\end{align*} -This is equivalent to the desired inequality. - -Remark. If you have acquaintance with Markov property, you will see that Step 2 is a typical Markov property argument. In this case, we can shorten Step 2 as follows: $\Bbb{P}^a$-almost surely, -\begin{align*} -\Bbb{P}^a(T > n+N \mid \mathscr{F}_n) -&= \Bbb{E}^a[ \mathbf{1}_{\{T-n > N\}}\mathbf{1}_{\{T > n\}} \mid \mathscr{F}_n] \\ -&= \Bbb{P}^{X_n}(T > N) \mathbf{1}_{\{T > n\}} \\ -&\leq 1-\epsilon. -\end{align*}<|endoftext|> -TITLE: Proof that trace of 'hat' matrix in linear regression is rank of X -QUESTION [15 upvotes]: I understand that the trace of the projection matrix (also known as the "hat" matrix) X*Inv(X'X)*X' in linear regression is equal to the rank of X. How can we prove that from first principles, i.e. without simply asserting that the trace of a projection matrix always equals its rank? -I am aware of the post Proving: "The trace of an idempotent matrix equals the rank of the matrix", but need an integrated proof. - -REPLY [23 votes]: If $X$ is $n \times m$ with $m \le n$ and has full rank, then $rank (X) = \min(n,m) = m$, and we know $(X^T X)^{-1}$ exists. -By commutativity of the trace operator, we have -$$tr(H) := tr (X (X^T X)^{-1} X^T) = tr (X^T X (X^T X)^{-1} ) = tr[I_m] = m$$<|endoftext|> -TITLE: Cardinality of power sets decides all of cardinal arithmetic? -QUESTION [15 upvotes]: Assuming ZFC, is it possible to have two models which agree on the cardinality of all the power sets, but disagree on the cardinality of some other cardinal exponentiation (meaning that they agree on the function $F$ such that $F(\alpha) = \beta$ iff $2^{\aleph_\alpha} = \aleph_\beta$, but in one model ${\aleph_\alpha} ^ {\aleph_\beta} = {\aleph_\gamma}$ whereas in the second model ${\aleph_\alpha} ^ {\aleph_\beta} = {\aleph_\delta}$ with $\gamma \neq \delta$)? -Put another way: If we decide (for example by forcing) the cardinality of every power set ($2^{\aleph_\alpha}$ for all $\alpha$), does it automatically decide the result of every possible cardinal exponentiation (${\aleph_\alpha} ^ {\aleph_\beta}$ for all $\alpha,\beta$)? 
-For example, when we assume GCH, we have an immediate formula for the cardinality of every cardinal exponentiation (see Jech 5.15), and we have no "freedom" to choose alternative values for them.
-
-REPLY [9 votes]: The answer is no, assuming the existence of a model of $\mathsf{ZFC+GCH}$ having a supercompact cardinal.
-First, using Silver's forcing, there is a generic extension $V[K]$ where $\kappa$ is still measurable but $2^{\kappa}=\kappa^{++}$. Then we can use Prikry's forcing to obtain a generic extension $V[K][H]$ of $V[K]$ such that all bounded subsets of $\kappa$ are in $V[K]$, all cardinals are preserved, $\kappa$ is still a strong limit and $\operatorname{cf}\kappa=\omega$. Let $G=K\ast H$.
-As we assumed $V\models\mathsf{GCH}$, it's not hard to prove that in $V[G]$ we have $2^\lambda=\lambda^+$ for all $\lambda>\kappa$, since the poset yielding $V[K]$ has size $\kappa^{++}$ in $V$ and the poset giving the extension $V[K][H]$ has size $\kappa^{++}$ in $V[K]$.
-Now let us work in $V[G]$. We have $$\kappa^{\aleph_0}=\kappa^{\operatorname{cf}\kappa}=2^{\kappa}=\kappa^{++},$$ and $$(\kappa^{+3})^{\omega_1}=(2^{\kappa^{++}})^{\omega_1}=2^{\kappa^{++}}=\kappa^{+3},$$ thus we can force with $Add(\omega_1,\kappa^{+3})$ to obtain a generic extension $V[G][H']$ where $2^{\omega_1}=\kappa^{+3}$, and $2^{\lambda}=\kappa^{+3}$ for all $\omega_1\leq\lambda\leq \kappa^{++}$. In $V[K]$, $\kappa$ is measurable, thus there $\kappa=\aleph_\kappa$, so as cardinals are preserved in $V[G]$ we get that $\kappa=\aleph_\kappa$ is also true in $V[G][H']$.
-Let $\beta_0$ be such that $2^{\aleph_0}=\aleph_{\beta_0}$ in $V[G]$. Then $\beta_0<\kappa$, as all bounded subsets of $\kappa$ in $V[G]$ are in $V[K]$. We also have $2^{\aleph_0}=\aleph_{\beta_0}$ in $V[G][H']$.
-Thus if we consider the following function
-$$F(\alpha)=\begin{cases} \beta_0 & \text{if}&\alpha=0 \\\kappa+3 & \text{if}& 1\leq\alpha\leq\kappa+2\\\alpha+1 &\text{if}&\alpha\geq\kappa+3 \end{cases},$$
-it follows that for all ordinals $\alpha$, $$V[G][H']\models 2^{\aleph_\alpha}=\aleph_{F(\alpha)},$$
-and as the poset we used in $V[G]$ is $<\omega_1$-closed there, we get that $\kappa^{\aleph_0}=\kappa^{++}$ in this extension too.
-Now, let $\mathbb P\in L$ be a poset, cardinal preserving, such that if $K'$ is $L$-generic over $\mathbb P$, we have for all ordinals $\alpha$, $$L[K']\models 2^{\aleph_\alpha}=\aleph_{F(\alpha)}.$$
-In $L$, $\kappa$ is inaccessible, and thus in this model $\aleph_\kappa=\kappa$, so as $\mathbb P$ preserves cardinals we get that $\aleph_\kappa=\kappa$ in $L[K']$ too.
-Let us work in $L[K']$. The singular cardinals hypothesis is true there, since $0^\sharp$ does not exist, so we get that, as $2^{\aleph_0}<\kappa$ and $\kappa$ is regular, $\kappa^{\aleph_0}=\kappa$.
-Therefore we have that in both models $V[G][H']$ and $L[K']$, $2^{\aleph_\alpha}=\aleph_{F(\alpha)}$ for all ordinals $\alpha$, but
-$$V[G][H']\models \aleph_\kappa^{\aleph_0}=\aleph_{\kappa+2}\text{ and }L[K']\models \aleph_\kappa^{\aleph_0}=\aleph_\kappa.$$
-
-Note: This argument should go through with no problem using just a measurable cardinal $\kappa$ of Mitchell order $\kappa^{++}$, working in Mitchell's model for such $\kappa$, using Gitik and Woodin's forcing.
However, as I'm not that familiar with this method, I used Silver's instead.<|endoftext|>
-TITLE: Conformally mapping an ellipse into the unit circle
-QUESTION [9 upvotes]: I'm currently studying for a complex analysis final and I don't think I've really developed the intuition for conformal mappings yet. I'm attempting a problem from Ahlfors: map the outside of the ellipse $(x/a)^2+(y/b)^2=1$ onto $|w|<1$ with preservation of symmetries. I believe I should use the inverse of the Joukowski transformation at some point (as it maps ellipses to circles) to get a circle of radius $R$ and then rescale. However, I run into trouble when I try to find an $R$ that will work. Any thoughts?
-
-REPLY [10 votes]: It is through a special Joukowski transformation $z=\alpha w + \beta/w$ with real constants $\alpha$ and $\beta$ (assuming $z=x+iy$ and $w=u+iv$). The constants are determined using the fact that the boundary of the ellipse is mapped to the boundary of the disk $|w|=1$, or
-$$ z = x+iy = \alpha (u+iv)+\beta/(u+iv)=\alpha (u+iv)+\beta (u-iv).$$
-Then the equation $u^2+v^2=1$ becomes
-$$ \frac{x^2}{(\alpha+\beta)^2} + \frac{y^2}{(\alpha-\beta)^2}=1,$$
-from which you can determine $\alpha$ and $\beta$ by $|\alpha+\beta|=a, |\alpha-\beta|=b$.<|endoftext|>
-TITLE: Classification of Finite Topologies
-QUESTION [12 upvotes]: Does there exist a classification of finite topologies?
-I define a finite topology as a finite set $T$ of sets which respects the following properties:
-
-$\forall a,b \in T: a \cap b \in T$,
-$\forall a,b \in T: a \cup b \in T$,
-$ \emptyset \in T$,
-$\exists S\in T\ |\ \forall a \in T , a \subseteq S$.
-
-This seems like a natural thing to do in the vein of classifying finite groups, so I'm curious what current research in this area looks like.
-
-REPLY [8 votes]: There is a huge amount of literature about finite topologies. Actually this topic is one of the major chapters in universal algebra, under the name of distributive lattices. Namely, sets $L$ endowed with two associative, commutative and idempotent operations $\vee$ (“join”) and $\wedge$ (“meet”) which furthermore satisfy the following equations:
-$$
-x\vee(x\wedge y) = x = x\wedge(x\vee y)
-$$
-(absorption), and
-$$
-x\vee(y\wedge z) = (x\vee y)\wedge (x\vee z)
-$$
-$$
-x\wedge(y\vee z) = (x\wedge y)\vee (x\wedge z)
-$$
-(distributivity). In the case at hand, we are looking at bounded distributive lattices, i.e., having two elements $0$ and $1$ that satisfy
-$$
-x \vee 0 = x \qquad x \vee 1 = 1
-$$
-for all $x\in L$. You'll check immediately that every finite topology on a set $S$ is a concrete interpretation of these axioms, since $\cup$ and $\cap$, $\emptyset$ and $S$ satisfy the defining identities. Moreover, every finite bounded distributive lattice is isomorphic to some finite topology on a finite set (considered as an algebraic structure): This follows from Priestley's representation theorem.
-Just perform a web search for more on this.<|endoftext|>
-TITLE: Prove that the limit of $\sqrt{n+1}-\sqrt{n}$ is zero
-QUESTION [12 upvotes]: How would I go about proving that $\lim_{n\to\infty}\sqrt{n+1}-\sqrt{n}=0$? I have tried to use the Squeeze theorem but have not been able to come up with bounds that converge to zero. Additionally, I don't think that converting to polar is possible here.
-
-REPLY [11 votes]: One way is by using the mean value theorem. Specifically, let $f(x) = \sqrt x$.
Then, for each $x > 0$, we know that $\displaystyle f(x+1) - f(x) = \frac{f(x+1) - f(x)}{(x+1) - x} = f'(c)$ for some $c$ in the interval $(x, x+1)$. Since $\displaystyle f'(x) = \frac1{2\sqrt x}$ is strictly decreasing we conclude that $0 < \displaystyle \sqrt{x+1} - \sqrt{x} < \frac1{2\sqrt x}$.<|endoftext|>
-TITLE: If $\small {x+\sqrt { (x+1)(x+2) } +\sqrt { (x+2)(x+3) } +\sqrt { (x+3)(x+1) } = 4}$, solve for $x$.
-QUESTION [8 upvotes]: I came across this olympiad algebra problem, asking to solve for $x$:
-$x\ +\ \sqrt { (x+1)(x+2) } \ +\ \sqrt { (x+2)(x+3) } +\ \sqrt { (x+3)(x+1) } =\ 4$
-Here was my try:
-If $$x\ +\ \sqrt { (x+1)(x+2) } \ +\ \sqrt { (x+2)(x+3) } +\ \sqrt { (x+3)(x+1) } =\ 4$$
-Then $\quad \sqrt { (x+1)(x+2) } +\sqrt { (x+2)(x+3) } +\sqrt { (x+3)(x+1) } =4-x$.
-Further, I tried squaring the equation on both sides, but that doesn't seem to solve my problem. Please help.
-Thank you.
-
-REPLY [6 votes]: While Kf-Sansoo has given an elegant answer, if the problem asks for any real (and not just rational) solution, then it misses a second one which is a root of a quartic equation, hence normally is not easy to do by hand.
-In general, the two solutions to,
-$$x+\sqrt{(x+1)(x+2)}+\sqrt{(x+2)(x+3)}+\sqrt{(x+3)(x+1)} = n\tag1$$
-for real $n>0$ are,
-$$x = \frac{(n^2+4n+5)^2}{4(n+1)(n+2)(n+3)}-2\tag{2a}$$
-and the appropriate root of,
-$$-23 + 48 n - 22 n^2 + n^4 - 4 (30 - 33 n + 6 n^2 + n^3) x \\+
- 16 (-11 + 6 n) x^2 + 16 (-6 + n) x^3 - 16 x^4=0\tag{2b}$$
-For $n = 4$, we have $x_1 = -311/840 \approx -0.37$. Then $x_2 \approx -5.12357$ as a root of,
-$$73 - 232 x + 208 x^2 - 32 x^3 - 16 x^4 = 0$$
-with both valid for the positive case of $\sqrt{z}$ as the graph from WolframAlpha shows (graph omitted).
-
-$\color{green}{Edit:}$ When using Kf-Sansoo's method, we end up with an expression of the form,
-$$\prod^4 (c_1\sqrt{x+1}\pm c_2\sqrt{x+2}\pm c_3\sqrt{x+3}) = 0$$
-Let $n=4$ and we get $x = -311/840$. Simpler, but the price to pay is we lose a second solution. Another method is to form an octic,
-$$\prod^8 \big(y-(\pm\sqrt{z_1}\pm \sqrt{z_2}\pm \sqrt{z_3})\big)=0 \tag3$$
-After it is formed, substitute into $(3)$ the following,
-$$y = n-x\\z_1=(x+1)(x+2)\\z_2=(x+2)(x+3)\\z_3=(x+3)(x+1)$$
-and we get linear/quartic factors given by $(2a), (2b)$. Less simple, but it yields a second valid solution.<|endoftext|>
-TITLE: Integral $\int_0^\infty\frac{\tanh^2(x)}{x^2}dx$
-QUESTION [40 upvotes]: It appears that
-$$\int_0^\infty\frac{\tanh^2(x)}{x^2}dx\stackrel{\color{gray}?}=\frac{14\,\zeta(3)}{\pi^2}.\tag1$$
-(so far I have about $1000$ decimal digits to confirm that).
-After changing variable $x=-\tfrac12\ln z$, it takes an equivalent form
-$$\int_0^1\frac{(1-z)^2}{z\,(1+z)^2 \ln^2z}dz\stackrel{\color{gray}?}=\frac{7\,\zeta(3)}{\pi^2}.\tag2$$
-Quick lookup in Gradshteyn—Ryzhik and Prudnikov et al. did not find this integral, and it also is returned unevaluated by Mathematica and Maple. How can we prove this result? Am I overlooking anything trivial?
-Further questions: Is it possible to generalize it and find a closed form of
-$$\mathcal A(a)=\int_0^\infty\frac{\tanh(x)\tanh(ax)}{x^2}dx,\tag3$$
-or at least of a particular case with $a=2$?
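-(For reference, conjecture $(1)$ is easy to reproduce numerically at modest precision; a minimal sketch in Python, assuming the mpmath library is available. A $1000$-digit check would of course require a much larger working precision.)
-
-    from mpmath import mp, tanh, zeta, pi, quad, inf
-
-    mp.dps = 30  # working precision, in decimal digits
-
-    lhs = quad(lambda x: tanh(x)**2 / x**2, [0, inf])
-    rhs = 14 * zeta(3) / pi**2
-    print(lhs)        # 1.70511...
-    print(lhs - rhs)  # ~ 1e-29, consistent with (1)
-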
-Can we generalize it to higher powers -$$\mathcal B(n)=\int_0^\infty\left(\frac{\tanh(x)}x\right)^ndx?\tag4$$ - -Thanks to nospoon's comment below, we know that -$$\mathcal B(3)=\frac{186\,\zeta(5)}{\pi^4}-\frac{7\,\zeta(3)}{\pi^2}\tag5$$ -I checked higher powers for this pattern, and, indeed, it appears that -$$\begin{align}&\mathcal B(4)\stackrel{\color{gray}?}=-\frac{496\,\zeta(5)}{3\,\pi^4}+\frac{2540\,\zeta(7)}{\pi^6}\\ -&\mathcal B(5)\stackrel{\color{gray}?}=\frac{31\,\zeta(5)}{\pi^4}-\frac{3175\,\zeta(7)}{\pi^6}+\frac{35770\,\zeta(9)}{\pi^8}\\ -&\mathcal B(6)\stackrel{\color{gray}?}=\frac{5842\,\zeta(7)}{5\,\pi^6}-\frac{57232\,\zeta(9)}{\pi^8}+\frac{515844\,\zeta(11)}{\pi^{10}}\end{align}\tag6$$ - -REPLY [3 votes]: Using the Taylor series for $\tan\left(z+\frac{(2k+1)\pi}2\right)=-\frac1z+\frac z3+O\!\left(z^3\right)$ and $i\tanh(z)=\tan(iz)$, we get -$$\newcommand{\Res}{\operatorname*{Res}} -\tanh^2\left(z+i\frac{(2k+1)\pi}2\right)=\frac1{z^2}+\frac23+O\!\left(z^2\right)\tag1 -$$ -Using the Taylor series for $\frac1{1+z}=1-z+O\!\left(z^2\right)$, we get -$$ -\frac1{\left(z+i\frac{(2k+1)\pi}2\right)^2}=-\frac4{(2k+1)^2\pi^2}-\frac{16iz}{(2k+1)^3\pi^3}+O\!\left(z^2\right)\tag2 -$$ -Therefore, with $z_k=i\frac{(2k+1)\pi}2$ we get that -$$ -\Res\limits_{z=z_k}\left(\frac{\tanh^2(z)}{z^2}\right)=-\frac{16i}{(2k+1)^3\pi^3}\tag3 -$$ -We can use the contour of integration - -$$ -\begin{align} -\int_0^\infty\frac{\tanh^2(x)}{x^2}\,\mathrm{d}x -&=\frac12\int_{-\infty}^\infty\frac{\tanh^2(x)}{x^2}\,\mathrm{d}x\tag{4a}\\ -&=\pi i\sum_{k=0}^\infty\frac{-16i}{(2k+1)^3\pi^3}\tag{4b}\\ -&=\frac{14}{\pi^2}\zeta(3)\tag{4c} -\end{align} -$$ -Explanation: -$\text{(4a)}$: use symmetry -$\text{(4b)}$: the integral along the contour is $2\pi i$ times the sum of the residues inside -$\text{(4c)}$: $\sum\limits_{k=0}^\infty\frac1{(2k+1)^3}=\frac78\zeta(3)$ - -The answer for $(3)$ from the question, with $a=2$, is given in this answer.<|endoftext|> -TITLE: Determinant of a $2n$ square block matrix in which all blocks commute -QUESTION [5 upvotes]: Problem: Let $A , B , C , D$ be commuting $n$-square matrices. Consider the $2n$-square block matrix -$$M=\begin{pmatrix} A & B \\ C & D\end{pmatrix}$$ -Prove that $|M|= |A||D| - |B||C|$, where $|M|$ means the determinant. -I should also state that this from a beginning Linear Algebra book, so I have not studied any fancy determinant formulas yet. My problem here is that everything I can try involves multiplication but there is a minus sign on the right hand side which I cannot presently handle. -Note: (this is not the same question as has been asked before here on this site as the formula here is quite different.) - -REPLY [3 votes]: You cannot prove it because it is not true. Counterexample: $A=B=D=I_2$ and $C=\pmatrix{0&0\\ 0&1}$ over any field. Then $\det M=0$ (the second and the fourth rows are identical to each other), but $\det(A)\det(D)-\det(B)\det(C)=1$. -The correct formula should be $\det M=\det(AD-BC)$. This formula has been discussed many times on this site. See one of my answers for instance.<|endoftext|> -TITLE: why $ 1 - \cos^2x = \sin^2x $? -QUESTION [5 upvotes]: I'm trying to prove this result $$\lim_{x\to 0} \frac{1 - \cos(x)}{x} = 0$$ In this process I have come across an identity $1-\cos^2x=\sin^2x$. Why should this hold ? 
Here are a few steps of my working: -\begin{array}\\ - \lim_{x\to 0} \dfrac{1 - \cos(x)}{x}\\ = \lim_{x\to 0} \left[\dfrac{1 - \cos(x)}{x} \times \dfrac{1 + \cos(x)}{1 + \cos(x)}\right] \\ -=\lim_{x\to 0} \left[\dfrac{1 - \cos^2(x)}{x(1+\cos(x))}\right] \\ -=\lim_{x\to 0} \left[\dfrac{\sin^2(x)}{x(1+\cos(x))}\right] -\end{array} - -REPLY [5 votes]: Let $F(x)=\sin ^2 x + \cos ^ 2 x$. -$$F'(x)=2 \sin x\cos x- 2 \cos x\sin x=0$$ -Since $F(0)=1$ and $F$ is constant, we get -$$\sin ^2 x + \cos ^ 2 x=1$$<|endoftext|> -TITLE: Does sample-path continuity imply mean square continuity? -QUESTION [7 upvotes]: Let $(X_t)_{t>0}$ be a square-integrable stochastic process on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. -I'm well aware that in general almost sure convergence does not imply mean square convergence, unless some additional conditions (such as the ones of the dominated convergence theorem) are met. -I've also come across this related question where the OP is interested in the case of (the weaker) almost sure continuity at a given time. -However, in the stronger case of sample path continuity (i.e. $\forall \omega \in \Omega, t \to X(t, \omega)$ is a continuous function) I am curious as to whether the process would also be mean square continuous. If so, I'd appreciate any pointer to the proof. If not, can you please provide a counter-example? The counter-example provided in the question above does not have continuous paths. -Please note that these lecture notes, more specifically the remark following Definition 72, seem to suggest that not only is it true in general, but the proof is also obvious. -Thanks, - -REPLY [2 votes]: No, this is, in general, not true as the following counterexample shows: -Consider $((1,\infty),\mathcal{B}((1,\infty))$ endowed with the probability measure $$\mathbb{P}(dx) = c \frac{1}{x^2} \, dx$$ -(where $c>0$ is chosen such that $\mathbb{P}((1,\infty))=1$). Choose a continuous function $\chi : [0,\infty) \to [0,\infty)$ such that $\chi(x)=0$ for all $x \notin [2,5]$ and $\chi(x) \geq 1$ for all $x \in [3,4]$. Note that $\chi$ is bounded, i.e. $\|\chi\|_{\infty}<\infty$. Define -$$X_t(x) := x \chi \left( tx \right).$$ -Obviously, $(X_t)_{t \geq 0}$ has continuous sample paths, $X_0 = 0$ and -$$\begin{align*} \mathbb{E}(X_t^2) = \int_{(1,\infty)} x^2 \chi(tx)^2 \frac{1}{x^2} \, dx &= \int_{(1,\infty)} \chi^2(tx) \, dx \\ &\leq \|\chi\|_{\infty}^2 \int_{2/t}^{5/t} \, dx < \infty \end{align*}$$ -i.e. $X_t \in L^2(\mathbb{P})$ for each $t>0$. On the other hand, we find by a similar calculation -$$\begin{align*} \mathbb{E}(X_t^2) = \int_{(1,\infty)} \chi(tx)^2 \, dx \geq \int_{(3/t,4/t)} 1 \, dx = \frac{4}{t}- \frac{3}{t} = \frac{1}{t} \end{align*}$$ -where we have used that $\chi$ is non-negative and $\chi(tx) \geq 1$ for all $x \in [3/t,4/t]$. This shows that $\mathbb{E}(X_t^2)$ does not converge to $\mathbb{E}(X_0^2)=0$ as $t \to 0$. Consequently, $(X_t)_{t \geq 0}$ is not mean-square continuous.<|endoftext|> -TITLE: When is a mobius transformation its own inverse? -QUESTION [6 upvotes]: I was puzzeling trying to find the inverse of the mobius transformation -$$ f(z) \ = \ \frac{z + i}{iz+1} $$ -and if I am correct (I can be wrong here) it is its own inverse $( f(f(z)) = z )$ -Are there general rules to check if any mobius transformation is its own inverse ? 
-something like $$
-f(z) \ = \ \frac{az+b}{cz+d}
-$$
-Is its own inverse iff:
-
-REPLY [7 votes]: I think the easiest way is to just calculate it:
-\begin{align}
-f(f(z)) &= \frac{a f(z) + b}{c f(z) + d}\\
-&= \frac{a(az+b)+b(cz+d)}{c(az+b)+d(cz+d)}\\
-&= \frac{(a^2+bc)z+(ab+bd)}{(ac+cd)z+(bc+d^2)} \stackrel!= z
-\end{align}
-Clearly for this to hold true, you need
-\begin{align}
-ab+bd &= 0\\
-ac+cd &= 0\\
-a^2+bc &= bc+d^2
-\end{align}
-Now the last equation can be simplified to $a=\pm d$. Thus we have three cases to consider:
-
-$a=d=0$
-In this case, the first two equations are automatically fulfilled. Since a common factor in the coefficients doesn't change the function, we can choose $c=1$, and get
-$$f(z) = \frac{b}{z}$$
-It is easily verified that this indeed is its own inverse.
-$a=d\ne 0$.
-In this case, the first two equations reduce to $b=c=0$ and we just get the identity.
-$a=-d\ne 0$.
-Again, the first two equations are automatically fulfilled; now we can use the invariance under a common factor to set $a=1$, obtaining
-$f(z) = \frac{z+b}{cz-1}$
-Note, however, that the function you found is not quite of this type: for $f(z)=\frac{z+\mathrm i}{\mathrm iz+1}$ a direct computation gives $f(f(z))=1/z$, so it is not its own inverse after all. (Taking $b=c=\mathrm i$ in the form above is degenerate: the determinant $-1-bc$ vanishes, and $\frac{z+\mathrm i}{\mathrm iz-1}$ collapses to the constant $-\mathrm i$.)
-Also note that functions of the form $f(z)=\alpha-z$ are obtained this way by setting $b=-\alpha$ and $c=0$.<|endoftext|>
-TITLE: "false implies true" is a true statement
-QUESTION [15 upvotes]: In algebra, one lesson we took was about logic.
-We learned that it is a true statement or logical expression to say that if Beijing was the capital of the US then the moon existed last night, as this is consistent with a false statement implying a true one being a true statement.
-Agree?
-
-REPLY [6 votes]: You want "real life", eh?
-Let (P) be the statement
-
-If the policeman sees you speeding, then you will have to pay a fine.
-
-This is true. But it could happen that you have to pay a fine because you failed to shovel the snow from your sidewalk. So you have to pay a fine even though you did not speed. But this does not mean that (P) is false.<|endoftext|>
-TITLE: Proportion of nonabelian $2$-groups of a certain order whose exponent is $4$
-QUESTION [10 upvotes]: Let
-
-$$\displaystyle A(n)=\frac{\text{number of nonabelian 2-groups of order $n$ whose exponent is }4}{\text{total number of nonabelian 2-groups of order $n$}}.$$
-
-Using GAP, I could observe the following:
-$$A(16)=\frac{5}{9}=0.5556, A(32)=\frac{21}{44}=0.4773,
-A(64)=\frac{93}{256}=0.3633, A(128)=\frac{820}{2313}=0.3545, A(256)=\frac{30446}{56070}=0.5430 \text{ and } A(512)=\frac{8791058}{10494183}=0.8377.$$
-Can one prove that if $n>4$, then $A(n)>\frac{1}{3}$?
-
-REPLY [2 votes]: It is not hard to prove a weaker result using the seminal work of Sims (1961) and Higman (1964).
-Proving your result, I fear, will require a deeper understanding of $p$-groups than currently exists. Let $f(n,p)$ be the number of groups of order $p^n$ and let $f_2(n,p)$ be the number of such groups with $\Phi(G)=G'G^p$ central and elementary abelian.
-It follows from Sims that $f(n,p)\leqslant p^{cn^3+dn^{5/2}}$ (see [1]) and from Higman that
-$f_2(n,p)\geqslant p^{cn^3+en^2}$ where $c=\frac{2}{27}$ and $d$, $e$ are constants.
-Note that $d>0$. Higman had $e=-\frac{2}{9}$.
-How does this relate to the ratio $A(n)$ in the above question? The groups of exponent dividing $p^2$ (and order $p^n$) are precisely counted by $f_2(n,p)$.
Since a group of exponent 2 is elementary, we have $$A(2^n)=\frac{f_2(n,2)-1}{f(n,2)}\geqslant\frac{2^{cn^3+en^2}-1}{2^{cn^3+dn^{5/2}}}\to 0\qquad{\rm as}\ d>0.$$ Taking logarithms of the numerator and denominator above (and ignoring the $-1$), this ratio approaches 1, that is $\lim_{n\to\infty}\frac{cn^3+en^2}{cn^3+dn^{5/2}}=1$. Even if there were a constant $d'$ such that $f(n,p)\leqslant p^{cn^3+d'n^2}$ as Sims conjectured on p. 153. We must have -$e -TITLE: Calculating the expectation of $X=$ the number of failures until the $r^{th}$ success -QUESTION [5 upvotes]: I need to calculate the expectation of $X=$ the number of failures until the $r$-th success in an infinite series of Bernoulli experiments with $p$ the probability of success. ($q=1-p$ the probability of failure) -My solution: -I figured $$P(X=x)={x+r \choose x}q^xp^r$$ (is this correct?) and $x\geq 0$ (In other words, $X\sim Bin(x+r,q)$. -So by definition, $\Bbb EX=\sum_{x=0}^\infty x{x+r \choose x}q^xp^r$. -Trying to simplify this, I got to -\begin{align*} -\frac{qp^r}{r!}\sum_{x=0}^\infty (x+r)(x+r-1) \ldots (x+1)xq^{x-1} & =\frac{qp^r}{r!}\left(\sum_{x=0}^\infty q^{x+r}\right)^{(r+1)}\\ & =\frac{qp^r}{r!}\left(q^r\sum_{x=0}^\infty q^{x}\right)^{(r+1)}\\ & =\frac{qp^r}{r!}(\frac{q^r}{1-q})^{(r+1)} -\end{align*} -$(r+1)$ denotes taking the $(r+1)^{th}$ derivative in respect to $q$. -Now what? How can I simplify that further? Is there a simpler way? - -REPLY [2 votes]: Let $N(r)$ be the expected number of trials until the $r^{th}$ success, and $p$ the probability of success. Make one trial. This gives:$$N(r)=1+(1-p)N(r)+pN(r-1)$$Where the $1$ is for the trial and the other terms represent failure and success. -It is trivial that $N(0)=0$ and rearranging the formula gives $N(r)=N(r-1)+\frac 1p$ whence $N(r)=\frac rp$. Now let $F(r)$ be the number of failures with $$F(r)=N(r)-r=\frac rp-r=\frac {r(1-p)}p$$<|endoftext|> -TITLE: Neron-Severi group as the image of first Chern class -QUESTION [6 upvotes]: Let $X$ be a smooth projective variety over $\mathbb{C}$, then the Neron-Severi group $NS(X)$ of $X$ is defined to be the Picard group of $X$ modulo algebraically equivalent relations. -On the other hand, by the exponential sequence, there is a first Chern class map -$$c_1: {\rm Pic}(X) \to H^2(X, \mathbb{Z}).$$ It is claimed that the image of $c_1$ coincides with $NS(X)$. -I want to know why this is true. Any suggestion or reference is greatly welcome! - -REPLY [12 votes]: The kernel of $c_1$ consists exactly of line bundles that are algebraically equivalent to $0$. -Let me expand a little bit. The Picard group $\textrm{Pic }X$ has a subgroup $\textrm{Pic}^0\,X\subset \textrm{Pic }X$ consisting of line bundles algebraically equivalent to zero. Equivalently, $\textrm{Pic}^0\,X$ is the connected component containing the identity element of $\textrm{Pic }X$. (With your assumptions on $X$, $\textrm{Pic}^0\,X$ is an abelian variety; it is called the Picard variety of $X$). The Neron-Severi group is, by your definition, the quotient of the Picard group by the subgroup $\textrm{Pic}^0\,X$. But $\textrm{Pic}^0\,X$ is also the kernel of $c_1:\textrm{Pic }X\to H^2(X,\mathbb Z)$, so that $$NS(X)=\textrm{Pic }X/\textrm{Pic}^0\,X=\textrm{Pic }X/\ker(c_1)=\textrm{Im}(c_1).$$ -Remark. 
The exponential exact sequence $0\to \mathbb Z\to \mathscr O_X\to\mathscr O_X^\times\to 1$ induces an exact piece $$H^1(X,\mathscr O_X)\to \textrm{Pic }X\overset{c_1}{\to}H^2(X,\mathbb Z)$$ which tells us that when $\textrm{Pic}^0\,X$ is a point, and hence $H^1(X,\mathscr O_X)=0$ (this $H^1$ is the tangent space to the Picard variety at any point $[L]\in \textrm{Pic}^0\,X$), we have $NS(X)\cong \textrm{Pic }X$.<|endoftext|>
-TITLE: summation of determinants of $3\times3$ matrices
-QUESTION [11 upvotes]: I have an algebra problem but no idea how to solve it. The problem is: "you can create 9! matrices the elements of which lie in a set $ \{1,2,3,...,9\} \subset \mathbb N$ so that their elements do not repeat, e.g.
-$$
-\begin{pmatrix}1&2&9\\3&5&7\\6&4&8 \end{pmatrix}
-$$
-Find the sum of the determinants of all these matrices."
-Could you give me a hint on how to solve it? Thank you.
-
-REPLY [4 votes]: Given that others have answered, and the basic trick is the same, here is another way of thinking about the problem.
-If I exchange the first and second rows of every matrix, I change the sign of every determinant and hence the sign of the sum. On the other hand, I get exactly the same set of matrices, so the sum must stay the same. This leaves me with only one possibility.
-I mention it because it deals with the problem globally, and that kind of global reasoning is sometimes very useful. The local information is hidden in "I get exactly the same set of matrices" and if you write down the detail of what makes that obvious you will find yourself mirroring the other suggestions people have posted.<|endoftext|>
-TITLE: Asymptotics of the generalized harmonic number $H_{n,r}$ for $r < 1$
-QUESTION [12 upvotes]: The $H_{n,r}$ generalized harmonic number is defined as:
-$$H_{n,r} = \sum_{k=1}^{n} \frac{1}{k^r}$$
-I'm interested in the growth of $H_{n,r}$ as a function of $n$, for a fixed $r\in[0,1]$.
-
-For $r>1$, $H_{n,r}=O(1)$ (as a function of $n$).
-For $r=1$, $H_{n,1}=O(\log n)$.
-For $r=0$, $H_{n,0}=n$.
-
-How does $H_{n,r}$ grow for intermediate values of $r$?
-
-REPLY [11 votes]: The Euler-Maclaurin Sum Formula is tailor-made for this kind of application. For $r\ne1$,
-$$
-\sum_{k=1}^nk^{-r}=\zeta(r)+\frac1{1-r}n^{1-r}+\frac12n^{-r}-\frac{r}{12}n^{-r-1}+O\left(n^{-r-2}\right)
-$$
-Note that for $r$ near $1$, we have $\zeta(r)=\frac1{r-1}+\gamma+O\left(r-1\right)$. Therefore,
-$$
-\begin{align}
-\lim_{r\to1}\left(\zeta(r)+\frac1{1-r}n^{1-r}\right)
-&=\lim_{r\to1}\left(\frac{n^{1-r}-1}{1-r}+\gamma+O\left(r-1\right)\right)\\[3pt]
-&=\log(n)+\gamma
-\end{align}
-$$
-This gives us the standard expansion for the Harmonic series:
-$$
-\sum_{k=1}^n\frac1k=\log(n)+\gamma+\frac1{2n}-\frac1{12n^2}+O\left(\frac1{n^3}\right)
-$$<|endoftext|>
-TITLE: Can class test scores with a bimodal distribution provide statistical evidence for cheating?
-QUESTION [6 upvotes]: I know the normal distribution can represent many things in nature. Most items are normally distributed. I recently watched a video of a professor who claims that bimodal distributions provide evidence of cheating. He states that a bimodal distribution arises "when external forces are applied to a data set that creates a systematic bias to a data set", a.k.a. cheating. He compares this information to previous grade distributions of students given the same test in other years when he gave the test and estimated that 1/3 of his students have cheated. My question is: does a bimodal distribution really provide statistical evidence of cheating?
Can't it be that some students do very poorly and some students do really well, leaving a peak that is low and a peak that is high? Does a bimodal distribution really mean there is a higher probability that "external forces are applied to a data set that creates a systematic bias to a data set"?
-The link to the video is: https://www.youtube.com/watch?v=rbzJTTDO9f4
-Yes, I get that some students admitted to cheating, but that doesn't answer my question. My question is: can a teacher really provide statistical evidence of someone cheating without them admitting it? I know statistics is all about probability, so can a teacher claim that the probability of this is really lower than a certain threshold and say that because of this there is statistically significant evidence of them cheating? And how can they approximate that 1/3 of their students cheated just by comparing the bimodal distribution to a normal distribution?
-To me, it seems that the teacher is just trying to use scare tactics with his "statistics" and guilt students into admitting to cheating rather than having any evidence of them cheating.
-PS: I know cheating is wrong, but I know there must be a lot of innocent students in his class that were also accused of cheating, so that is why I asked this question. (I don't actually go to that university)
-
-REPLY [2 votes]: I could be mistaken, but I think that in general, whenever you see something that is too far against the norm, it raises red flags, which may be what he is talking about. Here is a glaring example from standardized testing, where the minimum score to pass this test was 30% (histogram omitted; see the linked post):
-Here is the original reddit post: https://www.reddit.com/r/dataisbeautiful/comments/27dx4q/distribution_of_results_of_the_matura_high_school/<|endoftext|>
-TITLE: What kind of object is the push forward of a vector field?
-QUESTION [5 upvotes]: I was actually not sure about asking this question since I think I know what the answer is, but here it goes:
-Let $M$ and $N$ be two smooth manifolds and $\mathbf{X}$ a vector field defined on $M$. As a function, $\mathbf{X}$ is defined on $M$ and at $p \in M$ it can take values in $T_p M$; in fancier terms, it is a cross section of the tangent bundle of $M$. Now let $\phi: M \to N$ be a smooth function (not necessarily a diffeomorphism) and $\phi_*$ its differential. What kind of object is $\phi_* \mathbf{X}$?
-Here are my thoughts: it's not really a vector field on $M$ or on $N$; rather, it's defined on $M$, but at $p \in M$ it takes values in $T_{\phi(p)} N$. Therefore, it's a cross section of a vector bundle with base space $M$ and tangent spaces of $N$ as fibres. At each point $p$ of $M$ we would place the tangent space $T_{\phi(p)}N$. Is this correct? I have a feeling my definition of the vector bundle is not very rigorous; how can it be made more precise? Is there a neater way to define this vector bundle?
-
-REPLY [9 votes]: This is usually called (a bit tautologically in this special case) "a vector field along $\phi$", and it is a section in the pullback bundle $\phi^* TN$. See this wikipedia article for a more comprehensive discussion<|endoftext|>
-TITLE: One-sided inverse of a function
-QUESTION [6 upvotes]: Is it possible to find an example of a one-sided inverse of a function, other than for matrices?
-I am trying to find such an example but having no luck. Anybody got an idea about it?
-
-REPLY [4 votes]: Another classical example is the shift maps.
Let $S$ be the space of real sequences $S = \{(a_0,a_1,a_2,\dots) : a_i\in\mathbb{R} \, \text{for all $i$}\}$ (which, essentially, is the space of functions $\mathbb{N}\to\mathbb{R}$). -Define the "left shift map" $f:S\to S$ as -$$ -f((a_0,a_1,a_2,a_3,\dots)) = (a_1,a_2,a_3,\dots) -$$ -(i.e., remove the first element from the sequence), and the "right shift map" $g:S\to S$ as -$$ -g((a_0,a_1,a_2,\dots)) = (0,a_0,a_1,a_2,\dots) -$$ -(i.e., add a zero at the beginning of the sequence). -Then, $f\circ g$ is the identity, but $g\circ f$ is not.<|endoftext|> -TITLE: The four runner problem/conjecture -QUESTION [15 upvotes]: I've recently read here the following problem, called « four-runner problem » : - -Suppose four runners (represented by labeled spheres) run around a circular track. Their speeds are constant positive rationals $v_1j\geq0.$ In fact we could even eliminate the condition $v_0 -TITLE: Axiom of Choice needed to "categorify" the cardinals? -QUESTION [8 upvotes]: I was playing around in $\mathsf{Set},$ trying to reduce it modulo isomorphisms to make a category $\mathsf{Card},$ letting the objects of $\mathsf{Card}$ be the isomorphism classes of $\mathsf{Set}$ and let the morphisms of $\mathsf{Card}$ be the isomorphism classes of $\mathsf{Set}^\to$ (the arrow category). Unfortunately, I got stuck trying to define composition in the obvious way. -Here's the approach I was taking. Given $a\in\mathsf{Set}$ or $f\in\mathsf{Set}^\to,$ we denote their respective isomorphism classes by $|a|$ and $\bar f.$ I've shown that each $\bar f$ uniquely determines a source and a target $|a|$ and $|b|,$ by taking any $f\in\bar f,$ and letting $a,b$ the source and target of $f$ (this is independent of our choice of $f$). If we have $\bar f:|a|\to|b|$ and $\bar g:|b|\to|c|,$ then there exist $a\in|a|,b\in|b|,c\in|c|,f\in\bar f,$ and $g\in\bar g$ such that $f:a\to b$ and $g:b\to c.$ It seems natural to define $\bar g\bar f:=\overline{gf},$ but I'm having trouble showing independence from the choices of $a,b,c,f,$ and $g.$ -I started by taking $f_j:a_j\to b_j$ and $g_j:b_j\to c_j$ for $j=1,2,$ and taking isomorphisms $\langle u_1,v_1\rangle:f_1\to f_2$ and $\langle u_2,v_2\rangle:g_1\to g_2.$ So, $f_2u_1=v_1f_1$ and $g_2u_2=v_2g_1.$ Now, if I could find some iso $u:a_1\to a_2$ such that $\langle u,u_2\rangle:f_1\to f_2,$ then I'd be done. Likewise if I could find an iso $v:c_1\to c_2$ such that $\langle v_1,v\rangle:g_1\to g_2.$ Now, the latter doesn't seem feasible, since there's no guarantee that $v_2$ should map fibers of $g_1$ bijectively to fibers of $g_2.$ I've not had any success demonstrating the former, either. - -If I use the Axiom of Choice, then I can show that if $f:A\to B$ and $g:X\to Y$ are isomorphic objects in $\mathsf{Set}^\to$, then for any iso $v:B\to Y,$ there is an iso $u:A\to X$ such that $\langle u,v\rangle:f\to g.$ From there, I can finish the proof that the operation is well (and uniquely) defined. In fact, this result seems to imply the Axiom of Choice, as well, which makes me suspect that it is necessary. If so, would someone be able to outline a proof or provide a reference? -If not, then could someone help me get "unstuck"? - -Now, if we stick to isomorphism classes of injective functions, we can categorify the cardinals as a partial order (a well-order iff Choice holds), but I'd like to include more isomorphism classes than that, if possible. - -Added: As Eric points out, my difficulties are only to be expected. 
It would seem, then, that my desires might be fruitless, and that only isomorphism classes of injective functions allow such composition to be well-defined. Am I correct?
-
-REPLY [5 votes]: As I commented, composition is not well-defined on isomorphism classes in $\mathsf{Set}^\to$, and this has nothing to do with Choice. In fact, if you want isomorphic maps in $\mathsf{Set}^\to$ to be equal and also for composition to be well-defined, then any two maps $X\to Y$ must be equal for any cardinalities $X$ and $Y$! To see this, take any two maps $f_0,f_1:X\to Y$ and consider $i:X\to X\times\{0,1\}$ given by $i(x)=(x,0)$ and $g:X\times\{0,1\}\to Y$ given by $g(x,0)=f_0(x)$ and $g(x,1)=f_1(x)$. Then $gi=f_0$. But if $h:X\times \{0,1\}\to X\times\{0,1\}$ swaps the second coordinates, then $ghi=f_1$, and $h\cong 1$ in $\mathsf{Set}^\to$. We thus conclude that $f_0$ and $f_1$ must be equal. (This argument works with $\mathsf{Set}$ replaced by any category with binary coproducts.)
-As you mention, you can avoid this problem by restricting to injective maps. Then it is easy to see that composition is well-defined, and this does not use the axiom of choice. Note, however, that this gives something more interesting than a poset: if $\kappa$ and $\lambda$ are cardinals, then maps $\kappa\to\lambda$ are in bijection with cardinals $\mu$ such that $\kappa+\mu=\lambda$ (namely, $\mu$ is the cardinality of the complement of the image of the injection). Composition of maps then corresponds to adding the $\mu$s.
-Alternatively, as Qiaochu says, you can just not ask for isomorphic maps to be equal and define maps by just choosing a representative set of each cardinality, giving a skeleton of $\mathsf{Set}$. Without Choice, it is consistent that it is impossible to do this, according to this answer (see this paper for a proof). Note that in fact if you can define any category isomorphic to a skeleton of $\mathsf{Set}$ (regardless of how exactly you construct it), then you obtain a choice of a representative of each cardinality, by considering the Hom-sets $\operatorname{Hom}(1,X)$ in your category.<|endoftext|>
-TITLE: Need help with $\int_0^1\frac{\log(1+x)-\log(1-x)}{\left(1+\log^2x\right)x}\,dx$
-QUESTION [16 upvotes]: Please help me to evaluate this integral
-$$\int_0^1\frac{\log(1+x)-\log(1-x)}{\left(1+\log^2x\right)x}\,dx$$
-I tried a change of variable $x=\tanh z$, which transforms it into the form
-$$\int_0^\infty\frac{4z}{\left(1+\log^2\tanh z\right)\sinh2z}\,dz,$$
-but I do not know what to do next.
-
-REPLY [7 votes]: An alternative way to evaluate $$\frac{\pi}{2} \int_{0}^{\infty} \tanh \left(\frac{\pi u}{2} \right) \frac{e^{-u}}{u} \, du ,$$ which is line $3d$ in robjohn's answer, is to add a parameter and then differentiate under the integral sign.
-Specifically, let $$I(a) = \frac{\pi}{2}\int_{0}^{\infty} \tanh \left(\frac{\pi u}{2} \right) \frac{e^{-au}}{u} \, du.$$ -Then $$ \begin{align} I'(a) &= - \frac{\pi}{2} \int_{0}^{\infty} \tanh \left(\frac{\pi u}{2} \right) e^{-au} \, du \\ &= -\frac{\pi}{2} \int_{0}^{\infty} \left(\frac{1}{1+e^{- \pi u}}- \frac{e^{- \pi u}}{1+e^{-\pi u}} \right)e^{-au} \, du \\ &= -\frac{\pi}{2} \int_{0}^{\infty} \left(\sum_{n=0}^{\infty} (-1)^{n} e^{-n \pi u} + \sum_{n=1}^{\infty} (-1)^{n} e^{-n \pi u} \right)e^{-au} \, du \\ &= \frac{\pi}{2} \int_{0}^{\infty} \left(1- 2 \sum_{n=0}^{\infty} (-1)^{n}e^{-n \pi u} \right) e^{-au} \, du \\ &= \frac{\pi }{2a} -\pi \sum_{n=0}^{\infty} \frac{(-1)^{n}}{a+n \pi} \\ &= \frac{\pi }{2a} -\frac{1}{2} \psi \left(\frac{a+\pi}{2 \pi} \right) + \frac{1}{2} \psi \left(\frac{a}{2 \pi} \right) \tag{1}. \end{align}$$ -Integrating back, we get $$ \begin{align} I(a) &= \frac{\pi}{2} \log(a) - \pi \log \Gamma \left(\frac{a+\pi}{2 \pi} \right) + \pi \log \Gamma\left(\frac{a}{2 \pi} \right) +C \\ &= \pi \log \left(\frac{\sqrt{a} \, \Gamma \left(\frac{a}{2 \pi } \right)}{\Gamma \left(\frac{a}{2 \pi} + \frac{1}{2} \right)} \right) + C,\end{align} $$ -where $$\lim_{a \to \infty} I(a) =0 = \lim_{a \to \infty} \pi \log \left(\frac{\sqrt{a} \, \Gamma \left(\frac{a}{2 \pi } \right)}{\Gamma \left(\frac{a}{2 \pi} + \frac{1}{2} \right)} \right) +C $$ -$$= \pi \log (\sqrt{2 \pi}) + C. \tag{2} $$ -Therefore, -$$\frac{\pi}{2} \int_{0}^{\infty} \tanh \left(\frac{\pi u}{2} \right) \frac{e^{-u}}{u} \, du = I(1) =\pi \log \left(\frac{ \Gamma \left(\frac{1}{2 \pi } \right)}{\sqrt{2 \pi} \, \Gamma \left(\frac{1}{2 \pi} + \frac{1}{2} \right)} \right).$$ -$$ $$ -$(1)$ http://mathworld.wolfram.com/DigammaFunction.html (6) -$(2)$ In general, for $x,y>0$, $\lim_{a \to \infty} \frac{a^{x} \Gamma(ya)}{\Gamma(ya+x)} = y^{-x}$. This can be proven using Stirling's approximation formula for the gamma function.<|endoftext|> -TITLE: Conjecture $\int_0^1\ln\ln\left(\frac{1+x}{1-x}\right)\frac{\ln x}{1-x^2}\,dx\stackrel?=\frac{\pi^2}{24}\,\ln\left(\frac{A^{36}}{16\,\pi^3}\right)$ -QUESTION [27 upvotes]: I did some numeric experiments with integrals involving double logarithms (because they received much interest both on this site and in published papers, sometimes under names of Malmsten—Vardi—Adamchik integrals). -It appears that -$${\large\int}_0^1\ln\ln\left(\frac{1+x}{1-x}\right)\cdot\frac{\ln x}{1-x^2}\,dx\stackrel{\color{gray}?}=\frac{\pi^2}{24}\,\ln\left(\frac{A^{36}}{16\,\pi^3}\right),$$ -where $A=\exp\left(\frac1{12}-\zeta'(-1)\right)$ is the Glaisher—Kinkelin constant (I have more than $1000$ decimal digits confirming this conjecture). How can we prove it? - -REPLY [13 votes]: Hint. One may observe that, by the change of variable $u=\dfrac{1-x}{1+x}$ one gets -$$ -I={\int}_0^1\log\log\left(\frac{1+x}{1-x}\right)\cdot\frac{\log x}{1-x^2}\,dx=\int_0^1\log\left(-\log u\right)\cdot\frac{\log (1-u)-\log (1+u)}{2u}\,du. \tag1 -$$ Then by a standard Taylor expansion, we have -$$ -\frac{\log (1-u)-\log (1+u)}{2u}= -\sum_{n=0}^{\infty} \frac{u^{2n}}{2n+1}, \qquad |u|<1,\tag2 -$$ giving -$$ -I=-\sum_{n=0}^{\infty} \frac{1}{2n+1}\int_0^1u^{2n}\log\left(-\log u\right)\:du.\tag3 -$$ The latter integral is easily obtained using the well-known integral representation of the Euler gamma function -$$ -\frac{\Gamma(s)}{(a+1)^s}=\int_0^\infty t^{s-1} e^{-(a+1)t}\:dt. 
\tag4
-$$ By differentiating $(4)$ with respect to $s$ and putting $s=1$ we produce
-$$
-\int_0^1u^a\log\left(-\log u\right)\:du=-\frac{\gamma+\log(a+1)}{a+1} \tag5
-$$ leading to
-$$
-I=\sum_{n=0}^{\infty} \frac{\gamma+\log(2n+1)}{(2n+1)^2}=\left.\left(\gamma-\frac{d}{ds} \right)\left(\left(1-2^{-s}\right)\zeta(s)\right)\right|_{s=2}=\frac{\pi^2}{24}\,\ln\left(\frac{A^{36}}{16\,\pi^3}\right)\tag6
-$$ as announced.
-These integrals have been studied by Adamchik, Vardi, Moll and many others. One may have a look at this interesting paper.<|endoftext|>
-TITLE: How Do I Compute the Eigenvalues of a Small Matrix?
-QUESTION [5 upvotes]: If I have a $2\times 2$ or $3\times 3$ matrix, how should I go about computing the eigenvalues and eigenvectors of the matrix?
-NB: I am making this question to provide a unified answer to questions about eigenvalues of small matrices so that all of the specific examples that come up can be marked as duplicates of this post. See here.
-
-REPLY [10 votes]: Here's a cool way to compute eigenvalues and eigenvectors of matrices. Unfortunately, it requires solving a degree-$n$ polynomial, where the matrix is $n\times n$, so it's not suited for large matrices, but for many problems it is sufficient. Additionally, there are some conditions on the matrix that make it doable for larger matrices.
-Let's suppose you have a matrix $A$ over some field, $F$. When $v\neq 0$ is an eigenvector, $v$ satisfies $Av=\lambda v$ for some $\lambda\in F$. Thus $Av-\lambda v = 0$ where $0$ is the zero vector, so $(A-\lambda I)v = 0$. If $\det(A-\lambda I)\neq 0$, then $A-\lambda I$ would be invertible. Multiplying both sides by the inverse gives $v=0$, so for eigenvectors we are going to have a determinant of $0$.
-By considering $\lambda$ as a variable, we can take the determinant and produce a polynomial of degree $n$ over $F$ which is known as the characteristic polynomial of the matrix $A$. It is commonly denoted $p_A(\lambda)$. This polynomial has several interesting properties, but what is relevant to us is that its zeros are exactly the eigenvalues of $A$. For small cases, this gives us a surefire way to find the eigenvalues of a matrix. For larger matrices, this polynomial is not necessarily solvable, but still worth looking at, as some of its roots might be obvious. Additionally, under some circumstances, it will have roots that we can solve for.
-Once we have obtained however many eigenvalues we are able to compute, $\{\lambda_1,\dots,\lambda_m\}$, we can then directly find the corresponding eigenvectors by looking at the equation $Av=\lambda_i v$. This gives rise to a system of equations that has infinitely many solutions (as a scalar multiple of an eigenvector is an eigenvector), but all of its nonzero solutions are eigenvectors of $A$ corresponding to $\lambda_i$.
-The reason why this approach doesn't work in general is that it's not always possible to algebraically solve polynomials of large degree.
-
-Here's an example computation (taken from wikipedia).
-The eigenvectors, $v$, of $A= \begin{bmatrix} 2 & 0 & 1\\0 & 2 & 0\\ 1 & 0 & 2\end{bmatrix}$, satisfy the equation $(A-\lambda I)\mathbf{v}=0$. This means that
-$$\det\left(\begin{bmatrix} 2-\lambda & 0 & 1\\0 & 2-\lambda & 0\\ 1 & 0 & 2-\lambda\end{bmatrix}\right)=0$$
- or that $0=6-11\lambda+6\lambda^2-\lambda^3$. Thus we have that the characteristic polynomial is $p_A(\lambda)=\lambda^3-6\lambda^2+11\lambda-6$. The solutions to this polynomial are $\{1,2,3\}$, so those are the eigenvalues of $A$.
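-(If a quick numeric cross-check is wanted, and assuming numpy is available, here is a short sketch; it is an illustration, not part of the method.)
-
-    import numpy as np
-
-    # Numerical cross-check of the worked example above.
-    A = np.array([[2.0, 0.0, 1.0],
-                  [0.0, 2.0, 0.0],
-                  [1.0, 0.0, 2.0]])
-    vals, vecs = np.linalg.eig(A)
-    print(np.sort(vals))  # [1. 2. 3.]
-    # Each column of vecs is a (normalized) eigenvector; check A @ v == lam * v:
-    for lam, v in zip(vals, vecs.T):
-        assert np.allclose(A @ v, lam * v)
-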
They give rise to the eigenvectors $(1,0,-1),(0,1,0),$ and $(1,0,1)$ respectively.<|endoftext|> -TITLE: Solution to $e^{e^x}=x$ and other applications of iterated functions? -QUESTION [9 upvotes]: While trying to solve $e^{e^x}=x$, I ran into the simple solution $x=-W(-1)$. I found it by using the equation $$e^x=x$$Then powering both sides with a base $e$.$$e^{e^x}=e^x$$Now note that the left side of the original equation equals the right side of my new equation. Therefore:$$e^{e^x}=e^x=x$$The solution at the beginning is very easy to solve for with the Lambert W function, unlike the actual equation I was trying to solve: $e^{e^x}=x$. -Which made me realise that if $f(x)=x$, then $f[f(x)]=x$ or more generally, $f_n(x)=x$ being equivalent to asking $f(x)=x$ for any integer $n\ne0$ (I will use subscripts to describe the amount of times a given function is iterated). -I confirmed this solution by solving $L(x)=x$ where $L(x)=mx+b$, the linear equation. No matter how many times I do $L(L(L(L(\cdots L(x)\cdots))))=x$, the solution is always $x=-\frac b{m-1}$. -Which made me wonder if I could create a solution to the general quartic polynomial $P(x)=ax^4+bx^3+cx^2+dx+e$. -Use $F(x)=px^2+qx+r$. -$F[F(x)]=$some really big quartic. -Also note that we are trying to solve $P(x)=F[F(x)]=F(x)=x$. -If you are trying to find the roots, just add $x$ to both sides. -Now let's try to see what $F[F(x)]$ equals. (Do note that I will probably make some mistakes.) -$$F(x)=px^2+qx+r$$$$F[F(x)]=p(px^2+qx+r)^2+q(px^2+qx+r)+r$$$$=p^3x^4+(2p^2q)x^3+(p(q^2+qp+2pr))x^2+(q^2+2qpr)x^1+(pr^2+qr+r)x^0$$ -And we are trying to make it equal to $P(x)$. -$$ax^4+bx^3+cx^2+dx+e=p^3x^4+(2p^2q)x^3+(p(q^2+qp+2pr))x^2+(q^2+2qpr)x^1+(pr^2+qr+r)$$ -Try to equate parts? I am unsure if that'll work, but anyways...$$a=p^3$$$$b=2p^2q$$$$c=p(q^2+qp+2pr)$$$$d=q^2+2qpr$$$$e=pr^2+qr+r$$ -If anyone wants to do that, be my guest because it looks messy. -A note however, is that when we are done with this, we should get two answers. This is because of the quadratic equation. However, a quartic polynomial should have 4 solutions, meaning we missed 2. -However, we can make up for this. Suppose we found $x_1=y$ and $x_2=z$. Then $P(x)-x=0$, after which we can divide by our solutions:$$\frac{P(x)-x}{(x-y)(x-z)}=0$$ -Divide and simplify to get a quadratic that is easily solved for. -Lastly, if this works, perhaps we can do this for an 8-th degree polynomial, or any $2^n$th degree polynomial for that matter. Just use $F[F(F(x))]=x$ for a sextic polynomial and more as needed. -I also note that while $F(x)=x$ produces solutions for $F[F(x)]=x$, it does not work conversely as shown above. -So my questions are as follows: - -Could you have solved $e^{e^x}=x$ without iterated functions or approximations? -Could you solve $xe^{e^x}=e$ by similar methods? -Does the method for solving quartic polynomials work? I have yet to make heads or tails of the ones on the Wikipedia or Wolfram. -Is there anything I should note when using my method? Because I have noticed the failure to find all solutions in a polynomial and doubt I have found all solutions in other functions with this method. -How else can I use iterated functions to solve for things? -Can I do this for an infinite amount of iterated functions? For example:$$e^{e^{e^{e^{e^{\cdots^x}}}}}=x$$ - -REPLY [4 votes]: No, but other perhaps can=) -I do not think so, or can you find an iterated function for this expression? -It does but only for certain polynomials. 
For a general quartic polynomial you have four degrees of freedom ($Q(x) = x^4+ax^3+bx^2+cx+d$) but for an iterated quadratic you still only have $2$ degrees of freedom ($P(x) = x^2+Ax+B$), so it is quite unlikely that you can represent a given quartic polynomial by $P(P(x))$ where $P$ is a quadratic polynomial, but if you can, nobody stops you from using that technique.
-Well, the important thing to notice is just that $f(x) = x$ implies $f(f(x)) = x$ but not necessarily the other way round.
-Generally only numerically; for certain proofs you can use a fixed point theorem, e.g. the Banach fixed point theorem.
-Generally not, as $e^{e^{e^{\cdots^x}}}$ is not really well defined.

-REPLY [2 votes]: Concerning 3
-Your method for solving quartic polynomials will only work in some very special cases. Let us try to pick coefficients $p,q,r$ that generate the quartic $ax^4+bx^3+cx^2+dx+e$. Then as you said:
-$$a=p^3$$
-Giving us three possible values of $p$, and then:
-$$b=2p^2q$$
-gives us only one possible value of $q$ for each value of $p$, and then finally
-$$d=q^2+2pqr$$
-gives only one possible value of $r$ for each value of $p$. Hence you can only create three different polynomials for any given values of $a,b,d$, but $c,e$ could obviously be chosen in infinitely many ways. So for general quartics, it will only rarely work. That being said, if one happens to find a quartic that can be solved in this way, then it would make for a very elegant solution indeed.<|endoftext|>
-TITLE: Eigenvalues/vectors of the Laplace transform?
-QUESTION [5 upvotes]: I'm learning about eigenvalues and eigenvectors (finally starting to get them). This might be a silly question, but what is/are the eigenvector(s) of the Laplace transform? I mean, what $\vec{x}_{i}$'s and $\lambda_{i}$'s satisfy
-\begin{align}\mathcal{L}\left\{\vec{x}_{i}\right\}&=\lambda_{i}\vec{x}_{i}.\end{align}
-I'm just trying to extrapolate a bit from the fact that
-\begin{align}D_{t}e^{\lambda t}&=\lambda e^{\lambda t},\end{align}
-but I cannot think of any function that remains unchanged under the transformation.

-REPLY [3 votes]: The Laplace transform of $t^p$ is proportional to $\frac{1}{s^{p+1}}$ for $p>-1$. Take $p=-1/2$.
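This eigenfunction can be confirmed symbolically; here is a small sympy sketch (laplace_transform returns the transform together with a convergence abscissa and conditions):

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    f = t**sp.Rational(-1, 2)

    F, a, cond = sp.laplace_transform(f, t, s)
    print(F)                                # sqrt(pi)/sqrt(s)
    print(sp.simplify(F / f.subs(t, s)))    # sqrt(pi): the eigenvalue

So $\mathcal{L}\{t^{-1/2}\}(s)=\sqrt{\pi}\,s^{-1/2}$: the function is reproduced with eigenvalue $\sqrt{\pi}$.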
<|endoftext|>
-TITLE: Is Kaplansky's theorem for hereditary rings a characterization?
-QUESTION [10 upvotes]: This question came up during a first course on rings and modules I TA'd at.
-Kaplansky's Theorem for hereditary rings states that

-If $A$ is a hereditary ring, and $F$ is a free left $A$-module, then every submodule $M \subset F$ is isomorphic to a direct sum $\bigoplus_{i \in I} J_i$, where every $J_i$ is a left ideal of $A$.

-See for example Lam's Lectures on modules and rings, (2.24). Recently a student asked me for an example of a submodule of a free module that was not a direct sum of ideals, and the best I could come up with was the following: Let $A = \mathbb Z_4[X]/(X^2)$, and let $M = \langle (2,X) \rangle \subset A^2$; then $M$ is not isomorphic to a direct sum of ideals. My proof is long and tedious, and besides $A$ is very very far from being hereditary, since it has infinite global dimension. Hence the question:

-Can we find a simpler example of a submodule of a free module that is not isomorphic to a direct sum of ideals (say, over $\mathbb Z[X]$)?

-In fact, I was wondering if there are examples for all non-hereditary rings.

-is the converse of Kaplansky's theorem true? if a ring $A$ is such that all submodules of a free module are isomorphic to a direct sum of ideals, does it follow that $A$ is hereditary?

-REPLY [4 votes]: To answer the second question, no, it is not a characterization.
-For example, let $k$ be a field and let $A=k[x]/(x^2)$. Then $A$ is not hereditary, but every $A$-module is a direct sum of copies of $A$ and of $k=A/(x)\cong Ax$, both of which are ideals.
-(As mentioned by rschwieb in comments, my claimed classification of $A$-modules follows from more general results. But there's a fairly simple direct proof. Let $M$ be an $A$-module. Choose a basis $\{n_i\}$ of $Mx$ together with a choice of elements $\{m_i\}$ such that $n_i=m_ix$. Now extend $\{n_i\}$ to a basis $\{n_i\}\cup\{k_j\}$ of the kernel of multiplication by $x$. Then $\{m_i\}\cup\{n_i\}\cup\{k_j\}$ is a basis of $M$, for each $i$ the elements $m_i$ and $n_i$ span a submodule isomorphic to $A$, and for each $j$ the element $k_j$ spans a submodule isomorphic to $k$.)<|endoftext|>
-TITLE: Intuitionistic proof of $\neg\neg(\neg\neg P \rightarrow P)$
-QUESTION [6 upvotes]: How do you prove $\neg\neg(\neg\neg P \rightarrow P)$ in intuitionistic logic?
-I know this statement to be intuitionistically provable because of Glivenko's theorem. However, I wish to prove it intuitionistically.
-The relevant axioms I happen to be using are: (1) $\neg P = P\rightarrow\bot$ and (2) $\bot \rightarrow P$.

-REPLY [8 votes]: The first thing to notice is that $$(\lnot(\lnot\lnot P \to P)) \to \lnot P\qquad(\star)$$ Indeed, suppose $\lnot(\lnot\lnot P \to P)$ and $P$; we want to show $\bot$.
-By modus ponens, it suffices to prove $\lnot\lnot P \to P$, but as you supposed $P$, the implication is clearly true.
-Now, for the main result, suppose $\lnot(\lnot\lnot P \to P)$; you want to show $\bot$. By modus ponens, it suffices to show $\lnot\lnot P \to P$.
-Suppose now $\lnot\lnot P$. By $(\star)$, you also have $\lnot P$, and you know that $\lnot\lnot P \land \lnot P \to \bot$. Thus you have $\bot$, thus $P$, and the result is proved.
-Edit: This proof is, on some points, similar to the proof of $\lnot\lnot(P\lor\lnot P)$, delightfully explained by Phil Wadler in section 4 of http://homepages.inf.ed.ac.uk/wadler/papers/dual/dual.pdf<|endoftext|>
-TITLE: What is $\max{(1/\alpha+1/\beta+|1/\gamma|+|1/\delta|)}$?
-QUESTION [6 upvotes]: Consider the polynomial $f(x)=x^3/\alpha+x^2/\beta+x/\gamma+1/\delta$, with $\alpha ,\beta >0$.
-If $|f(x)|\leq 1$ for $|x|\leq 1$, then what is $\max{(1/\alpha+1/\beta+|1/\gamma|+|1/\delta|)}$?
-Intuitively it seems that the value should be within the range $1$ to $10$ (I just plugged in random values), but I'm not able to solve it.
-
-REPLY [4 votes]: I think it is $$\max \left\{ \dfrac{1}{\alpha}+\dfrac{1}{\beta}+\dfrac{1}{|\gamma|}+\dfrac{1}{|\delta|}\right\}= 7$$
-In fact, it is a 1996 IMO shortlist problem:
-Let $P(x)=ax^3+bx^2+cx+d$, where $a,b,c,d$ are real numbers; if $|x|\le 1\implies|P(x)|\le 1$, show that
-$$|a|+|b|+|c|+|d|\le 7$$
-Proof: it is clear that
-$$|P(1)|=|a+b+c+d|\le 1\space,|P(-1)|=|-a+b-c+d|\le 1$$
-$$\left|P\left(\dfrac{1}{2}\right)\right|=\left|\dfrac{1}{8}a+\dfrac{1}{4}b+\dfrac{1}{2}c+d\right|\le 1\space,\left|P\left(-\dfrac{1}{2}\right)\right|=\left|-\dfrac{1}{8}a+\dfrac{1}{4}b-\dfrac{1}{2}c+d\right|\le 1$$
-Note
-\begin{align*}
-&|\lambda\cdot a+b|=\left|\dfrac{4}{3}(\lambda\cdot a+b+\lambda\cdot c+d)-2\left(\dfrac{\lambda}{8}a+\dfrac{1}{4}b+\dfrac{\lambda}{2}c+d\right)+\dfrac{2}{3}\left(-\dfrac{\lambda}{8}a+\dfrac{1}{4}b-\dfrac{\lambda}{2}c+d\right)\right|\\
-&\le\dfrac{4}{3}+2+\dfrac{2}{3}=4
-\end{align*}
-where $\lambda=\pm 1$,
-so we have
-$$|a|+|b|=\max{\{|a+b|,|-a+b|\}}\le 4$$
-\begin{align*}
-&|\lambda c+d|=\left|-\dfrac{1}{3}\left(\lambda a+b+\lambda c+d\right)+2\left(\dfrac{\lambda}{8}a+\dfrac{1}{4}b+\dfrac{\lambda}{2}c+d\right)-\dfrac{2}{3}\left(-\dfrac{\lambda}{8}a+\dfrac{1}{4}b-\dfrac{\lambda}{2}c+d\right)\right|\\
-&\le \dfrac{1}{3}+2+\dfrac{2}{3}=3
-\end{align*}
-so we have$$|c|+|d|\le\max{\{|c+d|,|-c+d|\}}\le 3$$
-then
-$$|a|+|b|+|c|+|d|\le 7$$<|endoftext|>
-TITLE: Space of Lipschitz Functions Complete?
-QUESTION [8 upvotes]: Consider the subspace of continuous, real-valued functions on $[0,1]$ that are Lipschitz. Is this subspace complete under the sup norm ($\Vert \cdot \Vert_{\infty} = \sup \{ |f(x)| : x\in S \}$)?
-I would say yes, since all Lipschitz functions ($d_{Y}(f(x_{1}),f(x_{2}))\leq K d_{X}(x_{1},x_{2})$, $K \geq 0$, where here $X = [0,1]$, $Y = \mathbb{R}$) are uniformly continuous, and functions of certain types tend to converge uniformly to functions of the same types (e.g. differentiable functions to differentiable functions), but it seems unlikely.
-Could somebody please help?

-REPLY [9 votes]: Theorem (Weierstrass). Any continuous $f:[0,1]\to R$ is the uniform limit of a sequence of real polynomials. Now any continuously differentiable $g:[0,1]\to R$ is Lipschitz-continuous with Lipschitz constant $K=\max \{|g'(x)| :x\in [0,1]\}.$ Polynomials on $[0,1]$ are therefore Lipschitz. So with the $\sup$ norm, the set of Lipschitz-continuous $g:[0,1]\to R$ is dense in $C[0,1]$, the space of all continuous $f:[0,1]\to R.$ Since there are continuous functions that are not Lipschitz (e.g. $x\mapsto\sqrt x$), the Lipschitz functions form a proper dense subspace of $C[0,1]$; a proper dense subspace is not closed, so it is not complete.<|endoftext|>
-TITLE: Derangements with extra chairs
-QUESTION [5 upvotes]: This was a question on my combinatorics final.

-Suppose $m$ people are sitting in a room with $n$ chairs. If everyone leaves and comes back, how many ways can they sit down such that no one gets their original chair?

-If $m=n$, we simply get the derangement numbers. As another example, if person $A$ is in chair $1$, $B$ is in $2$, and no one is in $3$, then there are $3$ possible arrangements. Obviously $m\le n$ in general.
-The question seems pretty simple, but I had a hard time getting a simple answer (I ended up with a pretty complicated recursion which I'm pretty sure was either wrong or not the best answer).
-You are allowed to use derangement numbers, $d_i$, in the answer. Also a hint provided said that the answer would be a sum.
-I'm looking for some thoughts on this. I guess I should add that the final is over; I'm asking out of curiosity since I probably won't get the exam back, at least for several weeks.
-
-REPLY [5 votes]: The number of ways to choose $k$ of the $m$ people, seat them in their original chairs, and arrange the remaining $m-k$ people arbitrarily is
-$$
-\overbrace{\ \ \ \binom{m}{k}\ \ \ }^{\substack{\text{ways to pick}\\\text{$k$ people}\\\text{from $m$}}}\overbrace{\binom{n-k}{m-k}}^{\substack{\text{ways to pick}\\\text{$m-k$ chairs}\\\text{from $n-k$}}}\overbrace{(m-k)!\vphantom{\binom{k}{k}}}^{\substack{\text{arrangements}\\\text{of $m-k$ people}}}
-$$
-Inclusion-Exclusion says that the number of ways for no people to be in their original seats is
-$$
-\begin{align}
-\sum_{k=0}^m(-1)^k\binom{m}{k}\binom{n-k}{m-k}(m-k)!
-&=m!\sum_{k=0}^m\frac{(-1)^k}{k!}\binom{n-k}{m-k}\\
-&=m!\binom{n}{m}\sum_{k=0}^m\frac{(-1)^k}{k!}\frac{\binom{m}{k}}{\binom{n}{k}}\\
-&\sim m!\binom{n}{m}\sum_{k=0}^m\frac{(-1)^k}{k!}\frac{m^k}{n^k}\\
-&\sim m!\binom{n}{m}\,e^{-m/n}\quad\text{(better when $m$ is larger)}
-\end{align}
-$$
-Since the total number of ways to arrange $m$ people into $n$ chairs is $m!\binom{n}{m}$, the probability that no one gets their old chair back approaches $e^{-m/n}$.

-This is similar to the case for Derangements, when $m=n$ and
-$$
-\mathcal{D}(n)\sim\frac{n!}e
-$$
-and the probability that no one gets their old chair back approaches $\frac1e$.<|endoftext|>
-TITLE: Derivative of position
-QUESTION [7 upvotes]: [Beginning calculus question.] I saw in a calculus lecture online that for a position vector $\boldsymbol{r}$
-$$\left|\frac{d\boldsymbol r}{dt}\right| \neq
-\frac{d\left| \boldsymbol r \right|}{dt}$$
-but I don't understand exactly how to parse this.
-It's my understanding that:

-$\frac{d\boldsymbol r}{dt}$ refers to the rate of change in the
position over time (speed?)
-$|\boldsymbol r|$ refers to the magnitude of the position, i.e. the distance (from what to what?)
-$\frac{d\left| \boldsymbol r \right|}{dt}$ refers to the rate of change in distance traveled over time, (a different kind of speed?)

-Is there a good way to understand what both of these expressions mean?

-REPLY [10 votes]: The first one is looking at the velocity $\mathbf{v}=\frac{d\mathbf{r}}{dt}$, and taking its norm: it's the value of the speed, i.e. the value of (instantaneous) change in position.
-The second is looking at the distance from the point of coordinates $\mathbf{0}$ (the origin), $\lvert \mathbf{r}\lvert$, and taking its derivative: it's the instantaneous change in distance from the origin.
-The two indeed need not be equal:
-imagine you are moving very fast, but staying at the same distance from the origin (that is, you're moving very fast on a circle). Then the speed $\left\lvert \frac{d\mathbf{r}}{dt}\right\rvert$ is big (you're moving fast), but $\lvert \mathbf{r}\lvert$ is constant -- so $\frac{d\lvert\mathbf{r}\rvert}{dt} = 0$.

-REPLY [3 votes]: We can understand this with one example. Consider the angular motion problem where the position of a particle is given by $\mathbf{r}=\hat{\imath}\cos(\omega t)+\hat{\jmath}\sin(\omega t)$. Now $\frac{d\mathbf{r}}{dt}=-\hat{\imath}\omega\sin(\omega t)+\hat{\jmath}\omega \cos(\omega t)$, so clearly $\lvert \frac{d\mathbf{r}}{dt}\rvert = \omega$, whereas $\lvert \mathbf{r} \rvert =1$, hence $\frac{d\lvert \mathbf{r}\rvert}{dt}=0$. It means that the particle stays at a constant distance from the origin (on the unit circle); only its angular position changes, hence the derivative of $\lvert \mathbf{r}\rvert$ is zero.
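For this concrete example, the two rates can be checked symbolically (a small sympy sketch, taking $\omega=3$ purely for illustration):

    import sympy as sp

    t = sp.symbols('t', real=True)
    w = 3  # sample angular speed
    r = sp.Matrix([sp.cos(w*t), sp.sin(w*t)])

    speed = sp.simplify(sp.diff(r, t).norm())       # |dr/dt|
    dist_rate = sp.simplify(sp.diff(r.norm(), t))   # d|r|/dt

    print(speed, dist_rate)   # 3  0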
<|endoftext|>
-TITLE: Does an isomorphism of groups that can be written as a direct product induce isomorphisms on the factors?
-QUESTION [15 upvotes]: An answer to question Isomorphism of Direct Product of Groups says if you have two (or more) group isomorphisms
$ \phi_1:A_1 \rightarrow X_1 $ and $ \phi_2:A_2 \rightarrow X_2 $ then it follows that $ A_1 \times A_2 \cong X_1 \times X_2 $ under the isomorphism $\phi(a_1,a_2)=(\phi_1(a_1),\phi_2 (a_2) )$
-I am interested in whether the converse of this statement is true.
-If $\phi: A_1 \times A_2 \rightarrow X_1 \times X_2 $ is an isomorphism, is it true that $A_1 \cong X_1 $ under an isomorphism $ \phi_1 $ and $ A_2 \cong X_2 $ under an isomorphism $\phi_2$ such that $ \phi(a_1,a_2)= (\phi_1 (a_1), \phi_2 (a_2)) $?

-REPLY [22 votes]: No, and the converse fails for any group that can be written as a direct product in a non-trivial way. For example, if $G = A \times B$ with neither $A$ nor $B$ the trivial group, then
-$$A \times B = G \cong G \times \{e\}$$

-REPLY [17 votes]: Yet another example: let $A\not\cong B$. Then $A\times B\cong B\times A$ . . .

-REPLY [8 votes]: I would add to the answers already provided that even if $A_1\cong X_1$ and $A_2\cong X_2$, there may not exist isomorphisms $\phi_1:A_1\to X_1$ and $\phi_2:A_2\to X_2$ such that $\phi(a_1,a_2)=(\phi_1(a_1),\phi_2(a_2))$. For instance, let $A_1=A_2=X_1=X_2=\mathbb{Z}$ and consider $\phi:\mathbb{Z}\times\mathbb{Z}\to\mathbb{Z}\times\mathbb{Z}$ given by $\phi(a,b)=(a,b+a)$. Then $\phi$ is an isomorphism (its inverse is given by $(a,b)\mapsto(a,b-a)$), but it cannot come from a pair of isomorphisms $\phi_1$ and $\phi_2$ because the second coordinate of $\phi$ depends on both coordinates of the input.

-REPLY [4 votes]: This is not true: Let $A_1 = \mathbb{Z}$, and consider $A_2 = \mathbb{Z} \times \mathbb{Z} \times \mathbb{Z} \times \cdots$. Also let $X_1$ be the trivial group, and $X_2 = A_2$. Then
-$$
-A_1 \times A_2 \cong A_2 \cong X_1 \times X_2
-$$
-and $A_2 \cong X_2$, but $A_1$ is not isomorphic to $X_1$.<|endoftext|>
-TITLE: $ S_{n}=\frac{x}{x+1}+\frac{x^2}{(x+1)(x^2+1)}+...........+\frac{x^{2^{n}}}{(x+1)(x^2+1)...(x^{2^{n}}+1)}$
-QUESTION [8 upvotes]: If $\displaystyle S_{n}=\frac{x}{x+1}+\frac{x^2}{(x+1)(x^2+1)}+\frac{x^{2^{2}}}{(x+1)(x^2+1)(x^{2^2}+1)}+\ldots+\frac{x^{2^{n}}}{(x+1)(x^2+1)\cdots(x^{2^{n}}+1)}$
-then find $\displaystyle \lim_{n\rightarrow \infty}S_{n}$, where $x>1$.

-$\bf{My\; Try::}$ First we will calculate the $\bf{r^{th}}$ term of the sequence.
-So $$\displaystyle \bf{T_{r}} = \frac{x^{2^{r}}}{(x+1)(x^2+1)\cdots(x^{2^{r}}+1)} = \frac{x^{2^{r}}(x-1)}{x^{2^{r+1}}-1}$$
-So we get $$\displaystyle \bf{T_{r}} = \frac{x^{2^{r}}(x-1)}{(x^{2^r}-1)(x^{2^{r}}+1)}$$
-Now I do not understand how to convert this into a telescopic sum.
-Help me.
-Thanks.

-REPLY [11 votes]: $$\begin{align} S_{n}&=\frac{x}{x+1}+\frac{x^2}{(x+1)(x^2+1)}+\frac{x^{2^{2}}}{(x+1)(x^2+1)(x^{2^2}+1)}+\ldots+\frac{x^{2^{n}}}{(x+1)(x^2+1)\cdots(x^{2^{n}}+1)}\\
-&=\frac{x+1-1}{x+1}+\frac{x^2+1-1}{(x+1)(x^2+1)}+\frac{x^{2^{2}}+1-1}{(x+1)(x^2+1)(x^{2^2}+1)}+\ldots+\frac{x^{2^{n}}+1-1}{(x+1)(x^2+1)\cdots(x^{2^{n}}+1)}\\
-&=1-\frac{1}{x+1}+\frac{1}{x+1}-\frac{1}{(x+1)(x^2+1)}+\frac{1}{(x+1)(x^2+1)}-\ldots-\frac{1}{(x+1)(x^2+1)\cdots(x^{2^{n}}+1)}\\
-&=1-\frac{1}{(x+1)(x^2+1)\cdots(x^{2^{n}}+1)}
-\end{align}
-$$
-The last term becomes vanishingly small as we increase $n$ (since $x>1$), which is why we can ignore it and say that the limit of $S_n$ as $n\to\infty$ is unity.
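A quick numeric check of this limit, say with $x=2$ (a short Python sketch using exact rationals):

    from fractions import Fraction

    x = Fraction(2)
    total, prod = Fraction(0), Fraction(1)
    for r in range(8):            # terms x^(2^r) / ((x+1)(x^2+1)...(x^(2^r)+1))
        prod *= x**(2**r) + 1
        total += x**(2**r) / prod

    print(float(total))           # 1.0 to machine precision
    print(total == 1 - 1/prod)    # True: matches the telescoped form exactly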
<|endoftext|>
-TITLE: Let $f: \Bbb R \to \Bbb R$ be a differentiable function such that $\sup_{x \in \Bbb R}|f'(x)| \lt \infty$. Then
-QUESTION [9 upvotes]: (UGC CSIR-2015, DECEMBER, MATHEMATICAL SCIENCES)

-$f$ maps a bounded sequence to a bounded sequence.
-$f$ maps a Cauchy sequence to a Cauchy sequence.
-$f$ maps a convergent sequence to a convergent sequence.
-$f$ is uniformly continuous.

-I choose all of the options as possible answers because the condition $\sup_{x\in \Bbb R}|f'(x)| \lt \infty$ forces $f$ to be uniformly continuous (because $f$ becomes Lipschitz, and the Lipschitz condition implies uniform continuity),
-i.e. $\frac {|f(x)-f(y)|}{|x-y|} \le \sup_{x\in \Bbb R}|f'(x)|$
-$ \forall x,y$.
-Hence all other options are bound to be true.
-Am I correct?

-REPLY [3 votes]: All are correct.

-If $\{x_n\}_{n\in\mathbb N}\subset\mathbb R$ is bounded, i.e., $\lvert x_n\rvert\le M<\infty$, then
-$$
-\lvert\,f(x_n)-f(x_1)\rvert=\lvert x_n-x_1\rvert\lvert f'(y_n)\rvert,
-$$
-for some $y_n\in(x_1,x_n)$, by virtue of the Mean Value Theorem, and hence
-$$
-\lvert\,f(x_n)\rvert\le \lvert\,f(x_1)\rvert +\lvert x_n-x_1\rvert\lvert f'(y_n)\rvert\le \lvert\,f(x_1)\rvert +2M \|f'\|_\infty.
-$$
-If $\{x_n\}_{n\in\mathbb N}\subset\mathbb R$ is Cauchy,
-then
-$$
-\lvert\,f(x_m)-f(x_n)\rvert=\lvert\,f'(y_{m,n})\rvert\lvert x_m-x_n\rvert\le\|f'\|_\infty \lvert x_m-x_n\rvert,
-$$
-hence $\{f(x_n)\}_{n\in\mathbb N}$ is Cauchy.
-If $x_n\to x$, then
-$$
-\lvert\,f(x_n)-f(x)\rvert=\lvert\,f'(y_n)\rvert\lvert x_n-x\rvert\le\|f'\|_\infty \lvert x_n-x\rvert,
-$$
-where $y_n\in(x,x_n)$, and hence $f(x_n)\to f(x)$.
-If $x,y\in\mathbb R$, then
-$$
-\lvert\,f(x)-f(y)\rvert\le\|f'\|_\infty \lvert x-y\rvert,
-$$
-etc...<|endoftext|>
-TITLE: Find $\Im((\cos 12^\circ +i \sin 12^\circ +\cos 48^\circ+i\sin 48 ^\circ )^6)$
-QUESTION [5 upvotes]: Find $\Im((\cos 12^\circ +i \sin 12^\circ +\cos 48^\circ+i\sin 48
 ^\circ )^6)$.

-I've solved this problem but I think I've taken the long way to do this, so I am asking if there's some slick way to solve this.
-That's how I solved it:
-I've applied the identity $\cos 48 +\cos 12=2\cos\left(\cfrac{48+12}{2}\right)\cdot \cos\left(\cfrac{48-12}{2}\right)=2 \cos 30 \cdot \cos18 =\sqrt{3}\cdot \cos 18 $
-and the same for $\sin (12)+\sin(48)=2\sin (30)\cdot \cos(18)=\cos(18) $
-Thus I have $\Im((\cos 12^\circ +i \sin 12^\circ +\cos 48^\circ+i\sin 48 ^\circ )^6)=\Im((\sqrt{3} \cos 18 +i \cos 18)^6)$
-In the end I've applied the Binomial Theorem
-(if it's necessary, I will edit to include this step), however this was a really painful process.
-So is there some slick way to solve this?
-My thought:
-I think there must be some way to turn the above expression into something like $(\cos\theta +i \sin \theta )^6$, that would be pretty neat.

-REPLY [3 votes]: You were close. The quantity inside $\Im(\cdot)$ equals
-$$
-\left(\mathrm{e}^{i\frac{\pi}{15}}+\mathrm{e}^{i\frac{4\pi}{15}}\right)^6 = \mathrm{e}^{i\frac{2\pi}{5}}\left(1+\mathrm{e}^{i\frac{\pi}{5}}\right)^6
-$$
-then you can extract $\mathrm{e}^{i\pi/10}$ to find
-$$
-\mathrm{e}^{i\frac{2\pi}{5}+i\frac{6\pi}{10}}\left(\mathrm{e}^{-i\frac{\pi}{10}}+\mathrm{e}^{i\frac{\pi}{10}}\right)^6
-$$
-then knowing that
-$$
-\cos (x) =\frac{\mathrm{e}^{ix}+\mathrm{e}^{-ix}}{2}\implies 2\cos (x) = \mathrm{e}^{ix}+\mathrm{e}^{-ix}
-$$
-I can re-write as
-$$
-\mathrm{e}^{i\frac{2\pi}{5}+i\frac{3\pi}{5}}\left(2\cos (\frac{\pi}{10}) \right)^6
-$$
-you should be good to go, right?

-REPLY [2 votes]: Let me try.
-$$2^6\cos^6 18(\cos 30 + i\sin 30 )^6 = 64\cos^6 18 (\cos 180 + i\sin 180) = -64\cos^6 18.$$<|endoftext|>
-TITLE: What is known about the 'Double log Euler's constant', $\lim_{n \to \infty}{\sum_{k=2}^n\frac{1}{k\ln{k}}-\ln\ln{n}}$?
-QUESTION [6 upvotes]: The Euler constant is defined as $$\gamma = \lim_{n \to \infty}{\sum_{k=1}^n\frac{1}{k}-\ln{n}}$$
-Let $$q = \lim_{n \to \infty}{\sum_{k=2}^n\frac{1}{k\ln{k}}-\ln\ln{n}}$$
-I managed to prove that $$\frac{1}{3\ln{3}}+\frac{1}{2\ln{2}}-\ln\ln{3} \geq q \geq \frac{1}{2\ln{2}}-\ln\ln{3}$$
-Is there something known about the constant $q$? For instance, is $q$ expressible in terms of $\gamma$?

-REPLY [2 votes]: Applying the Euler-Maclaurin Sum Formula we get
-$$
-\begin{align}
-&\sum_{k=2}^n\frac1{k\log(k)}\\
-&=\log(\log(n))+q+\frac1{2n\log(n)}-\frac1{12n^2}\left(\frac1{\log(n)}+\frac1{\log(n)^2}\right)\\
-&+\frac1{720n^4}\left(\frac6{\log(n)}+\frac1{\log(n)^2}+\frac{12}{\log(n)^3}+\frac6{\log(n)^4}\right)\\
-&-\frac1{15120n^6}\scriptsize\left(\frac{60}{\log(n)}+\frac{137}{\log(n)^2}+\frac{225}{\log(n)^3}+\frac{255}{\log(n)^4}+\frac{180}{\log(n)^5}+\frac{60}{\log(n)^6}\right)\\
-&+\frac1{604800n^8}\left(\tiny\frac{2520}{\log(n)}+\frac{6534}{\log(n)^2}+\frac{13132}{\log(n)^3}+\frac{20307}{\log(n)^4}+\frac{23520}{\log(n)^5}+\frac{19320}{\log(n)^6}+\frac{10080}{\log(n)^7}+\frac{2520}{\log(n)^8}\right)\\
-&-\frac1{1995840n^{10}}\left(\tiny\frac{15120}{\log(n)}+\frac{42774}{\log(n)^2}+\frac{97725}{\log(n)^3}+\frac{180920}{\log(n)^4}+\frac{269325}{\log(n)^5}+\frac{316365}{\log(n)^6}+\frac{283500}{\log(n)^7}+\frac{182700}{\log(n)^8}+\frac{75600}{\log(n)^9}+\frac{15120}{\log(n)^{10}}\right)\\
-&+O\!\left(\frac1{n^{12}\log(n)}\right)
-\end{align}
-$$
-If we use $n=10000$, we get $q$ to over $49$ places:
-$$
-\scriptsize\lim_{n\to\infty}\left(\sum_{k=2}^n\frac1{k\log(k)}-\log(\log(n))\right)=0.7946786454528994022038979620651495140649995908828
-$$<|endoftext|>
-TITLE: Importance of Locally Compact Hausdorff Spaces
-QUESTION [10 upvotes]: I mostly deal with measure and probability theory and quite often, whenever I look up something on wikipedia, I see the mathematical objects defined on a locally compact Hausdorff space.
-I have very little background in topology and while I do understand the definition, why I see this space so often is something that you can't simply see from the definition itself.
-My guess is that it is in some sense a generalisation of the spaces we deal with (say $\mathbb R^n$), which is general enough to include a variety of spaces, but restricted enough to keep the nice properties we want. Similar to, say, formulating results in analysis in a metric space (even if we're mostly interested in $\mathbb R^n$ or even $\mathbb R$), or probability results formulated in $\sigma$-finite spaces (even though we really have a finite space).
-Therefore: is the guess above correct? If so, what are some of the nice properties? Is there a particular connection to probability theory?
-I would consider answering the first question sufficient, but would very much welcome a context along the lines of the second and third question.
-Thank you.

-REPLY [8 votes]: I think it is probably more fruitful to ask what is special about the category of compact Hausdorff spaces. This is not much of a restriction, for the following reasons:

-Every locally compact Hausdorff space is an open subspace of some compact Hausdorff space, e.g. the one-point compactification.
Note that the one-point compactification of a Hausdorff space need not even be Hausdorff without local compactness. -Every open subspace of a compact Hausdorff space is a locally compact Hausdorff space. Note that this requires the Hausdorff assumption, since open subspaces of compact spaces are not locally compact in general. - -There are a number of nice things about the category of compact Hausdorff spaces. For example, there is the Banach-Stone Theorem, which says that a compact Hausdorff space can be reconstructed from its topological algebra of continuous functions. In fact, there is a very long list of nice properties of continuous functions on compact Hausdorff spaces. -More concretely, compact Hausdorff ($T_2$) spaces are also normal ($T_4$), which means that we have various nice tools like the Tietze extension theorem that allow us to connect our space to the real numbers in concrete ways. -Then there is the Stone-Čech compactification, which gives a well-behaved, universal (i.e. left adjoint to the forgetful functor) way to transform any topological space into a compact Hausdorff space. In fact, buried in this construction is the idea that any compact Hausdorff space can be obtained by weakening the standard topology on $[0,1]^S$ for some set $S$. -We might sometimes want to work in a smaller category, for example the category of compact metrizable spaces. But this turns out to be exactly the same category with the additional axiom of second-countability. And if we don't need second-countability for our theorems to hold, then we might as well do without it. The same goes for many such attempts to restrict study to categories in between Euclidean spaces and compact Hausdorff spaces. -I'm certainly leaving out a number of things, but perhaps this gives some idea of why this particular category of spaces tends to be a highly privileged one.<|endoftext|> -TITLE: Definition of the category of group representations -QUESTION [7 upvotes]: One usually considers the category of complex linear group representations for a fixed group $G$. It is defined as the category whose objects are group morphisms $G \rightarrow GL(V)$ where $V$ is a complex vector space and whose morphisms are $G$-equivariant linear maps (equivalently, it is the category of functors $Vect_\mathbb{C}^G$). -My question is: can we consider a category of group representations where the group is allowed to change? -I would be tempted to define it as follows. Objects would be triples $(V, G, \pi: G \rightarrow GL(V))$ and morphisms -$\big(V, G, \pi: G \rightarrow GL(V)\big) \xrightarrow{\Phi} \big(V', G', \pi': G' \rightarrow GL(V')\big)$ would be pairs $\Phi:=(\phi, \alpha)$ where $\phi$ is a linear map from $V$ to $V'$ and $\alpha$ is a group morphism from $G$ to $G'$ such that the following property is satisfied: -$$\forall g \in G, \pi'(\alpha(g)) \circ \phi= \phi \circ \pi(g)$$ -Does this definition make sense? If yes, why is this category never considered? - -REPLY [8 votes]: The Grothendieck construction (see e. g. wikipedia for start) is the following: - -Let $C$ be any category and $F \colon C \to \def\Cat{\mathsf{Cat}}\Cat$ a functor. The Grothendieck construction for $F$ is the following category $\Gamma(F)$: - -Objects of $\Gamma(F)$ are pairs $(A, x)$, where $A \in \def\Ob{\mathrm{Ob}}\Ob(C)$ and $x \in \Ob\bigl(F(A)\bigr)$. -$\def\Hom{\mathrm{Hom}}$Morphisms $f\colon (A,x) \to (B,y)$ are pairs $f = (f_0, f_1)$ where $f_0 \in \Hom_C(A,B)$ and $f_1 \in \Hom_{F(B)}(F(f_0)x, y)$. 
-Composition of $f \in \Hom_{\Gamma(F)}((A,x), (B,y))$ and $g \in \Hom_{\Gamma(F)}((B,y), (D,z))$ is given by
- $$ (g_0, g_1) \circ (f_0, f_1) = \left(g_0 \circ_C f_0,\ g_1 \circ_{F(D)} F(g_0)f_1\right) $$

-Your construction is the Grothendieck construction for the functor $F \colon \mathsf{Group}^{\rm op} \to \mathsf{Cat}$, $F(G) = [G, \def\V{\mathsf{Vect}_{\mathrm C}}\V]$, which sends a group to the category of its representations.
-A morphism $f \colon (G, \pi) \to (G', \pi')$ in the Grothendieck construction is a pair $(\alpha, \tau)$, where $\alpha \colon G \to G'$ is a group homomorphism and $\tau \colon F(\alpha)\pi \to \pi'$ is a natural transformation of functors $G' \to \V$. Such a natural transformation is (as $G'$ has only one object) a $\V$-morphism, that is, a linear map.<|endoftext|>
-TITLE: Inverse of $2 \times 2$ block matrices
-QUESTION [11 upvotes]: Let matrices $A, B, C, D$ be invertible. How can I find the inverse of the following block matrix?
-$$\begin{bmatrix} A & B \\ C & D \\ \end{bmatrix}$$
-Thank you.

-REPLY [13 votes]: As $D$ is invertible, the block matrix is invertible if and only if the Schur complement of $D$ --- i.e. the matrix $S=A-BD^{-1}C$ --- is invertible. In that case, you can see Wikipedia for the block matrix inversion formula.
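To make the formula concrete, here is a small numpy sketch of the standard expression of the inverse in terms of $S$ (assuming $D$ and $S$ are both invertible; random matrices are used purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))

    Dinv = np.linalg.inv(D)
    S = A - B @ Dinv @ C                      # Schur complement of D
    Sinv = np.linalg.inv(S)

    # block inverse: [[S^-1, -S^-1 B D^-1], [-D^-1 C S^-1, D^-1 + D^-1 C S^-1 B D^-1]]
    Minv = np.block([[Sinv, -Sinv @ B @ Dinv],
                     [-Dinv @ C @ Sinv, Dinv + Dinv @ C @ Sinv @ B @ Dinv]])

    M = np.block([[A, B], [C, D]])
    print(np.allclose(Minv @ M, np.eye(6)))   # True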
<|endoftext|>
-TITLE: Group cohomology or classical approach for class field theory?
-QUESTION [7 upvotes]: First of all, I don't think this is a duplicate, because the related questions I found were mainly about the history of group cohomology in number theory and no one was asking about the classical approach versus the cohomological approach. If you find one, post it below and I'll consider deleting this one. Thanks.
-I'm a 2nd-year undergraduate student currently studying algebraic number theory from Neukirch's book and I think that I covered most of the things people usually consider the basics, which is the study of algebraic number fields, integrality, the ideal class group, ramification of prime ideals and the theory of $p$-adic numbers and valuations in general. It feels like the next step is (and I think that I have more or less appropriate background) to dive into class field theory.
-Doing a little search, I found that class field theory (both local and global) can be done in a variety of ways, for example using group cohomology. I don't know anything about group cohomology, but I think that Neukirch's approach does not use group cohomology, even though he talks about groups $H^{0}(G(L/K),A_{L})$, which I believe is notation that shows up in group cohomology.
-About this, I have some questions:

-In the aspect of difficulty, which approach should I take? Neukirch vs. group cohomology (for example in Milne's online notes for CFT)
-If group cohomology was the answer to the previous one, then should I learn some category theory before group cohomology, or is it not that related?
-Are the two approaches actually distinct or just different in language? For example, is the approach via group cohomology just identifying number-theoretical objects with "cohomological" objects and then applying theorems?
-Is it worthwhile to try to do both at the same time and get a wide view of the subject?

-EDIT: I forgot that Neukirch has a book about class field theory only. I'm actually using his Algebraic Number Theory.

-REPLY [2 votes]: For local class field theory without cohomology, see for instance
-M. Hazewinkel, Local class field theory is easy, Advances in Mathematics,
-Volume 18, Issue 2, November 1975, Pages 148–181, DOI 10.1016/0001-8708(75)90156-5. Also available here.
-T. Yoshida, Local class field theory via Lubin-Tate theory, arXiv:math/0606108

-See also Hazewinkel's review of the books by Neukirch and by Iwasawa:

-J. Neukirch, Class field theory, Springer, 1986
-K. Iwasawa, Local class field theory, Oxford, 1986<|endoftext|>
-TITLE: Does every non-singleton connected metric space $X$ contain a connected subset (with more than one point) which is not homeomorphic to $X$?
-QUESTION [8 upvotes]: Does every non-singleton connected metric space $X$ contain a connected subset (with more than one point) which is not homeomorphic to $X$?
-Also: does every connected metric space $X$ contain a proper connected subset which is homeomorphic to $X$?
-UPDATE: As noticed by @orangeskid, the answer to the 2nd question is "no", by considering $X=S^1$. The first question still remains unanswered.

-REPLY [4 votes]: Take $X$ a $1$-dimensional circle. $X$ does not contain any proper subspaces homeomorphic to a circle, since any connected proper subspace is a segment.
-${\bf Added:}$
-The answer to the first question is yes for spaces that contain a segment. It seems a lot of connected metric spaces contain a segment.<|endoftext|>
-TITLE: Counterexample of polynomials in infinite dimensional Banach spaces
-QUESTION [8 upvotes]: I'm trying to prove exercise I.3.B in Mujica's "Complex analysis in Banach spaces".
-DEFINITIONS:

-A map $P$ is an $m$-homogeneous polynomial from $E$ to $F$ if there is an $m$-linear map $A$ from $E^m$ to $F$ such that $P(x)=A(x, \dots, x)$.
-$P$ is a polynomial of degree at most $m$ if $P = P_0 + \dots + P_m$ where each $P_j$ is a $j$-homogeneous polynomial.

-I have to find a function $f: E \to \mathbb{K}$ (where $E$ is infinite dimensional) such that $f(a + \lambda b)$ is a polynomial in $\lambda$ for all $a,b \in E$ but $f$ is not a polynomial.
-$f$ clearly has to be discontinuous because there is a theorem implying that $f$ would be a polynomial in the continuous case.
-I thought about considering something like (where $\theta$ stands for the step function):
-$$f(a+\lambda b) = \theta (\| b\| -1) (a_1 + \lambda b_1)$$
-But I don't know how to prove that $f$ wouldn't be a polynomial or even how to apply it to an arbitrary $x \in E$.
-I also know that the restriction of $f$ to any finite dimensional subspace of $E$ is indeed a polynomial and that there is a sequence of homogeneous polynomials $P_k$ such that $f(x)=\sum_{k=0}^{\infty} P_k(x)$ where for each $x \in E$ $P_k(x)=0$ for all but finitely many indices.
-What function could act as a good example for this situation?
-I can provide any definition if you're not familiar with the terminology. Please ask for clarification in a comment if that is the case.

-REPLY [3 votes]: Let $\{x_i;i\in I\}$ be a Hamel basis for $E.$ If $E$ is infinite dimensional then without loss of generality we can have $I$ include the natural numbers as a subset.
-Define
-$$f:E\to\mathbb K:\sum_{i\in I}\alpha_ix_i\mapsto\sum_{n\in\mathbb N}\alpha_n^n.$$
-Then $f$ cannot be a polynomial of degree at most $m$ for any $m,$ because for all $\lambda\in\mathbb K$ we have
-$$f(\lambda x_{m+1})=\lambda^{m+1}$$
-On the other hand for fixed $a$ and $b$ in $E$ the function $\lambda\mapsto f(a +\lambda b)$ is a polynomial in $\lambda$ because $a$ and $b$ are linear combinations of a finite number of elements from the Hamel basis.<|endoftext|>
-TITLE: Determining limit of recursive sequence
-QUESTION [8 upvotes]: I was trying to calculate the limit of the sequence defined as
-$$a_1=k;\ a_2=l;\ a_{n+1}=\frac{a_n+(2n-1)a_{n-1}}{2n};\quad k, l\in\mathbb{N}$$
-TITLE: Why does this proof fail?
-QUESTION [9 upvotes]: I'm reading some notes on topology, and the notes' author is trying to raise motivation to consider compactness by providing a theorem whose proof is built intentionally wrong, but I don't agree with the reason he gives why the proof fails. Here's the theorem,
-Every sequence $0\dots$
-TITLE: $\operatorname{spectrum}(AB) = \operatorname{spectrum}(BA)$?
-QUESTION [6 upvotes]: Suppose we have two $n \times n$ matrices $A, B$. It seems like $\delta(AB)=\delta(BA)$, but I can't prove it in general.
-If $\det(A) \neq 0$ then $\det(AB - \lambda I) = 0 \Leftrightarrow $ $ \det(AB - \lambda I) \cdot \det(A) = 0 \Leftrightarrow \det(ABA - \lambda A) = 0 \Leftrightarrow $ $ \det(A) \cdot \det(BA - \lambda I) = 0 \Leftrightarrow \det(BA - \lambda I) = 0$, so $$\delta(AB)=\delta(BA)$$
-Same if $\det(B) \neq 0$.
-But how to prove it for $\det(A) = \det(B) = 0$? Is it still true?

-REPLY [7 votes]: I've seen the following nice proof credited to Paul Halmos. Assume that $A$ is not invertible. By performing row and column operations on $A$ and encoding them with invertible matrices, we can write
-$$ A = P \begin{pmatrix} I_{r \times r} & 0 \\ 0 & 0 \end{pmatrix} Q $$
-where $P, Q$ are invertible and $r = \mathrm{rank}(A)$. Write also
-$$ QBP = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix} $$
-where $B_{11} \in M_{r}(\mathbb{F})$, $B_{22} \in M_{n-r}(\mathbb{F})$, etc.
-Then we have
-$$ AB = P \begin{pmatrix} I_{r \times r} & 0 \\ 0 & 0 \end{pmatrix} Q Q^{-1} \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix} P^{-1} = P \begin{pmatrix} I_{r \times r} & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}P^{-1} = P \begin{pmatrix} B_{11} & B_{12} \\ 0 & 0 \end{pmatrix} P^{-1}. $$
-This shows that $AB$ is similar to a block upper triangular matrix and so $\chi_{AB}(\lambda) = \lambda^{n-r} \chi_{B_{11}}(\lambda)$ (where $\chi$ is the characteristic polynomial).
-Similarly,
-$$ BA = Q^{-1} \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix} P^{-1} P \begin{pmatrix} I_{r \times r} & 0 \\ 0 & 0 \end{pmatrix} Q = Q^{-1} \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix} \begin{pmatrix} I_{r \times r} & 0 \\ 0 & 0 \end{pmatrix} Q = Q^{-1} \begin{pmatrix} B_{11} & 0 \\ B_{21} & 0 \end{pmatrix} Q $$
-which shows that $BA$ is similar to a block lower triangular matrix and so $\chi_{BA}(\lambda) = \lambda^{n-r} \chi_{B_{11}}(\lambda)$.
-We have shown that $\chi_{AB}(\lambda) = \chi_{BA}(\lambda)$, so the eigenvalues of $AB$ and $BA$ (and even their algebraic multiplicities) coincide.
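A quick numerical illustration of this conclusion, with $A$ deliberately singular (a minimal numpy sketch):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))
    A[:, 0] = 0.0                          # force det(A) = 0

    ev_ab = np.sort_complex(np.linalg.eigvals(A @ B))
    ev_ba = np.sort_complex(np.linalg.eigvals(B @ A))
    print(np.allclose(ev_ab, ev_ba))       # True: same eigenvalues with multiplicity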
<|endoftext|>
-TITLE: Textbooks for axiomatic approach to algebraic topology
-QUESTION [5 upvotes]: Are there textbooks on algebraic topology which first start with the Eilenberg-Steenrod axioms and then derive consequences and applications directly out of the axioms? Only at the end do they show there are indeed theories which satisfy the Eilenberg-Steenrod axioms.

-REPLY [2 votes]: How about Foundations of Algebraic Topology, by none other than Eilenberg and Steenrod.<|endoftext|>
-TITLE: Binomial expansion for $(x+a)^n$ for non-integer n
-QUESTION [12 upvotes]: I finally figured out that you could differentiate $x^n$ and get $nx^{n-1}$ using the difference quotient, but that required doing binomial expansion for non-integer values.
-The most I can find with binomial expansion is the first, second, last, and second to last terms.
-So how do I find something like $(x+a)^{\pi}$? When differentiating in calculus, I didn't need to find terms after the second because I knew they would all cancel out, but how do you find these terms?
-Do they work for negative exponents as well?
-And does this work for complex exponents?
-Which came first, Euler's method for complex exponents or binomial expansion for complex exponents?

-REPLY [13 votes]: The Binomial theorem for any index $n\in\mathbb{R}$, valid for $|x|<1,$ is
-$(1+x)^n=1+nx+\frac{n(n-1)}{2!}x^2+\frac{n(n-1)(n-2)}{3!}x^3+\ldots$
-For $(x+a)^\pi$ one could take $x$ or $a$ common according as $|a|<|x|$ or $|x|<|a|$, and use the Binomial theorem for any index, i.e., $x^\pi(1+a/x)^\pi$ in case $|a|<|x|.$
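Numerically the series is easy to test; a short Python sketch for $(x+a)^{\pi}$ with sample values $x=2$, $a=1/2$ (so $|a|<|x|$):

    import math

    x, a, p = 2.0, 0.5, math.pi        # requires |a| < |x|
    u = a / x

    total, term = 0.0, 1.0             # term = C(p, k) * u^k, starting at k = 0
    for k in range(60):
        total += term
        term *= (p - k) / (k + 1) * u  # C(p, k+1) = C(p, k) * (p - k) / (k + 1)

    print(x**p * total)                # series value of (x + a)^pi
    print((x + a)**p)                  # direct evaluation; the two agree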
<|endoftext|>
-TITLE: All the ternary n-words with an even sum of digits and a zero.
-QUESTION [6 upvotes]: I'm trying to find a recursive formula for all the ternary (using $\{0,1,2\}$) sequences of length $n$ which contain at least one zero, and have an even sum of digits.
-My attempt so far is added below. I think the idea is right, but the recurrence relation I get fails when I plug in numbers (and it's not always an integer). Any ideas?

-Denote by $f\left(n\right)$ the number of legal sequences as in the question, by $\bar{f}\left(n\right)$ the number of sequences with at least one 0 and an odd sum of digits, and by $g\left(n\right)$ the number of sequences with an even sum of digits and with no zeroes.
-For any sequence, the sum of digits would be even if and only if there is an even number of 1s, therefore any legal sequence of length $n$ can be achieved in a unique way out of the following:
-• Starting from a sequence where the sum of digits is even and there is no zero, append zero at the end. There are $g\left(n-1\right)$ such sequences.
-• Starting from a legal sequence of length $n-1$, append 0 or 2 at the end, adding $2f\left(n-1\right)$ sequences.
-• Starting from a sequence with a zero where the sum of digits is odd, append 1 at the end, adding $\bar{f}\left(n-1\right)$ sequences.
-This gives us that $$f\left(n\right)=2f\left(n-1\right)+\bar{f}\left(n-1\right)+g\left(n-1\right)$$
-Considering $g\left(n\right)$ first, we are interested in strings of length $n$ from $\left\{ 1,2\right\} $ with an even number of 1s. There's a simple bijection to the set of strings of length $n$ with an odd number of 1s by “flipping” the first value, and as these sets include all possibilities with no overlap, it follows that $g\left(n\right)=2^{n-1}$.
-Next looking at $\bar{f}\left(n-1\right)$. We can use the fact that the sequences with a zero where the sum of digits is odd and the sequences with a zero where the sum of digits is even are exactly a disjoint union of all the sequences with a zero.
-As there are $n\cdot3^{n-1}$ such sequences (we first choose one location to be 0, and fill the rest normally), we get that $\bar{f}\left(n\right)+f\left(n\right)=n\cdot3^{n-1}$, or $\bar{f}\left(n\right)=n\cdot3^{n-1}-f\left(n\right)$
-Putting this all together we get that $$f\left(n\right)=2f\left(n-1\right)+\left(n-1\right)\cdot3^{n-2}-f\left(n-1\right)+2^{n-2}$$ or $$f\left(n\right)=f\left(n-1\right)+\left(n-1\right)\cdot3^{n-2}+2^{n-2}$$

-REPLY [2 votes]: As an addendum to Barry's answer above, here I show how to solve for the closed form of the recurrence relation directly. (Note that the count of length-$n$ sequences containing at least one zero is $3^n-2^n$, not $n\cdot3^{n-1}$, which counts sequences with several zeros more than once; with $\bar{f}(n)+f(n)=3^n-2^n$, the recurrence above becomes the one used below.)
-Given the recurrence of the form $f(n)=f(n-1)+3^{n-1}-2^{n-2}$ and the initial condition $f(1)=1$ we first find what the homogeneous solution will look like.
-The characteristic polynomial will be of the form $x-1=0$, so we know the final solution will be of the form: $f(n)=c_1\cdot (1)^n + p(n)$ for some particular solution $p(n)$.
-Given our non-homogeneous part, we expect $p(n)=d_13^n+d_22^n$ for some constants $d_1$ and $d_2$. Plugging this in for $f(n)$ and $f(n-1)$ respectively we have:
-$d_13^n+d_22^n=d_13^{n-1}+d_22^{n-1}+3^{n-1}-2^{n-2}$
-By grouping similarly ordered terms, this implies the following system of equations:
-$\begin{cases}3d_1=d_1+1\\4d_2=2d_2-1\end{cases}$
-which implies $d_1=\frac{1}{2}$ and $d_2=-\frac{1}{2}$
-So, we have $f(n)=c_1+\frac{1}{2}3^n-\frac{1}{2}2^n$
-Using our initial condition, we solve for $c_1$
-$1=c_1+\frac{1}{2}3^1-\frac{1}{2}2^1=c_1+\frac{1}{2}$ implying $c_1=\frac{1}{2}$
-Thus, $f(n)=\frac{1}{2}+\frac{1}{2}3^n-\frac{1}{2}2^n=\frac{3^n-2^n+1}{2}$
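A brute-force check of this closed form against the original counting problem (a short Python sketch):

    from itertools import product

    def count(n):
        # ternary strings of length n containing a 0, with even digit sum
        return sum(1 for w in product((0, 1, 2), repeat=n)
                   if 0 in w and sum(w) % 2 == 0)

    for n in range(1, 9):
        print(n, count(n), (3**n - 2**n + 1) // 2)   # the two counts agree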
<|endoftext|>
-TITLE: dihedral group and its generators
-QUESTION [6 upvotes]: I am trying to improve my understanding of the dihedral group. One presentation of the dihedral group $D_n$ of order $2n$ is
-$$\langle a,b : a^2=b^2=(ab)^n=1 \rangle.$$
-After a moment of thought it seemed pretty 'obvious' to me that the set of all group elements could be written as $G= \lbrace (ab)^k, (ab)^ka:k=0,...,n-1 \rbrace$. It was easy to show that $G$ is a group. Unfortunately I could not prove that the set $G$ indeed represents the full group $D_n$.
-More precisely, I have trouble showing that all elements in the above set $G$ are pairwise different and that there are no other elements of $D_n$ not contained in $G$.
-E.g. why is it not possible that $(ab)^k=1$ for some $k=1,...,n-1$?

-REPLY [5 votes]: Recall that when we say that $G = \langle a, b \, | \, a^2 = b^2 = (ab)^n = 1\rangle$, what we mean is that $G$ is the quotient of the free group $\langle a, b\rangle$ by the normal subgroup $N$ generated by $a^2, b^2, (ab)^n$. Now, let's concretely view $D_n$ as the group of rotations and reflections of the regular $n$-gon which preserve the vertices. I'll assume you're familiar with this group.
-We can define a group homomorphism $\varphi :\langle a,b\rangle \to D_n$ by sending $a$ and $b$ to "adjacent" reflections. By this, I simply mean that $\varphi(ab) = \varphi(a)\varphi(b)$ should be a rotation of order $n$. Using our knowledge of $D_n$, it's easy to confirm that $a^2, b^2$ and $(ab)^n$ are in the kernel of $\varphi$. Therefore all of $N$ is contained in the kernel. It follows that there is an induced group homomorphism $$\overline{\varphi}: \langle a, b \, | \, a^2 = b^2 = (ab)^n = 1\rangle \to D_n$$ by the universal property of the quotient. Moreover, you have shown that the domain of the map has at most $2n$ elements, and by construction $\overline{\varphi}$ is surjective (since $\varphi$ is). Since $|D_n| = 2n$ as well, $\overline{\varphi}$ must be bijective, so we're done.<|endoftext|>
-TITLE: Isomorphic quotient groups $\frac{G}{H} \cong \frac{G}{K}$ imply $H \cong K$?
-QUESTION [12 upvotes]: I know that given a group $G$ and two normal subgroups $H,K \subset G$ then it is not true that:
-"if $H \cong K$ then $ \frac{G}{H} \cong \frac{G}{K} $ (the counterexample is quite easy with products of cyclic groups) "
-My question is: Is the converse true?
-i.e.

-Given that $\frac{G}{H} \cong \frac{G}{K}$ then $H \cong K$ ?

-I feel that the answer is no, but I can't think of an example.

-REPLY [18 votes]: Let $$G = \mathbb Z/4\mathbb Z\times\mathbb Z/2\mathbb Z$$
-and consider the subgroups
-$$H = \mathbb Z/4\mathbb Z\times \{e\}\\K=\mathbb Z/2\mathbb Z\times\mathbb Z/2\mathbb Z$$
-Then $$G/K\cong G/H\cong\mathbb Z/2\mathbb Z$$ but $H\not\cong K$.

-REPLY [6 votes]: Take, for instance, $G=\mathbb Z/4\mathbb Z \times \mathbb Z/2\mathbb Z$, $H=\mathbb Z/4 \mathbb Z \times \mathbf 0$, and $K=\mathbb Z /2\mathbb Z \times \mathbb Z/2 \mathbb Z$, so that $G/K\cong G/H \cong \mathbb Z/2 \mathbb Z$.<|endoftext|>
-TITLE: Find all $n$ for which $n^8 + n + 1$ is prime
-QUESTION [6 upvotes]: Find all $n$ for which $n^8 + n + 1$ is prime. I can do this by writing it as a product, but it took me a lot of time. Is there any other way to solve this? The answer is $n = 1$.

-REPLY [9 votes]: HINT:
-If $w$ is a complex cube root of unity and $f(x)=x^8+x+1$, then
-$f(w)=(w^3)^2\cdot w^2+w+1=0$
-So $(x^2+x+1)|(x^8+x+1)$

-REPLY [4 votes]: Since $n^2+n+1$ divides $n^8+n+1$, and $1<n^2+n+1<n^8+n+1$ for $n>1$, then $n=1$ is the unique solution (which indeed gives a prime).
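Both hints can be confirmed symbolically in one line (a quick sympy sketch):

    import sympy as sp

    n = sp.symbols('n')
    print(sp.factor(n**8 + n + 1))
    # (n**2 + n + 1)*(n**6 - n**5 + n**3 - n**2 + 1)

For $n>1$ both factors exceed $1$, so $n^8+n+1$ is composite.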
<|endoftext|>
-TITLE: Let $R$ be a commutative ring with unity satisfying a.c.c. on radical ideals; is it true that $R[x]$ also satisfies a.c.c. on radical ideals?
-QUESTION [5 upvotes]: Let $R$ be a commutative ring with unity satisfying the ascending chain condition on radical ideals; is it true that $R[x]$ also satisfies the ascending chain condition on radical ideals?

-REPLY [4 votes]: This is true and is a theorem of Ohm and Pendleton (Theorem 2.5 of this paper). Here's a sketch of the proof. Say that an ideal is radically finitely generated if its radical is the radical of a finitely generated ideal. The acc on radical ideals is equivalent to every ideal being radically finitely generated. An ideal which is maximal among the non-radically finitely generated ideals can be shown to be prime. Taking such a maximal counterexample $P$ in $R[x]$, $P\cap R$ is radically finitely generated by hypothesis, and so after modding out $P\cap R$, $P$ will still not be radically finitely generated. We can thus assume $R$ is a domain and $P\cap R=0$. Now let $K$ be the field of fractions of $R$ and note that $PK[x]$ can be generated by a single polynomial $f\in P$. If $c$ is the leading coefficient of $f$, then $P+(c)$ is radically finitely generated by maximality of $P$. You can then show that $P$ is the radical of the ideal generated by $f$ and the $P$-components of the elements of $P+(c)$ that radically generate it. Thus $P$ is radically finitely generated, which is a contradiction.<|endoftext|>
-TITLE: Compact subset in colimit of spaces
-QUESTION [15 upvotes]: I found at the beginning of tom Dieck's book the following (unproved) result

-Suppose $X$ is the colimit of the sequence $$ X_1 \subset X_2 \subset X_3 \subset \cdots $$ Suppose points in $X_i$ are closed. Then each compact subset $K$ of $X$ is contained in some $X_k$.

-Now I really don't know how to prove this fact. The idea would be to find a suitable open cover of $K$ and, after taking a finite subcover, to argue that $K$ lies in one of the $X_k$. I'm able to do this reasoning in some more specific cases, where I have more control over what the open subsets look like, but in this full generality I don't see which open cover I can take.

-My Attempt: The only idea or approach I'm able to cook up so far is to try to use some kind of sequence of points $x_n \in K\cap X_n \setminus X_{n-1}$, which can be assumed to exist for a proof by contradiction. $K$ being compact, there must be an accumulation point $k\in K$. Clearly $k \in X_k$ (little abuse of notation here) and for every neighbourhood of $k$, there is a tail of this sequence entirely contained in it. Now everything seems to boil down to finding the right nbhd to produce the contradiction. It seems doable, but I don't have any idea how to choose it, because the only open sets I have for sure are complements of points, and they seem a little bit coarse for what I want to do.

-As a side note, May claims at page $67$ of his "Concise Course (revised)" that this result holds for any based spaces. The proof seems to use the above result without the T1 assumption. How can one prove this result in such generality? (No details were provided, only the rough idea.)

-REPLY [19 votes]: As you suggest, choose a sequence of points $x_n\in K\cap X_n\setminus X_{n-1}$ (possibly replacing $(X_n)$ with a subsequence). Let $A=\{x_n\}$. Then if $B\subseteq A$, then $B\cap X_n$ is finite for each $n$, so since points are closed in $X_n$, $B\cap X_n$ is closed in $X_n$. Since $X$ is the colimit, this means $B$ is closed in $X$. In particular, $A$ is a closed subset of $X$, and every subset of $A$ is closed so it has the discrete topology. But a closed subset of a compact space is compact, and a compact discrete space must be finite. This is a contradiction.
-Without any hypotheses about points being closed, the result is definitely not true. For instance, let $X=\mathbb{N}$, topologized by saying the sets $\{n:n\geq m\}$ are open for each $m\in\mathbb{N}$. Then $X$ is the colimit of the subspaces $X_n=\{0,\dots,n-1\}$, but $X$ itself is compact. However, I believe that in May's book all "spaces" are assumed to be compactly generated weak Hausdorff, which implies points are closed.
-(Note that if $X$ is a colimit of a sequence of maps that are not necessarily injective, the hypothesis you need is not that points are closed in each $X_n$ but that points are closed in $X$; see this answer of mine on MO. I recall that when I wrote that answer I came up with a counterexample where points are closed in each $X_n$, but I don't remember the details at the moment.)<|endoftext|>
-TITLE: What is the meaning of the eigenvalues of the matrix representation of a bilinear form?
-QUESTION [9 upvotes]: Given a bilinear form $B$ on some finite-dimensional vector space $V$, we can always represent $B$ by some matrix $A$ such that $B(v,w) = [v]^TA[w]$. Thus we could associate the eigenvalues of $A$ with $B$. But does that have any meaning?
-What, geometrically or algebraically, do the eigenvalues of the matrix representation of a bilinear form represent? - -REPLY [15 votes]: Fix a bilinear form $B$ on a finite-dimensional vector space $V$, say, over a field $\Bbb F$. -Pick two bases of $V$, say, $\mathcal E$ and $\mathcal F$, and let $P$ denote the change-of-basis matrix relating them. Then, the respective matrix representations $[B]_{\mathcal E}$ and $[B]_{\mathcal F}$ of $B$ with respect to those bases are related by -$$\phantom{(\ast)} \qquad [B]_{\mathcal F} = P^\top [B]_{\mathcal E} P . \qquad (\ast)$$ -In particular, taking the determinant of both sides gives -$$\phantom{(\ast\ast)} \qquad \det[B]_{\mathcal F} = (\det P)^2 \det[B]_{\mathcal E} . \qquad (\ast\ast)$$ -Since the determinant of a matrix is the product of its eigenvalues and $\det P$ can take on any nonzero value in $\Bbb F$, the spectrum (set of eigenvalues) of the matrix representation $[B]_{\mathcal E}$ of $B$ in general depends on the basis $\mathcal E$ and thus does not have intrinsic (i.e., basis-independent) meaning. -That said, bilinear forms do have some invariants, and at least some of these are expressible in terms of the eigenvalues of $[B]$. -Rank The rank of a matrix is unchanged by multiplication by an invertible matrix, so the transformation rule $(\ast)$ shows that the $\operatorname{rank} [B]_{\mathcal E}$ is an invariant of $B$, and it is equal to $n := \dim V$ less the number $n_0$ of zero eigenvalues (which thus is an invariant of $B$). We have $n_0 = \dim \ker B$, where $\ker B := \{v \in V : B(v, \,\cdot\,) = 0\}$ is the kernel of $B$. -Restricting temporarily to the symmetric case, the rank is a complete invariant for symmetric bilinear form over some fields, including algebraically closed ones not of characteristic $2$. -Theorem If $B$ is a symmetric bilinear form on a finite-dimensional vector space $V$ over an algebraically closed field of characteristic not $2$, there is a basis $\mathcal E$ of $V$ for which $$B = \operatorname{diag}(\underbrace{1, \ldots, 1}_{\operatorname{rank} B}, \underbrace{0, \ldots, 0}_{n_0}) .$$ -Discriminant While $(\ast\ast)$ tells us that the determinant of $[B]$ is not an invariant of $B$, if $[B]$ is invertible it also tells us that the image of $\det B$ under the canonical quotient homomorphism $\Bbb F^\times \to \Bbb F^\times / (\Bbb F^\times)^2$ (of abelian groups) is an invariant; this quantity is the discriminant of $B$. In terms of the eigenvalues of $B$, the discriminant is just the image of their product under that map. If $\Bbb F$ is algebraically closed (in fact a much weaker condition suffices), the target is the trivial group and so the discriminant contains no information. If $\Bbb F = \Bbb Q$ for example, we can identify the quotient $\Bbb F^\times / (\Bbb F^\times)^2$ with the set of squarefree nonzero integers. (Usually discriminant is applied to symmetric, nondegenerate bilinear forms, but I see no reason not to use it for nonsymmetric matrices, too.) -Again restricting temporarily to the symmetric case, over some fields the discriminant is a full invariant of a nondegenerate, symmetric bilinear form, and over others it is not. See Theorems 11 and 12 of Kaplansky's Linear Algebra and Geometry: A Second Course for details. (Thanks to rschweib for mentioning this reference in the comments.) 
-Over $\Bbb R$ we have a classic classification result that we can frame in terms of eigenvalues: - -Sylvester's Law of Inertia Given a real, symmetric bilinear form $B$ on a finite-dimensional vector space $V$, the number $n_+$ of positive eigenvalues, then number $n_0$ of zero eigenvalues, and the number $n_-$ of negative eigenvalues (all counting multiplicity) of the matrix representation $[B]_{\mathcal E}$, are all independent of the basis $\mathcal E$ of $V$ chosen. Moreover, these are the only invariants of real, symmetric bilinear forms in the sense that, for any such form, there is some basis $\mathcal E$ for which - $$[B]_{\mathcal E} = \operatorname{diag}(\underbrace{1, \ldots, 1}_{n_+}, \underbrace{0, \ldots, 0}_{n_0}, \underbrace{-1, \ldots, -1}_{n_-}) .$$ - -We say that the bilinear form $B$ is nondegenerate if $n_0 = 0$, in which case we say that it has signature $(n_+, n_-)$. The form $B$ is positive-definite iff $n_0 = n_- = 0$ and negative-definite iff $n_0 = n_+ = 0$. Geometrically, $n_+$ ($n_-$) is the dimension of the largest subspaces of $V$ on which $B$ restricts to be positive (negative) definite, and $n_0$ is the dimension of the annihilator $\ker B := \{v \in V : B(v, \cdot) = 0 \}$. We have $\Bbb R^\times / (\Bbb R^\times)^2 \cong \{\pm 1\}$, and under this identification the discriminant of a nondegenerate, real, symmetric bilinear form is just $(-1)^{n_-}$; the discriminant is a full invariant only for $n = 1$. -Finally, properties of $B$ can impose restrictions on the possible eigenvalues of $\Bbb R$ with respect to any basis. For example, if a real bilinear form $B$ is symmetric, all of its eigenvalues are real, so the existence of a nonreal eigenvalue tells us that $B$ is not symmetric (though in practice one usually knows that before knowing the eigenvalues). The converse does not hold for $n > 1$.<|endoftext|> -TITLE: $S1 = 1 + {x^3 \over 3!} + {x^6 \over 6!} + ...$ -QUESTION [6 upvotes]: In one of my lecturer's problem sheets we were asked to evaluate the following sums: -$$S1 = 1 + {x^3 \over 3!} + {x^6 \over 6!} + \dots $$ -$$S2 = {x^1 \over 1!} +{x^4 \over 4!} +{x^7 \over 7!} + \dots$$ -$$S3 = {x^2 \over 2!} +{x^5 \over 5!} +{x^8 \over 8!} + \dots$$ -In case it's relevant, we were previously required to solve $z^3 -1 =0$, which is simple. And show that if ω is one of the complex roots of the above equation, $ω^2 + ω +1 = 0$. -How would one go about solving these sums? Is there an obvious method which I am missing? - -REPLY [8 votes]: Hint: We have -$$e^x=1+x+\frac{1}{2!}x^2+\frac{1}{3!}x^3+\cdots$$ and -$$e^{\omega x}=1+\omega x+\frac{1}{2!}\omega^2 x^2+\frac{1}{3!}x^3+\cdots$$ and $$e^{\omega^2 x}=1+\omega^2 x+\frac{1}{2!}\omega x^2+\frac{1}{3!}x^3+\cdots.$$ -Add. After finding $S_1(x)$ we can get the others by differentiating.<|endoftext|> -TITLE: Generating functions - deriving a formula for the sum $1^2 + 2^2 +\cdots+n^2$ -QUESTION [10 upvotes]: I would like some help with deriving a formula for the sum $1^2 + 2^2 +\cdots+n^2$ using generating functions. -I have managed to do this for $1^2 + 2^2 + 3^2 +\cdots$ by putting -$$f_0(x) = \frac{1}{1-x} = 1 + x + x^2 + x^3 +\cdots$$ -$$f_1(x) = x \frac{d}{dx}[f_0(x)] = \frac{1}{(1-x)^2} = 0 + x + 2x^2 + 3x^3 +\cdots$$ -$$f_2(x) = x \frac{d}{dx}[f_1(x)] = \frac{x^2+x}{(1-x)^3} = 0^2 + 1^2x + 2^2x^3 + 3^2x^3+\cdots,$$ -and I assume I'm supposed to be able to do something similar in this case, but things get trickier when it's bounded by n and I keep getting stuck. 
-
-REPLY [3 votes]: Hint: The following perspective with focus on operator methods might also be useful.
-
-We can successively apply the $\left(x\frac{d}{dx}\right)$-operator to a generating function
-\begin{align*}
- A(x)=\sum_{n=0}^{\infty}a_nx^n
- \end{align*}
-to obtain
-\begin{align*}
- \left(x\frac{d}{dx}\right)A(x)&=\sum_{n=0}^{\infty}na_nx^n\\
- \left(x\frac{d}{dx}\right)^2A(x)&=\sum_{n=0}^{\infty}n^2a_nx^n
- \end{align*}
-
-Multiplication of $A(x)$ with $\frac{1}{1-x}$ results in summing up the coefficients $a_n$
-\begin{array}{crl}
- (a_n)_{n\geq 0}\qquad &\qquad A(x)=&\sum_{n=0}^{\infty}a_nx^n\\
- \left(\sum_{k=0}^{n}a_k\right)_{n\geq 0}\qquad&\qquad\frac{1}{1-x}A(x)=&\sum_{n=0}^{\infty}\left(\sum_{k=0}^{n}a_k\right)x^n
- \end{array}
-It's also convenient to use the coefficient-extraction operator $[x^n]$ to denote the coefficient of $x^n$ in a generating series.
-
-Putting it all together and applying it to the geometric series $\frac{1}{1-x}=\sum_{n=0}^{\infty}x^n$ we finally obtain
-\begin{align*}
- \sum_{k=0}^nk^2&=[x^n]\frac{1}{1-x}\left(x\frac{d}{dx}\right)^2\frac{1}{1-x}\\
- &=[x^n]\frac{x(1+x)}{(1-x)^4}\\
- &=\left([x^{n-1}]+[x^{n-2}]\right)\sum_{k=0}^{\infty}\binom{-4}{k}(-x)^k\tag{1}\\
- &=\left([x^{n-1}]+[x^{n-2}]\right)\sum_{k=0}^{\infty}\binom{k+3}{3}x^k\tag{2}\\
- &=\binom{n+2}{3}+\binom{n+1}{3}\\
- &=\frac{1}{6}n(n+1)(2n+1)
- \end{align*}
-
-Comment:
-
-In (1) we use the binomial series expansion and $[x^n]x^kA(x)=[x^{n-k}]A(x)$
-In (2) we use $\binom{-n}{k}=\binom{n+k-1}{k}(-1)^k=\binom{n+k-1}{n-1}(-1)^k$<|endoftext|>
-TITLE: Closed form for $\sum_{n=0}^\infty\frac{\Gamma\left(n+\tfrac14\right)}{2^n\,(4n+1)^2\,n!}$
-QUESTION [20 upvotes]: I was experimenting with hypergeometric-like series and discovered the following conjecture (so far confirmed by more than $5000$ decimal digits):
-$$\sum_{n=0}^\infty\frac{\Gamma\!\left(n+\tfrac14\right)}{2^n\,(4n+1)^2\,n!}\stackrel{\color{gray}?}=$$
-$$\frac{\Gamma\!\left(\tfrac14\right)\sqrt[4]2}{192}\left[\vphantom{\huge|}6\sqrt{2}\left(2\pi\ln2-\ln^22-8\operatorname{Li}_2\left(\tfrac1{\sqrt2}\right)\right)+3\psi^{(1)}\!\left(\tfrac18\right)-48G+\left(\vphantom{\large|}7\sqrt2-6\right)\pi^2\right]$$
-where $G$ is the Catalan constant, $\operatorname{Li}_2(x)$ is the dilogarithm and $\psi^{(1)}(x)$ is the trigamma function. Could you suggest any ideas how to prove it?
-
-To see what approach I use to find conjectures like this, see another question of mine.
-
-Update: I've found a generalization of this conjecture. See the corresponding Mathematica expression here. Hopefully, it can be simplified.
-
-REPLY [2 votes]: (Too long for a comment.)
-Note that,
-$$\sum_{n=0}^\infty\frac{\Gamma\!\left(n+\tfrac14\right)}{2^n\,(4n+1)^2\,n!}=A=B$$
-$$A=\frac{\Gamma\!\left(\tfrac14\right)\sqrt[4]2}{192}\left[\vphantom{\huge|}6\sqrt{2}\left(2\pi\ln2-\ln^22-8\operatorname{Li}_2\left(\tfrac1{\sqrt2}\right)\right)+3\psi^{(1)}\!\left(\tfrac18\right)-48G+\left(\vphantom{\large|}7\sqrt2-6\right)\pi^2\right]$$
-$$B=\frac{\Gamma\!\left(\tfrac14\right)\sqrt[4]2}{192}\left[\vphantom{\huge|}6\sqrt{2}\left(2\pi\ln2-\ln^22-8\operatorname{Li}_2\left(\tfrac1{\sqrt2}\right)\right)\color{red}-3\psi^{(1)}\!\left(\color{red}{\tfrac58}\right)\color{red}+48G+\left(\vphantom{\large|}7\sqrt2\color{red}+6\right)\pi^2\right]$$
-Since,
-$$\psi^{(1)}\!\left(\tfrac18\right)+\psi^{(1)}\!\left(\tfrac78\right)=2\pi^2(2+\sqrt2)$$
-$$\psi^{(1)}\!\left(\tfrac38\right)+\psi^{(1)}\!\left(\tfrac58\right)=2\pi^2(2-\sqrt2)$$
-or, in general,
-$$\psi^{(1)}\!\left(k\right)+\psi^{(1)}\!\left(1-k\right)=\pi^2\csc^2(k\pi)$$
-one can use any of the arguments $\tfrac18,\tfrac38,\tfrac58,\tfrac78$.<|endoftext|>
-TITLE: Why is the word associative used to represent the concept of the associative property?
-QUESTION [17 upvotes]: For the commutative property ...
-According to wikipedia:
-
-The word "commutative" is a combination of the French word commuter meaning "to substitute or switch" and the suffix -ative meaning "tending to" so the word literally means "tending to substitute or switch."
-
-Therefore the choice of the word commutative to represent the concept of the commutative property makes sense:
-if you switch the order of the operands, you get the same result,
-a * b = b * a.
-What is the corresponding story for the associative property?
-
-REPLY [13 votes]: In French, associer means making links and connections. Therefore, associative literally means tending to make links and connections. If $\star$ is an associative law, one has: $$(a\star b)\star c=a\star(b\star c).$$
-With an associative law, you get the same result regardless of the pairwise associations.<|endoftext|>
-TITLE: Three pythagorean triples
-QUESTION [10 upvotes]: Are there any solutions for $a, b, c$ such that:
-$$a, b, c \in \Bbb N_1$$
-$$\sqrt{a^2+(b+c)^2} \in \Bbb N_1$$
-$$\sqrt{b^2+(a+c)^2} \in \Bbb N_1$$
-$$\sqrt{c^2+(a+b)^2} \in \Bbb N_1$$
-
-REPLY [2 votes]: Here is a complete parametrization of all rational solutions to $\sqrt{a^2+(b+c)^2}\in\mathbb{Q}$, $\sqrt{b^2+(c+a)^2}\in\mathbb{Q}$, and $\sqrt{c^2+(a+b)^2}\in\mathbb{Q}$, where $a,b,c\in\mathbb{Q}$. If $p,q,r\in\mathbb{Q}_{\geq 0}$ are such that $$\frac{2p}{1+2p-p^2}+\frac{2q}{1+2q-q^2}+\frac{2r}{1+2r-r^2}=1\,,\tag{*}$$
-then $(a,b,c)=\left(\frac{2p}{1+2p-p^2}x,\frac{2q}{1+2q-q^2}x,\frac{2r}{1+2r-r^2}x\right)$ for some $x\in\mathbb{Q}$ (namely, $x=a+b+c$). All rational solutions $(a,b,c)$ are of this form. There exists a positive integer solution $(a,b,c)$ associated to $(p,q,r)$ iff $0< p,q,r<1+\sqrt{2}$.
-Frankly, I don't know if solving (*) is any easier than using the method mentioned by Tito Piezas III, but at least there is only one equation to be solved now, and with only $3$ rational variables. (However, if you try to write $p=\frac{m_1}{n_1}$, $q=\frac{m_2}{n_2}$, $r=\frac{m_3}{n_3}$, where $m_i,n_i\in\mathbb{Z}$ for $i=1,2,3$, then you will end up with $6$ variables, but the method mentioned by Tito Piezas III can reduce the number of variables to $5$.) There may be an algebraic-geometry/algebraic-number-theory method to solve (*), but I'm not so knowledgeable in these fields.
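-Before the worked example below, here is a quick exact-arithmetic check of this parametrization (a sketch in plain Python; the helper t and the specific rationals are taken from that example):
-from fractions import Fraction
-from math import isqrt
-
-def t(p):   # p -> 2p/(1+2p-p^2)
-    return 2*p / (1 + 2*p - p*p)
-
-p, q, r, x = Fraction(2, 27), Fraction(1, 3), Fraction(8, 23), 833
-assert t(p) + t(q) + t(r) == 1            # the constraint (*)
-a, b, c = (int(t(v) * x) for v in (p, q, r))
-for u, v in ((a, b + c), (b, a + c), (c, a + b)):
-    s = u*u + v*v
-    assert isqrt(s)**2 == s               # each sum of squares is a perfect square
-print(a, b, c)                            # 108 357 368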
-Here is an example: $(a,b,c)=(108,357,368)$ is given by $(p,q,r,x)=\left(\frac{2}{27},\frac{1}{3},\frac{8}{23},833\right)$, where $\frac{2p}{1+2p-p^2}=\frac{108}{833}$, $\frac{2q}{1+2q-q^2}=\frac{3}{7}=\frac{357}{833}$, and $\frac{2r}{1+2r-r^2}=\frac{368}{833}$.<|endoftext|>
-TITLE: Is a symmetric matrix characterized by the diagonal of its resolvent?
-QUESTION [6 upvotes]: The resolvent of a square matrix $A$ is defined by $R(s) = (A-sI)^{-1}$ for $s \notin \operatorname{spect}(A)$.
-Is knowing the diagonal of $R(s)$ for all $s$ sufficient to recover $A$ when $A$ is symmetric?
-edit: a counter-example of two matrices $A,B$ whose resolvents have the same diagonal has been found by Robert Israel. In the counter-example, $A = P B P^T$ for some permutation matrix $P$. Now the question is, is it possible to recover $A$ up to permutations of rows and columns?
-
-REPLY [2 votes]: If $A$ and $B$ are the adjacency matrices of non-isomorphic strongly regular graphs on $n$ vertices with the same parameters, then the diagonal entries of the resolvent are all equal (each equals $\tfrac{1}{n}\operatorname{tr}R(s)$, which is determined by the common characteristic polynomial). Such graphs on 16 vertices can be constructed from the two $4\times4$ Latin squares. Since the graphs are not isomorphic, the two matrices are not permutation equivalent.<|endoftext|>
-TITLE: What Yoneda tells us about algebraic geometry
-QUESTION [9 upvotes]: I am currently learning about relative algebraic geometry, and I'm just trying to walk myself through some of the foundations and motivating examples before moving on to the proper stuff (symmetric monoidal categories et al.), starting with the category $\mathsf{Comm}_k$ of commutative $k$-algebras.
-Below is my attempt at explaining the setup.
-
-Define $\mathsf{Aff}_k=\mathsf{Comm}_k^{\textrm{op}}$, the category of affine schemes over $k$;
-Define $\mathsf{Sp}_k=\mathsf{PShv}(\mathsf{Aff}_k)=\mathsf{Fun}(\mathsf{Aff}_k^{\textrm{op}},\mathsf{Set})$, the category of $k$-spaces;
-Yoneda's lemma tells us that, for $A\in\mathsf{Aff}_k^{\textrm{op}}$ and $F\in\mathsf{Sp}_k$,
-
-
-$\mathrm{Hom}_{\mathsf{Sp}_k}(Y_A,F)\cong F(A)$, where the isomorphism is given by the canonical restriction;
-the functor $Y_A$ is fully faithful;
-
-Define $\mathrm{Spec}=Y\colon\mathsf{Aff}_k\to\mathsf{Sp}_k$, the spectrum functor, as the Yoneda functor;
-Note that $\mathsf{Aff}_k$ is equivalent to the essential image of the Yoneda embedding (since $\mathrm{Spec}$ is fully faithful (by the above) and essentially surjective onto its essential image).
-
-First of all, it'd be nice to know if all of the above is free of mistakes and the correct sort of approach, and if so (or even if not, I guess), here are my questions:
-
-For the first fact that Yoneda's lemma tells us in the above, what exactly does the 'canonical restriction' look like in this scenario?
-What does the first fact actually tell us about this whole setup? I've used the second fact, which is sort of a corollary of Yoneda's lemma, from what I can tell, but don't see what the first fact tells us.
-Why do we care about the last bit, that $\mathsf{Aff}_k$ is equivalent to the essential image of the Yoneda embedding? I know that some places define $\mathsf{Aff}_k$ to be this essential image, and others don't...
-If we work with a finitely-generated $k$-algebra $A$ then we can write
-$$A=\frac{k[x_1,\ldots,x_n]}{(f_1,\ldots,f_m)}$$
-and then we have the bijection of sets (if the category is small? see below)
-$$\mathrm{Hom}(A,B)\longleftrightarrow\{y\in B^n\mid f_1(y)=\ldots=f_m(y)=0\}$$
-since a morphism is given by choosing a place to send the $x_i$, but since zero must be preserved, these images must satisfy the $f_i$.
-But are we working with small categories, and should we just be working with finitely-generated commutative $k$-algebras, or is this nice fact just a useful way of looking at the 'nice' objects of $\mathsf{Comm}_k$?
-
-I know there are a lot of questions, so if you only answer one, please make it 2 or 3 (but I think they are all reasonably linked).
-
-REPLY [11 votes]: The Yoneda lemma tells you that specifying an affine scheme $X$ is the same thing as specifying its "functor of points," namely the functor $X(-) : \text{CRing} \to \text{Set}$ it represents. For this observation to have real weight you should know examples of affine schemes which are most easily described by describing their functors of points.
-For example, there is a functor $GL_n(-) : \text{CRing} \to \text{Grp}$ sending a commutative ring $R$ to the group $GL_n(R)$ of invertible $n \times n$ matrices over $R$, and sending a morphism $R \to S$ of commutative rings to the obvious group homomorphism $GL_n(R) \to GL_n(S)$. The underlying set-valued functor of this group-valued functor is representable by an affine scheme (exercise), and the fact that it lifts to a group-valued functor means that this affine scheme is a group scheme, or equivalently that it is $\text{Spec}$ of a commutative Hopf algebra. But I don't have to write down this Hopf algebra to write down its functor of points.
-At the level of morphisms, there is a natural transformation $\det : GL_n(-) \to GL_1(-)$ which, by the Yoneda lemma, comes from some morphism of Hopf algebras in the other direction. But again I don't have to write down this morphism of Hopf algebras to write down the effect it has on functors of points.<|endoftext|>
-TITLE: Sampling, Fourier Transform, and Discrete Fourier Transform
-QUESTION [9 upvotes]: The Fourier Transform and inverse Fourier Transform are defined as:
-$$F(k) = \int_{-\infty}^\infty f(x)e^{-2\pi i k x}dx \\ f(x) = \int_{-\infty}^\infty F(k)e^{2\pi i k x}dk$$
-The Discrete Fourier Transform (DFT) and Inverse Discrete Fourier Transform (iDFT) are defined as
-$$G[k] =\sum_{n=0}^{N-1} g[n] e^{-i2\pi \frac{kn}{N}}\\
- g[n] = \frac1N \sum_{k=0}^{N-1} G[k] e^{i2\pi \frac{kn}{N}}$$
-Let $f$ be a function such that $f:\mathbb{R}\rightarrow\mathbb{C}$. Consider the sampled signal $f_N(n)=f(n\Delta)$ for $n\in\{0,1,\ldots,N-1\}$ and some positive scalar $\Delta$. Note that $f_N$ is an $N$-tuple.
-Under what conditions is $\text{DFT}(f_N)$ directly related to the Fourier Transform of $f$? (By directly related, I'm hoping that the values of the DFT are either equal to or proportional to values of the Fourier Transform.) What is that relationship?
-
-REPLY [10 votes]: Let me expand on Matt L.'s excellent answer to a very similar question (please read that first).
-We start with a continuous signal $s(t)$, which might be finite or infinite in length, but we're interested in signals that vanish beyond some window:
-
-Now we truncate the signal in time (if it's infinite or too long) and create a list of equidistant samples of this signal:
-
-This defines a new signal $x(t)$ with length $T_0$ (or a "rate" $f_0 = 1/T_0$), and a sequence $x[k]$ with length $N$ and sampling period $T_s$ (sampling rate $f_s = 1/T_s$). Note that the signals $s(t)$ and $x(t)$ do not have Fourier Series, because they're not periodic.
-But they do have Fourier Transforms:
-$$
-X(f) \triangleq \int_{-\infty}^\infty x(t)e^{-i2\pi ft}dt \quad \text{(Fourier Transform)} \tag{1}
-$$$$
-X[k] \triangleq \sum_{n=0}^{N-1} x[n]e^{-i2\pi nk/N} \quad \text{(Discrete Fourier Transform)} \tag{2}
-$$
-One of the similarities between both transforms is that they can be used to recover their respective signals:
-$$
-x(t) = \int_{-\infty}^\infty X(f)e^{i2\pi ft}df \tag{3}
-$$$$
-x[k] = \frac{1}{N}\sum_{n=0}^{N-1} X[n]e^{i2\pi n\frac{k}{N}} \tag{4}
-$$
-Most importantly, if the sampling rate ($f_s$) and the length of $x(t)$ ($T_0=1/f_0$) are large enough, then:
-$$
-\bbox[5px,border:2px solid #C0A000]{X[k] \approx f_sX(kf_0)} \tag{5}
-$$
-That is, the Discrete Fourier Transform is approximately a sampling of the regular Fourier Transform starting at $f=0$ and in steps of $\Delta f = f_0 = 1/T_0$ (where $T_0$ is the length of the truncated signal) and then scaled by $f_s$ (the sampling rate).
-
-It's not very obvious from the definitions $(1)$ and $(2)$ that this is the case, so I'll try to present a "proof" for the relationship in $(5)$.
-"Proof"
-We start by constructing a periodic signal (and its corresponding periodic sequence of samples) out of $x(t)$:
-$$
-x_p(t) \triangleq \sum_{n=-\infty}^\infty x(t-nT_0) \tag{6}
-$$$$
-x_p[k] \triangleq x_p(kT_s) \tag{7}
-$$
-Now $x_p(t)$ has a Fourier Series:
-$$
-x_p(t) = \sum_{n=-\infty}^\infty c[n]e^{i2\pi nf_0t} \tag{8}
-$$
-where:
-$$
-c[n] \triangleq \frac{1}{T_0}\int_{T_0} x_p(t)e^{-i2\pi nf_0t}dt \tag{9}
-$$
-And $x(t)$ has a Fourier Transform $(1)$ which looks a lot like $c[n]$. In fact it's simply:
-$$
-c[n] = f_0X(nf_0) \tag{10}
-$$
-Substituting $(10)$ into $(8)$ and the result into $(6)$, we get one of the forms of the Poisson Summation Formula:
-$$
-\sum_{n=-\infty}^\infty x(t-nT_0) = \sum_{n=-\infty}^\infty f_0X(nf_0)e^{i2\pi nf_0t} \tag{11}
-$$
-This allows us to write $x_p[k]$ in a new way:
-$$\begin{align}
-x_p[k] &= \sum_{n=-\infty}^\infty f_0X(nf_0)e^{i2\pi nf_0(kT_s)} \tag{12} \\
- &= \sum_{n=-\infty}^\infty f_0X(nf_0)e^{i2\pi k\frac{n}{N}} \tag{13}
-\end{align}$$
-where $N = T_0/T_s$ is the number of samples in the finite sequence. To simplify slightly, we assume that $N$ is even. If $|X(f)|$ drops fast enough, the terms for $|n|\ge N/2$ will vanish, giving:
-$$
-x_p[k] \approx \sum_{n=1-N/2}^{N/2} f_0X(nf_0)e^{i2\pi k\frac{n}{N}} \tag{14}
-$$
-Now we work on the other representation of $x_p[k]$, using $(4)$ and remembering that $x_p[k] = x[k]$ for $k=0,\dots,N-1$:
-$$\begin{align}
-x_p[k] &= \frac{1}{N}\sum_{n=0}^{N-1} X[n]e^{i2\pi n\frac{k}{N}} \tag{15} \\
- &= \frac{X[0]}{N} + \sum_{n=1}^{N/2-1} \frac{X[n]}{N}e^{i2\pi n\frac{k}{N}}
- + \frac{X[N/2]}{N}e^{i2\pi \frac{N}{2}\cdot\frac{k}{N}}
- + \underbrace{\sum_{n=N/2+1}^{N-1} \frac{X[n]}{N}e^{i2\pi n\frac{k}{N}}}_S \tag{16}
-\end{align}$$
-$$\begin{align}
-S &= \sum_{n=1}^{N/2-1} \frac{X[N-n]}{N}e^{i2\pi (N-n)\frac{k}{N}} \tag{17} \\
- &= \sum_{n=1}^{N/2-1} \frac{X[N-n]}{N}e^{i2\pi (-n)\frac{k}{N}}\underbrace{e^{i2\pi N\frac{k}{N}}}_{=1} \tag{18} \\
- &= \sum_{n=1-N/2}^{-1} \frac{X[n]}{N}e^{i2\pi n\frac{k}{N}} \tag{19}
-\end{align}$$
-where we use circular indexing for $X[n]$, i.e. $n<0 \Rightarrow X[n]=X[n+N]$.
-Joining all the terms together simplifies to:
-$$
-x_p[k] = \sum_{n=1-N/2}^{N/2} \frac{X[n]}{N}e^{i2\pi n\frac{k}{N}} \tag{20}
-$$
-Finally, comparing $(20)$ to $(14)$, we conclude that:
-$$
-\frac{X[n]}{N} \approx f_0X(nf_0) = \frac{X(nf_0)}{T_0} = \frac{X(nf_0)}{N T_s} = f_s\frac{X(nf_0)}{N} \tag{21}
-$$
-
-Simulations
-In practice, when dealing with real signals, instead of calculating the Fourier Transform of the continuous signal, we sample the data (often the data is already in discrete form) and calculate its Fast Fourier Transform (which is exactly the same as the Discrete Fourier Transform, but computed by a faster method). Then we use this to get a decent approximation of the continuous Fourier Transform. Here is an example of such a signal:
-
-Notice how this one is not perfectly symmetric. Its Fourier Transform can be computed by applying the definition $(1)$ directly, which might take a long time:
-
-Notice how the argument goes crazy. That's because most of the energy of the signal is shifted to $t\approx 2.3$, which adds a phase of $\approx -2\pi\cdot 2.3f$ to the spectrum relative to a more "centered" one.
-We can also sample the signal and calculate the FFT for different values of $N$ and compare the result with samples of the exact Fourier Transform. The approximation gets better and better for higher sampling rates ($T_0$ is fixed here):
-
-
-
-The simulations above were done in Python 3, with the SciPy and NumPy libraries. The code was written for personal use only, so it's a mess, but it's given below just for reference:
-import matplotlib.pyplot as plt
-from cmath import *
-from scipy.integrate import quad
-from numpy import linspace, angle, inf, nan, isnan
-import matplotlib.lines as mlines
-from numpy.fft import fft
-from numpy import array as vec
-
-hide_discontinuity = True
-fourier_transform_plot_samples = 100 #recommended >1000, but it's very slow
-signal_plot_samples = 1000
-
-#signal(t+DT/2) = exp(t/3)*sin(2*pi*F*t)*bump(t)
-F = 5.0
-DT = 4.0
-def bump(t):
-    if abs(t)>=1:
-        return 0
-    return exp(-1/(1-t*t))
-def centralized_signal(t):
-    return (exp(t/3)*sin(2*pi*F*t)*bump(t)).real
-def signal(t):
-    return centralized_signal(t-DT/2)
-t = linspace(0,DT,num=signal_plot_samples)
-signal_samples = [signal(k) for k in t]
-
-plt.plot(t,signal_samples)
-plt.title('Signal')
-plt.xlabel('Time (s)')
-plt.ylabel('Amplitude')
-plt.grid(True, linestyle='dotted')
-fig = plt.gcf()
-fig.canvas.set_window_title('Signal')
-plt.show()
-
-###############################
-def integrate(func, a, b):
-    def real_func(k):
-        return func(k).real
-    def imag_func(k):
-        return func(k).imag
-    real_integral = quad(real_func, a, b)
-    imag_integral = quad(imag_func, a, b)
-    return real_integral[0] + 1j*imag_integral[0]
-
-def fourier(func, f, a, b):
-    return integrate(lambda t: func(t)*exp(-2j*pi*f*t), a, b)
-
-def show_fourier_transform():
-    f = linspace(-2*F,2*F,num=fourier_transform_plot_samples)
-    Xm = []
-    Xa = []
-    Xa_prev = nan
-    for k in f:
-        Xf = fourier(signal, k, -inf,inf)
-        Xm.append(abs(Xf))
-        Xa_new = angle(Xf)
-        # comparing with "!= nan" is always True in Python; test with isnan instead
-        if hide_discontinuity and not isnan(Xa_prev) and abs(Xa_prev-Xa_new) > 0.999*pi/2:
-            Xa_new = nan
-        Xa_prev = Xa_new
-        Xa.append(Xa_new)
-
-    ay = plt.gca()
-    ay2 = ay.twinx()
-    ay.plot(f,Xm, color='C0')
-    ay2.plot(f,Xa, color='C1', linewidth=0.5)
-    C0_line = mlines.Line2D([], [], color='C0', label='Magnitude')
-    C1_line = mlines.Line2D([], [], color='C1', linewidth=0.5, label='Argument')
-    plt.legend(handles=[C0_line, C1_line], loc=1)
-    plt.title('Fourier Transform')
-    plt.xlabel('Frequency (Hz)')
-    ay.set_ylabel('Magnitude')
-    ay2.set_ylabel('Argument')
-    ay.grid(True, linestyle='dotted')
-    fig = plt.gcf()
-    fig.canvas.set_window_title('Fourier Transform')
-    plt.show()
-
-show_fourier_transform()
-
-###############################
-def show_relation(N):
-    t = linspace(0,DT,num=N)
-
-    Fs = N/DT
-    Ts = 1/Fs
-    f_samples = round(2*F*DT)
-    f_max = (f_samples-1)/DT
-
-    indices_range = range(0,f_samples)
-    frequency_range = linspace(0,f_max,num=f_samples)
-    dense_frequency_range = linspace(0,f_max,num=fourier_transform_plot_samples)
-
-    ########## DFT ##########
-    DFT = []
-    for k in t:
-        DFT.append(signal(k))
-    DFT = fft(DFT)[indices_range]
-
-    ########## FT ###########
-    FT = []
-    for k in frequency_range:
-        FT.append(fourier(signal, k, -inf,inf))
-    FT = vec(FT)
-    dense_FT = []
-    for k in dense_frequency_range:
-        dense_FT.append(fourier(signal, k, -inf,inf))
-    dense_FT = vec(dense_FT)
-
-    ######## Error ##########
-    dF = Ts*DFT - FT
-
-    ######### Plot ##########
-    ax = plt.gca()
-    ax2 = ax.twiny()
-
-    ax.plot(dense_frequency_range, abs(dense_FT), color='C1')
-    ax.scatter(frequency_range, abs(FT), marker='o', color='C1')
-    #ax.plot(frequency_range, angle(FT), marker='o', color='C1', linewidth=0.5)
-    ax2.plot(indices_range, abs(Ts*DFT), marker='d', color='C0')
-    #ax2.plot(indices_range, angle(Ts*DFT), marker='d', color='C0', linewidth=0.5)
-    ax2.plot(indices_range, abs(dF), color='C2')
-
-    C0_line = mlines.Line2D([], [], color='C1', marker='o', label='Fourier Transform')
-    C1_line = mlines.Line2D([], [], color='C0', marker='d', label='DFT × Tₛ')
-    C2_line = mlines.Line2D([], [], color='C2', label='Error')
-    plt.legend(handles=[C0_line, C1_line, C2_line], loc=1)
-
-    ax.set_xlim((-f_max/f_samples, f_max+f_max/f_samples))
-    ax2.set_xlim((-1, f_samples-1+1))
-    ax.set_xlabel('Frequency (Hz)')
-    #ax2.set_xlabel('Sample')
-    ax.set_ylabel('Magnitude')
-
-    plt.title('N = '+str(N)+', Tₛ = '+str(1000*Ts)+' ms', y=1.08)
-    ax.yaxis.grid(True, linestyle='dotted')
-    ax.xaxis.grid(True, linestyle='dotted')
-    fig = plt.gcf()
-    fig.canvas.set_window_title('Relation between Fourier Transform and DFT')
-    plt.show()
-
-show_relation(100)
-show_relation(1000)
-show_relation(100000)<|endoftext|>
-TITLE: Almost-identity: $[\int_0^\infty{\rm d}x-\sum_{x=1}^\infty] \prod_{k=0}^N\text{sinc}\left(\frac{x}{2k+1}\right) = \frac{1}{2}$
-QUESTION [8 upvotes]: Show that the identity
- $$\int_0^\infty \prod_{k=0}^N \text{sinc}\left(\frac{x}{2k+1}\right)\,{\rm d}x - \sum_{n=1}^\infty \prod_{k=0}^N \text{sinc}\left(\frac{n}{2k+1}\right) = \frac{1}{2}$$ where $\text{sinc}(x) = \frac{\sin(x)}{x}$ holds for $N=0,1,2,\ldots,40000$ but fails for all larger $N$.
-
-I remember seeing this strange identity a few years ago and it stuck in my mind, but unfortunately I can't find the source right now, so it's reconstructed from memory (and checked with a computer for small $N$, although $40000$ might not be accurate). This is why I'm asking it here.
-If I remember correctly it's closely linked to Fourier transforms, and for $N$ larger than $\sim 40000$ the difference between the left and right hand sides should be smaller than $\sim 10^{-10000}$, so the agreement is extremely good.
-
-Does anyone know the source of this problem or otherwise how to solve it? What is the theory behind it (i.e. why does it break down at some finite value)?
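-If the Fourier-transform connection is the right lens, the quantity to watch should be the total width $\sum_{k=0}^{N}\frac{1}{2k+1}$ of the convolved rectangles measured against $2\pi$; a minimal sketch in Python locating where that sum first crosses $2\pi$ (which matches the claimed breakdown scale):
-import math
-s, N = 0.0, -1
-while s <= 2*math.pi:
-    N += 1
-    s += 1.0/(2*N + 1)
-print(N, s)   # first N with partial sum > 2*pi: N = 40249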
-
-REPLY [5 votes]: There is a theorem on page 2 of this paper which states that for $N+1 > 1$ positive numbers $a_0,a_1,\ldots,a_N > 0$ the identity
-$$\dfrac{1}{2}+\sum_{n = 1}^{\infty}\prod_{k = 0}^{N}\text{sinc}(a_kn) = \int_{0}^{\infty}\prod_{k = 0}^{N}\text{sinc}(a_kx)\,dx$$
-holds provided that $\displaystyle\sum_{k = 0}^{N}a_k \le 2\pi$. (Also, for $N = 0$, the identity holds if $a_0 < 2\pi$.)
-Since $\displaystyle\sum_{k = 0}^{40248}\dfrac{1}{2k+1} \approx 6.283175 < 2\pi < 6.283188 \approx \displaystyle\sum_{k = 0}^{40249}\dfrac{1}{2k+1}$, the identity holds for $1 \le N \le 40248$ but fails for $N \ge 40249$.
-The proof of this theorem has to do with Fourier transforms. Evaluating the Fourier transform of a function at $0$ gives you the integral of the function over $\mathbb{R}$. The Fourier transform of the product of sinc functions is a convolution of "rectangle" functions whose widths are proportional to the $a_k$. If you convolve enough rectangle functions together, the value of the result at $0$ changes. This is an intuitive/non-rigorous explanation. I'm sure a more rigorous explanation can be found online.<|endoftext|>
-TITLE: Show that subspace measure is a measure
-QUESTION [5 upvotes]: Let $(X, \Sigma, \mu)$ be a measure space and $D \subseteq X$.
-
-Theorem:
-Define $\mu_D: \Sigma_D \to [0,\infty]$ by
- $$ \mu_D(S) = \inf\{\mu(U)\mid U \in \Sigma, S \subseteq U\},$$
- where $\Sigma_D = \{U \cap D\mid U \in \Sigma\}$.
- Then $(D, \Sigma_D, \mu_D)$ is a measure space.
-
-I am able to show that $\Sigma_D$ is a $\sigma$-algebra on $D$, but I am stuck while showing that $\mu_D$ is a measure. I tried the following:
-Let $S, T\in \Sigma_D$ be disjoint sets. Then somehow I need to show that
-$$\inf\{\mu(U)\mid U \in \Sigma, S \cup T \subseteq U\}
-= \inf\{\mu(U) + \mu(V)\mid
- U, V \in \Sigma \land U, V \text{ are disjoint} \land
- S \subseteq U, T \subseteq V\}$$
-However, that seems to require disjoint sets to have disjoint supersets, which I know is not always true. So what is incorrect in my approach?
-
-REPLY [4 votes]: Let $\{ A_{i}\}_{i=0}^{\infty} \subseteq \Sigma_{D}$ be pairwise disjoint. We need to show that
-$$\mu_{D}\left(\bigcup_{i=0}^{\infty} A_{i}\right) = \sum_{i=0}^{\infty} \mu_{D}(A_{i}).$$
-Showing that $\mu_{D}(\cup A_{i}) \leq \sum \mu_{D}(A_{i})$ is fairly routine: it is similar to any subadditivity proof, using the subadditivity of $\mu$. Thus it remains to show that $\mu_{D}(\cup A_{i}) \geq \sum \mu_{D}(A_{i})$.
-Well, let $\epsilon > 0$. Then there exists some $B \in \Sigma$ such that $B \supseteq \cup_{i=0}^{\infty} A_{i}$ and $\mu(B) \leq \mu_{D}(\cup_{i=0}^{\infty}A_{i}) + \epsilon$. By definition of $\Sigma_{D}$, for each $i \geq 0$, there exists some $B_{i} \in \Sigma$ such that $B_{i} \cap D = A_{i}$. Since the $A_{i}$'s are pairwise disjoint, $\{B_{i} \cap D\}_{i=0}^{\infty}$ is a pairwise disjoint sequence of sets. Define $$B_{i}^{*} := B \cap \left(B_{i} - \bigcup_{j\neq i}B_{j}\right), \ i \in \mathbb{N}.$$
-Then $B_{i}^{*} \supseteq A_{i}$ and $\{ B_{i}^{*}\}_{i=0}^{\infty} \subseteq \Sigma$ is pairwise disjoint.
-Therefore,
-$$\sum_{i=0}^{\infty} \mu_{D}(A_{i}) \leq \sum_{i=0}^{\infty} \mu(B_{i}^{*}) = \mu\left(\cup_{i=0}^{\infty} B_{i}^{*}\right) \leq \mu(B) \leq \mu_{D}\left(\cup_{i=0}^{\infty} A_{i}\right) + \epsilon.$$
-Since $\epsilon > 0$ was arbitrary, $\sum_{i=0}^{\infty}\mu_{D}(A_{i}) \leq \mu_{D}\left(\cup_{i=0}^{\infty}A_{i}\right)$.<|endoftext|>
-TITLE: Deceptively simple inequality involving expectations of products of functions of just one variable
-QUESTION [7 upvotes]: For a proof to go through in a paper I am writing, I need to prove, as an auxiliary step, the following deceptively simple inequality:
-$$E(X^a) E(X^{a+1} \ln X) > E(X^{a+1})E(X^a \ln X) $$
-where $X>e$ has a continuous distribution and $a>0$. It is known that
-$$E\left(\prod_i^n f_i(X)\right)>\prod_i^nE(f_i(X)) $$
-as long as the functions $f_1\ldots f_n$ are continuous monotonic functions of $X$, and are all, for instance, increasing and satisfy $f_i(X)>0$ (e.g., John Gurland's "Inequalities of Expectations of Random Variables Derived by Monotonicity or Convexity", The American Statistician, April 1968). The inequality I am trying to prove is, in a sense, "in between" the two sides in the inequality above.
-Any suggestion would be very greatly appreciated.
-
-REPLY [3 votes]: Thanks for the insightful and fun problem. Here is a proof (I think) via the Cauchy-Schwarz inequality. Consider the function
-$$
-f(t) \equiv \frac{ \mathbb E[X^{a+t} \ln X] } { \mathbb E[X^{a+t}] }.
-$$
-So the target inequality is $f(1) > f(0)$. We can show this by proving $f(t)$ is increasing, or $f'(t) \ge 0$.
-But this is easy, because
-$$
-\begin{aligned}
-f'(t)
-&=
-\frac{d}{dt} \left(
-\frac{ \mathbb E[e^{(a+t)\ln X} \ln X] } { \mathbb E[e^{(a+t) \ln X}] }
-\right)
-\\
-&=
-\frac{ \mathbb E\left[ \frac{d}{dt} e^{(a+t)\ln X} \ln X \right] }
-{ \mathbb E\left[e^{(a+t) \ln X} \right] }
--
-\mathbb E[ e^{(a+t)\ln X} \ln X ]
-\frac{ \mathbb E\left[ \frac{d}{dt} e^{(a+t) \ln X} \right] }
- { \mathbb E[e^{(a+t) \ln X}]^2 }
-\\
-&=\frac{
- \mathbb E[X^{a+t} (\ln X)^2] \, \mathbb E[X^{a+t}]
- -
- \mathbb E[X^{a+t} (\ln X)]^2
-} {
- \mathbb E\left[X^{a+t}\right]^2
-} \ge 0.
-\qquad (1)
-\end{aligned}
-$$
-The numerator of (1) is nonnegative by the
-Cauchy-Schwarz inequality.
-That is, with $U = X^{\frac{a+t}{2}} \ln X, V = X^{\frac{a+t}{2}}$, we have
-$$
-\mathbb E\left[U^2 \right] \mathbb E\left[V^2\right] \ge \mathbb E[U \, V]^2.
-\qquad (2)
-$$
-It remains to argue that the equality cannot hold for all $t \in [0,1]$, which is easy.
-Alternative to the Cauchy-Schwarz inequality (2)
-Alternatively, we can show (1) directly by observing that
-$$
-\mathbb E\left[X^{a+t}(y - \ln X)^2 \right] \ge 0
-$$
-holds for all $y$ (since the quantity being averaged is nonnegative), i.e., the quadratic polynomial
-$$
-\begin{aligned}
-p(y)
-&=
-\mathbb E\left[X^{a+t}\right] y^2
-- 2 \, \mathbb E\left[X^{a+t} \ln X\right] y
-+ \mathbb E\left[X^{a+t} (\ln X)^2\right]
-\\
-&\equiv A \,y^2 - 2 \, B \, y + C,
-\end{aligned}
-$$
-is nonnegative for every $y$.
-Thus the discriminant of $p(y)$, which is $4B^2 - 4AC$, must be non-positive. This means $AC \ge B^2$, or
-$$
-\mathbb E\left[X^{a+t}\right] \,
-\mathbb E\left[X^{a+t} (\ln X)^2\right]
-\ge
-\mathbb E\left[X^{a+t} \ln X\right]^2.
-$$
-
-Further discussion
-There is a more intuitive interpretation of (1).
-We define the cumulant generating function of $\ln X$ as
-$$
-F(t) \equiv \log \left\{ \mathbb E\left[ X^{a+t} \right] \right\}.
-$$
-We find $f(t) = F'(t)$, and $f'(t) = F''(t) \ge 0$. In other words, (1) is a generalized statement of the fact that the second cumulant of $\ln X$ is non-negative (here evaluated under the exponential tilt by $a+t$).<|endoftext|>
-TITLE: Proof of the Mazur-Ulam Theorem
-QUESTION [5 upvotes]: The Mazur-Ulam Theorem (Theorem $2.1$ there) states that any surjective isometry between two real normed spaces $f:X \rightarrow Y$ is affine.
-In the proof of the theorem, the author mentioned that it suffices to show that for any $x, y \in X$,
-$$f\left(\dfrac{x+y}{2}\right) = \dfrac{f(x)+f(y)}{2}$$
-Why is that the case? How does one conclude that $f$ is affine from the equation above?
-Recall that $f:X \rightarrow Y$ is an affine function if for all $x,y \in X$ and $0 \leq t \leq 1$,
-$$f[(1-t)x+ty] = (1-t)f(x) + t f(y)$$
-
-REPLY [10 votes]: The midpoint-affine property $$f\left(\dfrac{x+y}{2}\right) = \dfrac{f(x)+f(y)}{2} \tag{1}$$ implies being affine under the assumption that $f$ is continuous (which it is, being an isometry). As stated, $(1)$ amounts to the case $t=1/2$ of
-$$f[(1-t)x+ty] = (1-t)f(x) + t f(y)\tag{2}$$
-But applying $(1)$ again, the second time to $x$ and $(x+y)/2$, yields $(2)$ for $t=1/4$. Similarly, applying $(1)$ to $y$ and $(x+y)/2$ yields $(2)$ for $t=3/4$.
-Continuing this process, we obtain $(2)$ for all dyadic rationals in $(0,1)$: numbers of the form $k/2^m$ with $0<k<2^m$. Since the dyadic rationals are dense in $[0,1]$ and $f$ is continuous, $(2)$ follows for all $0\le t\le 1$.<|endoftext|>
-TITLE: Is it 'more rigorous' to perform definite integrations, rather than indefinite integration while solving ODEs?
-QUESTION [8 upvotes]: In the beginning of my ODE course, my professor said something about performing definite integrations being 'more rigorous' than indefinite ones somehow, but also that it really wasn't very important, as both worked for the same problems.
-I remember he gave an example like the following:
-
-Consider the simple differential equation
- $$
-x'=g(t),\quad x(\tau)=\xi.
-$$
-Then, integrating from $\tau$ to $t$, we have
-$$
-x(t)-\xi=\int_\tau^tg(s)ds \iff x(t)=\xi+\int_\tau^tg(s)ds
-$$
-And we have our solution.
-Note that if we performed an indefinite integration, we wouldn't have gotten a solution, but many 'candidates'.
-
-My questions are:
-
-How right is my professor, in terms of 'rigour'?
-What allows one to change $t\mapsto s$ before integrating?
-
-REPLY [4 votes]: Your professor's definition of "rigorous" is a personal one.
-All his example does is use the explicit condition supplied. It's no more rigorous than integrating the general form with the arbitrary constant and then calculating the value of the arbitrary constant from the given condition.
-However, I would take the view that such convenient closed forms of the solution do not generally present themselves with differential equations. In those cases you often have to work out the general form. The arbitrary constant may even give the solution a "symmetry" or "shape" that an attempt to bypass that point will miss, even after you calculate the particular value for the constant in a particular case.
-Seeing the general shape is often a significant factor in extending and improving physical theories.<|endoftext|>
-TITLE: Evaluating a certain integral which generalizes the ${_3F_2}$ hypergeometric function
-QUESTION [6 upvotes]: Euler gave the following well-known integral representations for the Gauss hypergeometric function ${_2F_1}$ and the generalized hypergeometric function ${_3F_2}$: for $0<\Re{\left(\beta\right)}<\Re{\left(\gamma\right)}$,
-$$\small{{_2F_1}{\left(\alpha,\beta;\gamma;z\right)}=\frac{1}{\operatorname{B}{\left(\beta,\gamma-\beta\right)}}\int_{0}^{1}\frac{t^{\beta-1}\left(1-t\right)^{\gamma-\beta-1}}{\left(1-zt\right)^{\alpha}}\,\mathrm{d}t};\tag{1}$$
-and for $0<\Re{\left(\mu\right)}<\Re{\left(\nu\right)}$,
-$$\small{{_3F_2}{\left(\alpha,\beta,\mu;\gamma,\nu;z\right)}=\frac{1}{\operatorname{B}{\left(\mu,\nu-\mu\right)}}\int_{0}^{1}t^{\mu-1}\left(1-t\right)^{\nu-\mu-1}{_2F_1}{\left(\alpha,\beta;\gamma;zt\right)}\,\mathrm{d}t}.\tag{2}$$
-I'm curious to learn if there is a way to evaluate the following integral (possibly in terms of higher-order generalized hypergeometric functions or the two-variable Appell functions?):
-
-$$\small{\mathcal{I}{\left(\alpha,\beta,\gamma,z;\mu,\nu,\rho,w\right)}=\int_{0}^{1}\frac{t^{\mu-1}\left(1-t\right)^{\nu-\mu-1}}{\left(1-wt\right)^{\rho}}{_2F_1}{\left(\alpha,\beta;\gamma;zt\right)}\,\mathrm{d}t}.\tag{3}$$
-
-Now, this integral $\mathcal{I}$ is a straightforward generalization of $(2)$, and it seems only natural to me that there is a paper on this integral out there somewhere. But if it exists it has eluded me, despite most furious Googling on my part.
-If any of our resident master integrators have any insight to offer, I'd be very grateful. I'd also welcome any niche references that might be relevant here if someone happens to have any.
-Cheers!
-
-REPLY [3 votes]: This answer is meant to connect the ones given by @Harry Peter and @Start wearing purple, clarifying a few questions that emerged in the comments.
-The integral of interest can be evaluated in the way pointed out by @Harry Peter, without forgetting to set some conditions on the parameters. First of all, for $\left|z\right|<1$ we can use the power series representation of the Gauss hypergeometric function ${}_2F_1$
-$$\begin{align*}\mathcal{I}(\alpha,\beta,\gamma,z;\mu,\nu,\rho,w)&=\int_0^1\frac{t^{\mu-1}(1-t)^{\nu-\mu-1}}{(1-wt)^{\rho}}{}_2F_1(\alpha,\beta,\gamma,zt)\,\mathrm{d}t\\[6pt]&=\int_0^1\sum_{n=0}^{\infty}\frac{t^{\mu+n-1}(1-t)^{\nu-\mu-1}}{(1-wt)^{\rho}}\frac{(\alpha)_n(\beta)_n}{(\gamma)_n}\frac{z^n}{n!}\,\mathrm{d}t,
-\end{align*}$$
-where $(d)_n$ is the (rising) Pochhammer symbol, defined by
-$$(d)_n=\begin{cases}
-1 &\;n=0\\
-d(d+1)\cdots(d+n-1) &\;n>0.
-\end{cases}$$
-The integral can now be performed using the Euler representation, which in our case holds for $\Re(\nu+n)>\Re(\mu+n)>0$ and $\left|\mathrm{arg}(1-w)\right|<\pi$,
-$$\begin{align*}\mathcal{I}(\alpha,\beta,\gamma,z;\mu,\nu,\rho,w)&=\sum_{n=0}^{\infty}\frac{(\alpha)_n(\beta)_n}{(\gamma)_n}\frac{z^n}{n!}B(n+\mu,\nu-\mu){}_2F_1(\rho,n+\mu;n+\nu;w)\\[6pt]&=\sum_{n,m=0}^{\infty}\frac{\Gamma(n+\mu)\Gamma(\nu-\mu)}{\Gamma(n+\nu)}\frac{(\alpha)_n(\beta)_n}{(\gamma)_n}\frac{(\rho)_m(n+\mu)_m}{(n+\nu)_m}\frac{z^n}{n!}\frac{w^m}{m!},
-\end{align*}$$
-valid for $\left|w\right|<1$.
-Considering that
-$$(d)_n=\frac{\Gamma(d+n)}{\Gamma(d)}\quad\text{for}\;\;d\neq 0,-1,-2,\dots$$
-when $\mu,\nu\neq 0,-1,-2,\dots$ we can write
-$$\begin{align*}\mathcal{I}(\alpha,\beta,\gamma,z;\mu,\nu,\rho,w)&=\sum_{n,m=0}^{\infty}\frac{\Gamma(n+\mu+m)\Gamma(\nu-\mu)}{\Gamma(n+\nu+m)}\frac{(\alpha)_n(\beta)_n}{(\gamma)_n}(\rho)_m\frac{z^n}{n!}\frac{w^m}{m!}\\[6pt]&=\frac{\Gamma(\mu)\Gamma(\nu-\mu)}{\Gamma(\nu)}\sum_{n,m=0}^{\infty}\frac{(\mu)_{n+m}(\alpha)_n(\beta)_n(\rho)_m}{(\nu)_{n+m}(\gamma)_n}\frac{z^n}{n!}\frac{w^m}{m!}\\[6pt]&=B(\mu,\nu-\mu)\,\mathrm{F}^{1:2;1}_{1:1;0}\left(\left.\begin{matrix}\mu&:&\alpha,\beta&;&\rho&\\\nu&:&\gamma&;&-&\end{matrix}\right|z,w\right).
-\end{align*}$$
-$\mathrm{F}^{p:q;k}_{l:m;n}$ denotes Kampé de Fériet's double hypergeometric function in the (modified) notation of Burchnall and Chaundy [see Srivastava and Panda - "An integral representation for the product of two Jacobi polynomials", Eq. (26)]
-$$\begin{align*}&\mathrm{F}^{p:q;k}_{l:m;n}\left(\left.\begin{matrix}(a_p)&:&(b_q)&;&(c_k)&\\(\alpha_l)&:&(\beta_m)&;&(\gamma_n)&\end{matrix}\right|x,y\right)\\[6pt]&\quad=\sum_{r,s=0}^{\infty}\frac{\prod_{j=1}^p(a_j)_{r+s}\prod_{j=1}^q(b_j)_r\prod_{j=1}^k(c_j)_s}{\prod_{j=1}^l(\alpha_j)_{r+s}\prod_{j=1}^m(\beta_j)_r\prod_{j=1}^n(\gamma_j)_s}\frac{x^r}{r!}\frac{y^s}{s!},
-\end{align*}$$
-where $(d_h)$ denotes the sequence of $h$ parameters $d_1,\dots,d_h$. In general, convergence of this double series is assured if one of the following conditions holds:
-\begin{align*}
-&\mathrm{(i)}\quad p+q<l+m+1,\quad p+k<l+n+1,\quad\left|x\right|<\infty,\quad\left|y\right|<\infty;\\[5pt]
-&\mathrm{(ii)}\quad p+q=l+m+1,\quad p+k=l+n+1,\quad\text{and}\quad\begin{cases}\left|x\right|^{1/(p-l)}+\left|y\right|^{1/(p-l)}<1 &\text{if}\;\;p>l\\[5pt]\max\left\{\left|x\right|,\left|y\right|\right\}<1 &\text{if}\;\;p\le l.\end{cases}
-\end{align*}
-In our case we have $p=1$, $q=2$, $k=1$, $l=1$, $m=1$ and $n=0$, so we are in (ii) and the double series converges only if $\max\left\{\left|z\right|,\left|w\right|\right\}<1$. Collecting all the constraints introduced we finally have
-$$\mathcal{I}(\alpha,\beta,\gamma,z;\mu,\nu,\rho,w)=B(\mu,\nu-\mu)\,\mathrm{F}^{1:2;1}_{1:1;0}\left(\left.\begin{matrix}\mu&:&\alpha,\beta&;&\rho&\\\nu&:&\gamma&;&-&\end{matrix}\right|z,w\right),$$
-if $\max\left\{\left|z\right|,\left|w\right|\right\}<1$, $\left|\text{arg}(1-w)\right|<\pi$ and $\Re(\nu)>\Re(\mu)>0$.
-
-In the case $\mu=\gamma$ the general result reduces to the one given by @Start wearing purple
-$$\begin{align*}\mathcal{I}(\alpha,\beta,\gamma,z;\gamma,\nu,\rho,&w)=B(\gamma,\nu-\gamma)\,\mathrm{F}^{1:2;1}_{1:1;0}\left(\left.\begin{matrix}\gamma&:&\alpha,\beta&;&\rho&\\\nu&:&\gamma&;&-&\end{matrix}\right|z,w\right)\\[6pt]&=\int_0^1\frac{t^{\gamma-1}(1-t)^{\nu-\gamma-1}}{(1-wt)^{\rho}}{}_2F_1(\alpha,\beta,\gamma;zt)\,\mathrm{d}t\\[6pt]&=\frac{1}{(1-w)^{\rho}}\int_0^1t^{\gamma-1}(1-t)^{\nu-\gamma-1}\left(1-\frac{w}{w-1}+\frac{wt}{w-1}\right)^{-\rho}{}_2F_1(\alpha,\beta,\gamma;zt)\,\mathrm{d}t,
-\end{align*}$$
-where we have multiplied and divided by $(1-w)^{\rho}$. According to the single integral representation of the Appell series $F_3$, the last expression is
-$$\mathcal{I}(\alpha,\beta,\gamma,z;\gamma,\nu,\rho,w)=\frac{B(\gamma,\nu-\gamma)}{(1-w)^{\rho}}F_3\left(\rho,\alpha,\nu-\gamma,\beta;\nu;\frac{w}{w-1},z\right).$$<|endoftext|>
-TITLE: Is there an intuitive way of seeing why there are only finitely many irreducible representations?
-QUESTION [8 upvotes]: Let $G$ be a finite group. A basic result in representation theory is that up to $\mathbb{C}[G]$-module isomorphism, there are only finitely many irreducible representations of $G$ over $\mathbb{C}$.
-The way I'm familiar with proving this is to let $(\pi, W)$ be any irreducible representation of $G$ with character $\chi$, and let $(\phi, V)$ be the left regular representation of $G$ ($V$ is the vector space with formal basis $x_g : g \in G$, and the $\mathbb{C}[G]$-module structure is given by $gx_h = x_{gh}$) with character $\gamma$. One can prove that "the number of times $W$ occurs in $V$" is equal to $$(\chi, \gamma) = \frac{1}{|G|} \sum\limits_{g \in G} \chi(g) \overline{\gamma(g)}$$ and one can then quickly argue that this is not zero.
-It follows that every irreducible representation of $G$ occurs as a direct summand in $V$, and from the limited uniqueness a representation has as a direct sum of irreducible subrepresentations, you can see that there are only finitely many irreducible representations up to isomorphism.
-However, is there a more intuitive way of seeing that there are only finitely many irreducible representations of $G$? Without any character theory, I know that if $(\pi,W)$ is irreducible, then the dimension of $W$ must be $\leq$ the order of $G$, because if $0 \neq v \in W$, then the span of $gv : g \in G$ must be all of $W$. I am having trouble coming up with any immediate results beyond that.
-
-REPLY [3 votes]: Let $\rho$ be a representation of $G$ on the vector space $V$. For any $v \in V$ and any linear functional $l$ on $V$ consider the "matrix coefficient" $\phi_{l,v}$, a function from $G$ to $k$ defined as follows
-$$\phi_{l,v}(g) = l (g v)$$
-The equality $$\phi_{l,hv}(g) = l (g hv)= \phi_{l,v}(gh)$$
-implies that the map from $V$ to $k$-valued functions on $G$,
-$$v\mapsto \phi_{l,v}(\cdot)$$
-is a morphism of representations. Assume now that $V$ is irreducible and $l\ne 0$. Then we conclude that $V$ embeds into the space of $k$-valued functions on $G$, which you may call $k[G]$.
-Thus, we see that every irreducible representation is a subrepresentation of $k[G]$.
-Assume now that $G$ is finite. Let us show that the sum of the dimensions of pairwise non-isomorphic irreducible representations is $\le |G|$. Let $V_1$, $\ldots$, $V_N$ be pairwise non-isomorphic irreducible subrepresentations of $k[G]$ such that $\sum \dim V_i > |G|$. Let $i$ be minimal such that the sum of $V_1$, $\ldots$, $V_i$ is not direct. Then $V_i \subset V_1 \oplus \cdots \oplus V_{i-1}$, and so, projecting to the summands and applying Schur's lemma, it would be isomorphic to one of $V_1$, $\ldots$, $V_{i-1}$, which is a contradiction.<|endoftext|>
-TITLE: Does $\cdots \to G_1\overset f\to G_2 \overset g\to G_3\to \cdots$ exact imply $0\to \ker(g) \to G_2 \to \operatorname{coker}(f)\to 0$ exact?
-QUESTION [9 upvotes]: Given a (part of a) long exact sequence of abelian groups (or modules over some commutative ring)
-$$
-\cdots \to G_1\overset f\to G_2 \overset g\to G_3 \to \cdots
-$$
-we have the short exact sequence
-$$
-0 \to \ker(g) \to G_2 \to \operatorname{coker}(f)\to 0
-$$
-which may be verified by simple diagram chasing. Does the same hold in a general abelian category (where diagram chasing doesn't make sense)? If not, does it more specifically hold in the category of $\mathcal O_X$-modules over a scheme $(X, \mathcal O_X)$?
-I came across this problem because I need to do diagram chasing on the global sections of a diagram of sheaves of $\mathcal{O}_X$-modules with exact rows and columns. While the global section functor is not right-exact, I only need exactness at $\Gamma(X, G_2)$ in the short sequence to do the chasing that I want, which I have, since $\Gamma$ is a left-exact functor.
-
-REPLY [5 votes]: The Freyd-Mitchell theorem guarantees that you can perform diagram chasing in any abelian category (provided that your diagram only involves a set of objects).
-In any case, diagram chasing is unnecessary. In any abelian category, any morphism $g : G_2 \to G_3$ gives rise to an exact sequence
-$$0 \to \text{ker}(g) \to G_2 \to \text{im}(g) \to 0$$
-and it's furthermore true that if $G_1 \xrightarrow{f} G_2 \xrightarrow{g} G_3$ is exact, then $\text{im}(g) = \text{coker}(f)$; this is the categorical dual of the more familiar version of exactness that $\text{im}(f) = \text{ker}(g)$.
-If you prefer the dual argument, in any abelian category, any morphism $f : G_1 \to G_2$ gives rise to an exact sequence
-$$0 \to \text{im}(f) \to G_2 \to \text{coker}(f) \to 0$$
-and it's furthermore true by exactness that $\text{im}(f) = \text{ker}(g)$.<|endoftext|>
-TITLE: Flow on compact manifold
-QUESTION [5 upvotes]: These questions seem simple, but I have not found the answer on the web (I have no mathematician in my neighborhood).
-Does a continuous injective function from $E$ to $E$ have to be surjective, where $E$ is either the $n$-sphere, the $n$-torus, or more generally a compact manifold without boundary? (I have found simple counterexamples in case the manifold is not compact, or has a boundary.)
-The case $E=S^1$ is solved in "An injective continuous map on the unit sphere is a homeomorphism", but I cannot generalize the argument.
-The initial motivation of my question comes from a flow defined on a sphere (or torus), which is injective. And I need it to be surjective as well for it to define a new coordinate system on the sphere.
-Thanks in advance!
-
-REPLY [4 votes]: (This is written backwards, as I realized what tools were necessary. Sorry!)
-Suppose that $M$ is a compact manifold without boundary of pure dimension $n$. Let $f : M \to N$ be a map which is continuous and injective, where $N$ is a connected manifold of the same dimension as $M$. Then $f$ is also surjective.
-Pf: The image must be compact, hence closed. We will show that the image is also open, hence by connectedness of $N$ it must be all of $N$ (it is obviously non-empty). The image is homeomorphic to $M$, using compactness again to guarantee that the set-theoretic inverse is continuous, and hence the image is locally homeomorphic to $R^n$. (This is where we are using the assumption that $M$ has no boundary.) Since we can check whether the image is open by restricting to Euclidean charts in the codomain, we are reduced to the following lemma.
-Lemma: Suppose that $V$ is a subset of $R^n$ such that, with the induced subspace topology, $V$ is homeomorphic to a manifold of dimension $n$. Then $V$ is open in $R^n$.
-Pf: We reduce again to the case that $V$ is homeomorphic to $R^n$. So we need to show that any subspace of $R^n$ homeomorphic to $R^n$ is an open set.
-This is a theorem in Hatcher, 2B.3: "If a subspace $X$ of $R^n$ is homeomorphic to an open set in $R^n$, then $X$ is itself open in $R^n$."
-I'll copy the argument here. Let me know if you have any questions about it! (Or you can just go and read it in Hatcher. Not sure why I typed it up, but I did. Okay, I admit it. It was so that I would read his argument carefully...)
-Proof:
-
-Regard $S^n$ as the one-point compactification of $R^n$. We will show that $X$ is open in $S^n$.
-For a point $x \in X$, let $D$ be a neighborhood homeomorphic to a closed disc, and let $S$ be the boundary.
-Then $S^n - D$ is open, and is connected (by the lemma below, a computation in homology). Also, $S^n - S$ is open, and has two components (also by the lemma below).
-Thus, $S^n - S$ is decomposed as the disjoint union of the connected sets $S^n - D$ and $D - S$, so these must be the components of $S^n - S$. Thus, $D - S$ is open in $S^n$, since it is a component of the open set $S^n - S$, and hence it gives an open neighborhood of $x$ in $S^n$ which is contained in $X$. So $X$ is open.
-
-Lemma: Homology computations (Hatcher 2B.1)
-a) If $D$ is a subspace of $S^n$ homeomorphic to $D^k$ for some $k \geq 0$, then $\tilde{H}_i(S^n - D) = 0$ for all $i$.
-b) If $S$ is a subspace of $S^n$ homeomorphic to $S^k$ for some $k$ with $0 \leq k < n$, then $\tilde{H}_i(S^n - S)$ is $Z$ for $i = n - k - 1$ and $0$ otherwise.
-Proofs:
-a)
-
-The proof is by induction. The case when $k = 0$ is easy, because then $S^n - D \cong R^n$.
-Let $h : I^k \to D$ be a homeomorphism. Let $A = S^n \setminus h(I^{k-1} \times [0,1/2])$ and $B = S^n \setminus h(I^{k-1} \times [1/2,1])$. So $A \cap B = S^n - D$ and $A \cup B = S^n \setminus h(I^{k-1} \times \{1/2\})$.
-The inductive step tells us $\tilde{H}_i(A \cup B) = 0$ for all $i$, so Mayer-Vietoris gives an isomorphism $\Phi : \tilde{H}_i(S^n - D) \to \tilde{H}_i(A) \oplus \tilde{H}_i(B)$ for all $i$.
-This map $\Phi$ is induced by the inclusions $S^n - D \to A$ and $S^n - D \to B$. (There are signs in the Mayer-Vietoris sequence to make it exact, but ignore them here.) The point is that if there is an $i$-dimensional cycle $\alpha$ in $S^n - D$ that is not a boundary in $S^n - D$, then $\alpha$ is not a boundary in $A$ or $B$.
-We iterate this last idea, chopping up the last $I$ factor of $I^k$ into finer pieces to reduce to the lower-dimensional disc case. We end up with a nested sequence $I_1 \supset I_2 \supset \cdots$ of closed intervals in $I$ with intersection a point $p \in I$, such that $\alpha$ is not a boundary in $S^n \setminus h(I^{k-1} \times I_m)$ for each $m$. By the inductive step, $\alpha$ is the boundary of a chain $\beta$ in $S^n \setminus h(I^{k-1} \times \{p\})$. Since $\beta$ is a finite linear combination of singular simplices, its support is compact, and hence will be contained in some $S^n \setminus h(I^{k-1} \times I_m)$. This is a contradiction, hence actually $\alpha$ was a boundary.
-
-b) The base case, when $S$ is two points, is again easy, as $S^n$ minus two points is $S^{n-1} \times R$. To do the inductive step, we write the $k$-sphere $S$ as a union of two $k$-dimensional discs $D_1$ and $D_2$. Then let $A = S^n - D_1$ and $B = S^n - D_2$. Both of these have trivial reduced homology by the previous argument, and the Mayer-Vietoris sequence gives isomorphisms $\tilde{H}_i(S^n - S) \cong \tilde{H}_{i+1}(S^n - (D_1 \cap D_2))$ (here we use that $A \cap B = S^n \setminus S$ and $A \cup B = S^n \setminus (D_1 \cap D_2)$).<|endoftext|>
-TITLE: Evaluation of $\lim_{n\rightarrow \infty}\sum^{n}_{r=1}\frac{r}{n^2+n+r}$
-QUESTION [5 upvotes]: Evaluation of $$\lim_{n\rightarrow \infty}\sum^{n}_{r=1}\frac{r}{n^2+n+r}$$
-
-$\bf{My\; Try::}$ Let $$L = \lim_{n\rightarrow \infty}\sum^{n}_{r=1}\frac{r}{n^2+n+r} = \lim_{n\rightarrow \infty}\sum^{n}_{r=1}\frac{\frac{r}{n}}{1+\frac{1}{n}+\frac{r}{n^2}}\cdot \frac{1}{n}$$
-I want to convert this into a Riemann integral, but that does not seem possible here.
-So how can I solve it? Thanks.
-
-REPLY [4 votes]: Besides Jack's neat answer, here is a different approach giving more than the desired limit.
-
-One may rewrite your sum in terms of the standard harmonic numbers
-$$
-\begin{align}
-\sum^{n}_{r=1}\frac{r}{n^2+n+r}&=\sum^{n}_{r=1}\frac{n^2+n+r-(n^2+n)}{n^2+n+r}\\\\&=n-(n^2+n)\sum^{n}_{r=1}\frac1{n^2+n+r}\\\\
-&=n-(n^2+n)\left(H_{n^2+2n}- H_{n^2+n}\right)
-\end{align}
-$$ then use the asymptotics of the harmonic numbers, as $N \to \infty$,
-$$
-H_N=\log N+\gamma+\frac1{2N}-\frac1{12N^2}+\mathcal{O}\left(\frac1{N^4} \right)
-$$ leading readily to
-
-$$
-\sum^{n}_{r=1}\frac{r}{n^2+n+r}=\frac12-\frac1{3n}+\mathcal{O}\left(\frac1{n^2} \right)
-$$
-
-as $n \to \infty$.<|endoftext|>
-TITLE: Prove that if $a_1 + a_2 + \ldots$ converges then $a_1+2a_2+4a_4+8 a_8+\ldots$ converges and $\lim na_n=0$
-QUESTION [10 upvotes]: Let $a_1,a_2,a_3,\ldots$ be a decreasing sequence of positive numbers. Show that
-(a) if $a_1+a_2+\ldots$ converges then $\lim_{n\rightarrow\infty} n a_n=0$
-(b) $a_1+a_2+\ldots$ converges if and only if $a_1+2 a_2+4 a_4 +\ldots$ converges.
-
-(a)
-If $\sum a_i$ converges then for any $\epsilon>0$ there is a natural number $N_1$ such that if $n>N_1$ then $\sum_{i=n+1}^{2n} a_i<\frac\epsilon2$, and hence $$2n \cdot a_{2n} \le 2\sum_{i=n+1}^{2n} a_i <\epsilon$$
-We can deal in the same way with the odd terms and, for given $\epsilon>0$, find $N_2$ such that
-$$(2n+1) \cdot a_{2n+1} \le 2\sum_{i=n+1}^{2n+1} a_i <\epsilon$$
-So for every $\epsilon>0$ there is $N=\max\{N_1,N_2\}$ such that whenever $n>N$ then $na_n <\epsilon$.
-Is this the correct way of proving that fascinating fact?
-(b)
-If the second series converges then, since $a_1,a_2,\ldots$ is a decreasing sequence of nonnegative numbers, the comparison test shows that the first series converges too.
-For the converse I will show that the partial sums of the second series are bounded.
-$$\begin{align*}
-a_1+\frac12\sum_{i=1}^N2^ia_{2^i}&=a_1+a_2+2a_4+4a_8+\dots+2^{N-1}a_{2^N}\\
-&\leq a_1+ a_2+a_3+a_4+a_5+a_6+a_7+a_8+\dots+a_{2^{N-1}+1}+\dots+a_{2^N-1}+a_{2^N}\\
-&\leq \sum_{i=1}^\infty a_i<\infty
-\end{align*}$$
-
-REPLY [3 votes]: If you're interested, the Cauchy condensation test is actually a special case of the Schlömilch test. The proof is relatively straightforward and very simply stated (with some missing details) here: http://arxiv.org/pdf/1011.4697.pdf.
-The generalization covers many of the same series that the integral test would otherwise cover, so its utility is partially diminished.<|endoftext|>
-TITLE: Is it possible to get a closed-form for $1+2^i+3^i+\cdots (N-1)^i$?
-QUESTION [9 upvotes]: Let $i=\sqrt{-1}$ be the complex imaginary unit, taking $$\arg(2)=0$$ for the definition of the summand $2^i$ in $$1^i+2^i+3^i+\cdots (N-1)^i,$$
-as $$2^i=\cos\log 2+ i\sin\log 2,$$
-see [1].
-
-Question. Is it possible to get a closed form (or the best approximation possible), for an integer $N\geq 1$, of
- $$1+2^i+3^i+\cdots (N-1)^i,$$
- where the summands are defined in the same way, taking principal branches of the complex argument and complex exponentiation?
-
-Thanks in advance; my goal is to start refreshing some easy facts in complex variables. Please tell me if there are mistakes in the use of the previous definitions.
-References:
-[1] MathWorld, http://mathworld.wolfram.com/ComplexExponentiation.html http://mathworld.wolfram.com/ComplexArgument.html
-
-REPLY [3 votes]: I think there is a closed form; notice:
-$$1^i+2^i+3^i+\dots+(n-1)^i=\sum_{k=2}^{n}\left(k-1\right)^i=\sum_{k=1}^{n-1}k^i=\text{H}_{n-1}^{(-i)}=\zeta(-i)-\zeta(-i,n)$$
-Where $\zeta(s,a)$ is the Hurwitz zeta function, $\zeta(s)$ is the Riemann zeta function and $\text{H}_{n}^{(r)}$ is the generalized harmonic number.
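-A quick numerical sanity check of this closed form (a sketch with mpmath, whose zeta evaluates both the Riemann and Hurwitz functions by analytic continuation; $n=10$ is an arbitrary choice):
-from mpmath import mp, zeta, power
-mp.dps = 25
-n = 10
-direct = sum(power(k, 1j) for k in range(1, n))   # 1^i + 2^i + ... + (n-1)^i
-closed = zeta(-1j) - zeta(-1j, n)                 # zeta(-i) - zeta(-i, n)
-print(direct)
-print(closed)   # the two printed values should agree to working precision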
-
-EDIT:
-$$\zeta(-i)=\lim_{s\to0}\left[\sum_{k=1}^{\infty}k^{i-s}\right]\approx 0.0033+0.4182i,$$
-where the limit is taken along the analytic continuation of the sum (the series itself converges only for $\Re(s)>1$).<|endoftext|>
-TITLE: Find all solutions to the functional equation $f(x) +f(x+y)=y+2 $
-QUESTION [5 upvotes]: I've started studying functions and I am having trouble with the following question:
-
-Find all solutions to the functional equation $f(x) +f(x+y)=y+2 $
-
-Using the substitution technique with $y=0$, I get $f(x)=1$.
-This implies that also $f(x+y)=1$, and since $f(x)+f(x+y)=y+2$, I am left with the conclusion that there are no solutions to the above functional equation.
-Is this correct?
-
-REPLY [5 votes]: You are correct. Setting $y = 0$ gives us
-$$ \forall x \in \mathbb{R} : f(x) = 1$$
-In particular
-$$ \forall y \in \mathbb{R} : 1 + 1 = y + 2 \iff y = 0$$
-which clearly is a contradiction!<|endoftext|>
-TITLE: Evaluating $\int_{0}^{\infty} \left[\left(\frac{2015}{2015+x}+\cdots +\frac{2}{2+x}+\frac{1}{1+x}-x\right)^{2016}+1 \right] ^{-1}\mathrm{d}x$
-QUESTION [13 upvotes]: I need to evaluate $$\int_{0}^{\infty} \left[\left(\frac{2015}{2015+x}+\cdots +\frac{2}{2+x}+\frac{1}{1+x}-x\right)^{2016}+1 \right] ^{-1}\mathrm{d}x
-$$
-I've been told that the way forward is showing that the integral is the same as $$\int_0^{\infty} (x^{2016} + 1)^{-1} \, \mathrm{d}x$$
-i.e.: that the weird sum of fractions doesn't affect the integral.
-I've tried $$\sum_{n=1}^{2015} \frac{n}{n+x} = \sum_{n=1}^{2015} \left(1 - \frac{x}{n+x}\right) = 2015 - \sum_{n=1}^{2015} \frac{x}{n+x}$$
-but it's getting me nowhere.
-
-REPLY [12 votes]: $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
- \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
- \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
- \newcommand{\dd}{\mathrm{d}}
- \newcommand{\ds}[1]{\displaystyle{#1}}
- \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
- \newcommand{\ic}{\mathrm{i}}
- \newcommand{\mc}[1]{\mathcal{#1}}
- \newcommand{\mrm}[1]{\mathrm{#1}}
- \newcommand{\pars}[1]{\left(\,{#1}\,\right)}
- \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
- \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
- \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
- \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
-
-$\ds{\int_{0}^{\infty}\bracks{\pars{{2015 \over 2015 + x} + \cdots +
-{2 \over 2 + x} + {1 \over 1 + x} - x}^{2016} + 1}^{-1}\,\dd x:\ {\large ?}}$.
-
-\begin{align}
-&\int_{0}^{\infty}\bracks{\pars{{2015 \over 2015 + x} + \cdots +
-{2 \over 2 + x} + {1 \over 1 + x} - x}^{2016} + 1}^{-1}\,\dd x
-\\[5mm] = &\
-\int_{0}^{\infty}{\dd x \over \mc{F}^{2016}\pars{x} + 1}\quad
-\mbox{where}\quad\mc{F}\pars{x} \equiv \sum_{k = 1}^{2015}{k \over k + x} - x
-\end{align}
-
-\begin{align}
-\mc{F}\pars{x} & \equiv
-\sum_{k = 1}^{2015}{k \over k + x} - x =
-2015 - x\sum_{k = 1}^{2015}{1 \over k + x} - x
-\\[5mm] & =
-2015 - x\sum_{k = 1}^{\infty}\pars{{1 \over k + x} - {1 \over k + 2015 + x}} - x \\[5mm] & =
-2015 - x\pars{H_{x + 2015} - H_{x}} - x\qquad
-\pars{~H_{z}:\ Harmonic\ Number~}
-\end{align}
-
-\begin{align}
-&\bbx{\mc{F}\pars{x} = 2015 - \pars{\vphantom{\large A}H_{x + 2015} - H_{x} + 1}x}
-\\[5mm]
-&\mbox{Some characteristic behaviours of}\ \mc{F}\pars{x}\ \mbox{are}\
-\left\{\begin{array}{l}
-\ds{\mc{F}\pars{0} = 2015}
-\\[2mm]
-\ds{\mc{F}\pars{x} \to -\infty\quad \mbox{as}\quad x \to \infty}
-\\[2mm]
-\ds{\mc{F}'\pars{x} \leq 0\,,\quad \forall\ x \geq 0}
-\\[2mm]
-\ds{\mc{F}\pars{r} = 0\,,\quad r \approx 939.105}
-\\[2mm]
-\ds{\mc{F}\pars{939.105} \approx -6.28337 \times 10^{-4}}
-\\[2mm]
-\ds{\mc{F}'\pars{939.105} \approx -1.46404}
-\end{array}\right.
-\\[1cm]
-&\mbox{Hereafter, I'll perform a numerical evaluation which is based on the Laplace Method}:
-\\
-&\int_{0}^{\infty}{\dd x \over \mc{F}^{2016}\pars{x} + 1} =
-\int_{0}^{r}{\dd x \over \mc{F}^{2016}\pars{r - x} + 1} +
-\int_{0}^{\infty}{\dd x \over \mc{F}^{2016}\pars{x + r} + 1}
-\\[5mm] \approx &\
-2\int_{0}^{\infty}
-\expo{-\bracks{\mc{F}'\pars{r}}^{\large 2016}x^{\large 2016}}\,\dd x =
-{2 \over \verts{\mc{F}'\pars{r}}}\int_{0}^{\infty}\expo{-x^{\large 2016}}\,\dd x
-\\[5mm] = &\
-{2 \over \verts{\mc{F}'\pars{r}}}\,{1 \over 2016}
-\int_{0}^{\infty}x^{1/2016 - 1}\expo{-x}\,\dd x =
-{1 \over 1008}\,{1 \over \verts{\mc{F}'\pars{r}}}\,\Gamma\pars{1 \over 2016}
-\\[5mm] & \approx \bbx{1.36569}
-\quad\mbox{with}\quad
-\left.\mc{F}'\pars{r}\right\vert_{\ r\ \approx\ 939.105} \approx -1.46404
-\end{align}
-
-This value $\ds{\pars{~1.36569~}}$ is the numerical one reported by $\texttt{@achille hui}$ in another answer comment.<|endoftext|>
-TITLE: Where is basic algebraic topology in basic algebraic geometry?
-QUESTION [10 upvotes]: I'm a student meeting commutative algebra and algebraic geometry for the first time. The idea of studying every (commutative) ring geometrically via its spectrum (as a locally ringed space) is amazing. The techniques of homological algebra appear very quickly - already in dimension theory. Sheaf cohomology comes up later in algebraic geometry too (I'm not there yet).
-However, the basic ideas of algebraic topology kind of seem like they're missing (at least at the basic level): we have this topological space - the spectrum, but no books seem to play with it in the sense of deformation retracts, fundamental groups, etc.
-This MO question starts with:
-
-Every (?) algebraic geometer knows that concepts like homotopy groups or singular homology groups are irrelevant for schemes in their Zariski topology.
-
-So I'm guessing the answer to the following question will be a one-liner, but still:
-Why?
-A comment by the user Anonymous on his answer to the linked question mentions the maximal spectrum is a deformation retract of the prime one, so it looks at least like basic homotopical concepts are not completely useless.
-What are some examples of these?
- -REPLY [9 votes]: Spectra of rings are not the right kind of spaces to understand via fundamental groups, higher homotopy groups, usual (co)homology groups, etc. This is easy to see already in the case of the Zariski affine line over a field. The only closed sets are finite... so what are the continuous maps from the circle, en route to computing the fundamental group? Well, there are tons; in particular, any function with finite fibers (no point has infinite inverse image) is continuous, so e.g. any permutation of the circle is a continuous map from the circle to the affine line over $\mathbb{C}$. This is clearly totally ridiculous and can't give us any useful information.
-The problem can be understood less concretely: schemes are not really topological spaces. In other words, it's of no interest, in general, to study continuous maps between the raw topological spaces of schemes. For instance, there is the theorem (maybe someone can remind me of the name attached to it) that every so-called spectral space, i.e. every sober $T_0$ quasicompact space with a basis of quasicompact opens closed under finite intersections, is homeomorphic to the spectrum of a ring. But all quasicompact schemes are spectral spaces, so for instance projective schemes are equivalent, as topological spaces, to affine ones; this approach is telling us absolutely nothing about the geometry.
-Since the late '60s, considerations like these, as well as many others, have led geometers from Grothendieck on down the generations to be somewhat skeptical of the conceptual value of the locally ringed approach to schemes. It's arguably clearer to take the "functor-of-points" approach, which has the advantage of avoiding inappropriate questions about topological spaces, since the latter no longer explicitly appear. This is also the only way to study algebraic spaces and stacks, useful higher abstractions in modern geometry.
-In any case, one does want analogues in geometry for the useful tools of topology; the problems in the first paragraphs just tell us that we need a less naive generalization. There is, for instance, an algebraic ("etale") fundamental group which uses the covering space perspective on the topological fundamental group as its starting point. So one tries to define a group of covering schemes over a given scheme, and in particular to directly generalize covering spaces. This works pretty well, and is related to the first algebraic analogue of singular cohomology (which has the same problems as the fundamental group, since it eventually depends on continuous maps from subspaces of real vector spaces), namely etale cohomology. There are now many other cohomology theories for varieties and schemes, which have many of the same properties as topological cohomology theories. Homotopy theory is harder to generalize, beyond the fundamental group, but it is possible, to some extent, by studying something like a whole complex (simplicial set) of schemes, and using the abstract homotopy theory of such complexes to understand how the schemes fit together. This is tied to the theory of motives, which has long been seen as one of the most ambitious goals of the whole modern approach to algebraic geometry, and which is still very much an active area of research.<|endoftext|>
-TITLE: An interesting Sum involving Binomial Coefficients
-QUESTION [9 upvotes]: How would you evaluate
-$$\sum_{k=1}^{n} k\binom{2n}{n+k}$$
-I tried using the Vandermonde identity but I can't seem to nail it down.
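-EDIT: A quick numerical experiment (a sketch in Python, standard library only) suggests the closed form $\frac{n}{2}\binom{2n}{n}$, which is what I would now like to prove:
-
-    from math import comb
-
-    for n in range(1, 9):
-        s = sum(k * comb(2 * n, n + k) for k in range(1, n + 1))
-        print(n, s, n * comb(2 * n, n) // 2)  # the last two columns agree
-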
- -REPLY [2 votes]: The following proof uses complex variable techniques and improves the -elementary one I posted earlier. It serves to demonstrate the method -even though it requires somewhat more of an effort. - -Suppose we seek to evaluate -$$\sum_{k=1}^n k {2n\choose n+k}.$$ -Introduce -$${2n\choose n+k} = -\frac{1}{2\pi i} -\int_{|z|=\epsilon} -\frac{1}{z^{n-k+1}} -\frac{1}{(1-z)^{n+k+1}} \; dz.$$ -Observe that this is zero when $k\gt n$ so we may extend -$k$ to infinity to obtain for the sum -$$\frac{1}{2\pi i} -\int_{|z|=\epsilon} -\frac{1}{z^{n+1}} -\frac{1}{(1-z)^{n+1}} -\sum_{k\ge 1} k \frac{z^k}{(1-z)^k} -\; dz -\\ = \frac{1}{2\pi i} -\int_{|z|=\epsilon} -\frac{1}{z^{n+1}} -\frac{1}{(1-z)^{n+1}} -\frac{z/(1-z)}{(1-z/(1-z))^2} -\; dz -\\ = \frac{1}{2\pi i} -\int_{|z|=\epsilon} -\frac{1}{z^{n}} -\frac{1}{(1-z)^{n}} -\frac{1}{(1-2z)^2} -\; dz.$$ -Now put $z(1-z)=w$ so that (observe that with $w=z+\cdots$ the image of $|z|=\epsilon$ with $\epsilon$ small is another closed circle-like contour which makes one turn and which we may certainly deform to obtain another circle $|w|=\gamma$) -$$z = \frac{1-\sqrt{1-4w}}{2} -\quad\text{and}\quad -(1-2z)^2 = 1-4w$$ -and furthermore -$$dz = -\frac{1}{2} -\times \frac{1}{2} \times (-4) \times (1-4w)^{-1/2} \; dw -= (1-4w)^{-1/2} \; dw$$ -to get for the integral -$$\frac{1}{2\pi i} -\int_{|w|=\gamma} -\frac{1}{w^n} \frac{1}{1-4w} -(1-4w)^{-1/2} \; dw -= \frac{1}{2\pi i} -\int_{|w|=\gamma} -\frac{1}{w^n} \frac{1}{(1-4w)^{3/2}} \; dw.$$ -This evaluates by inspection to -$$4^{n-1} {n-1+1/2\choose n-1} -= 4^{n-1} {n-1/2\choose n-1} -= \frac{4^{n-1}}{(n-1)!} -\prod_{q=0}^{n-2} (n-1/2-q) -\\ = \frac{2^{n-1}}{(n-1)!} -\prod_{q=0}^{n-2} (2n-2q-1) -= \frac{2^{n-1}}{(n-1)!} -\frac{(2n-1)!}{2^{n-1} (n-1)!} -\\ = \frac{n^2}{2n} {2n\choose n} -= \frac{1}{2} n {2n\choose n}.$$ -Here the mapping from $z=0$ to $w=0$ determines the choice of square -root. For the conditions on $\epsilon$ and $\gamma$ we have that for the series to converge we require $|z/(1-z)|\lt 1$ or $\epsilon/(1-\epsilon) \lt 1$ or $\epsilon \lt 1/2.$ The closest that the image contour of $|z|=\epsilon$ comes to the origin is $\epsilon-\epsilon^2$ so we choose $\gamma \lt \epsilon-\epsilon^2$ for example $\gamma = \epsilon^2-\epsilon^3.$ This also ensures that $\gamma \lt 1/4$ so $|w|=\gamma$ does not intersect the branch cut $[1/4,\infty)$ (and is contained in the image of $|z|=\epsilon$). For example $\epsilon = 1/3$ and $\gamma = 2/27$ will work.<|endoftext|> -TITLE: Does a weak homotopy equivalence induce an equivalence of categories on the fundamental groupoids? -QUESTION [5 upvotes]: Let $f\colon X\rightarrow Y$ be a weak homotopy equivalence. ($\pi_0(f)$ is a bijection and $\pi_n(f,x)$ is an isomorphism for all basepoints $x\in X$ and all $n$.) It induces a functor $\Pi(f)\colon\Pi(X)\rightarrow \Pi(Y)$ on the fundamental groupoids. -Is $\Pi(f)$ an equivalence of categories? - -REPLY [2 votes]: Lemma 1: If the morphism of groupoids $g:G\to H$ induces an isomorphism $G(x_0) \to H(g(x_0))$ for any object $x_0$, then $g$ is fully faithful restricted to each component of $G$. -Proof. Let $x_0,x_1$ be objects in the same component of $G$, so there exists a path $\kappa:x_0\rightsquigarrow x_1$. Then we have a bijection $r_\kappa: G(x_0) \to G(x_0,x_1)$ sending the loop $\lambda$ to the path $\lambda\kappa$. Likewise, there is a bijection $r_{g(\kappa)}: H(g(x_0)) \to H(g(x_0),g(x_1))$, and the equality $r_{g(\kappa)}g|_{G(x_0)} = g|_{G(x_0,x_1)}r_\kappa$ holds. 
Since $g|_{G(x_0)}$ is a bijection, so is $g|_{G(x_0,x_1)}$.
-The next lemma is immediate from the fact that the path components of a space correspond to the components of its fundamental groupoid.
-Lemma 2: If the map of spaces $f:X\to Y$ induces a bijection on $\pi_0$, then the induced morphism $\pi(f):\pi(X)\to\pi(Y)$ induces a bijection of components of the groupoids.
-Corollary 1: If $f:X\to Y$ is a weak homotopy equivalence, then $\pi(f)$ gives a bijection between the components of $\pi(X)$ and those of $\pi(Y)$ and is fully faithful on each component.
-Lemma 3: A morphism $g:G\to H$ of groupoids with the properties stated in Corollary 1 is an equivalence of groupoids.
-Proof. Since for any object $y$ of $H$ there is an object $x$ of $G$ mapped to the component of $y$, $g$ is essentially surjective. Also, if $x_0$ and $x_1$ lie in distinct components of $G$, then the components of $g(x_0)$ and $g(x_1)$ are distinct. And if $x_0$ and $x_1$ are in the same component, then we have a bijection from their arrow set to that of their images. That means $g$ is essentially surjective and fully faithful.
-Corollary 2: A weak homotopy equivalence induces an equivalence of fundamental groupoids.<|endoftext|>
-TITLE: Definition for Shimura datum
-QUESTION [6 upvotes]: The following definition of a $\textbf{Shimura datum}$ is taken from Wikipedia.
-Let $S=\mathrm{Res}_\mathbb{R}^\mathbb{C}G_m$ be the Weil restriction of the multiplicative group from the complex field $\mathbb{C}$ to the real field $\mathbb{R}$. A $\textbf{Shimura datum}$ is a pair $(G,X)$ consisting of a reductive algebraic group $G$ defined over the rational number field $\mathbb{Q}$ and a $G(\mathbb{R})$-conjugacy class $X$ of homomorphisms $h:S\rightarrow G_\mathbb{R}$ satisfying the following axioms:
-(i) The complexified Lie algebra of $G$ decomposes into a direct sum $\mathfrak{g}\bigotimes\mathbb{C}=\mathfrak{k}\bigoplus\mathfrak{p}^+\bigoplus\mathfrak{p}^-$, where for any $z\in S$, $h(z)$ acts trivially on the first summand and via $\frac{z}{\bar{z}}$ (respectively, $\frac{\bar{z}}{z}$) on the second (respectively, third) summand.
-(ii) The adjoint action of $h(i)$ induces a Cartan involution on the adjoint group of $G_\mathbb{R}$.
-(iii) The adjoint group of $G_\mathbb{R}$ does not admit a factor $H$ defined over $\mathbb{Q}$ such that the projection of $h$ on $H$ is trivial.
-A few points in this definition are not quite clear to me.
-(a) If $g\in G(\mathbb{R})$ and $h:S\rightarrow G_\mathbb{R}$, how does $g$ act on $h$? Is it given by $(g\cdot h)(z):=g^{-1}h(z)g$?
-(b) In (i), what does "$h(z)$ acts via $\frac{z}{\bar{z}}$" mean? Does $h(z)$ act as multiplication by $\frac{z}{\bar{z}}$?
-(c) In (iii), what is "a factor $H$"? Is $H$ a subgroup? Then what is "the projection of $h$ on $H$"?
-
-REPLY [3 votes]: (a) No, the action is by conjugation: if one has a homomorphism $\mathbb{S}\to G_\mathbb{R}$, one can conjugate it by any element of $G(\mathbb{R})$ to obtain another such homomorphism.
-(b) The map $\mathbb{S}\to G_\mathbb{R}$ induces a representation
-$$\mathbb{S}_\mathbb{C}\to \mathrm{GL}(\mathfrak{g}_\mathbb{C})$$
-in the obvious way. But, the group $\mathbb{S}_\mathbb{C}\cong \mathbb{G}_m\times \mathbb{G}_m$. Thus, we get a decomposition
-$$\mathfrak{g}_\mathbb{C}=\bigoplus_{p,q}V^{p,q}$$
-where $(z_1,z_2)$ in $\mathbb{C}^\times\times\mathbb{C}^\times$ acts on $V^{p,q}$ by $z_1^p z_2^q$. It's really just saying that it wants $V^{p,q}$ to be non-zero only for $(p,q)$ in $\{(-1,1),(0,0),(1,-1)\}$.
-If you want to read more about this, google 'Deligne torus' and 'Hodge structure'. If you'd like to hear more about the geometric reason for this assumption, let me know.
-(c) The group $G^\mathrm{ad}$, being adjoint, is actually isomorphic to a product of simple $\mathbb{Q}$-groups $G_1\times \cdots \times G_n$, where the $G_i$ are just the simple normal subgroups of $G^\mathrm{ad}$ (e.g. [Mil, §24.a]). Condition (c) is then just saying that the composition
-$$\mathbb{S}\overset{h}{\longrightarrow} G_\mathbb{R}\to G^\mathrm{ad}_\mathbb{R}\to (G_i)_{\mathbb{R}}$$
-is non-trivial for all $i$.
-[Mil] Milne, J.S., 2017. Algebraic groups: The theory of group schemes of finite type over a field (Vol. 170). Cambridge University Press.<|endoftext|>
-TITLE: Divide a line segment in the ratio $\sqrt{2}:\sqrt{3}.$
-QUESTION [9 upvotes]: "Divide a line segment in the ratio $\sqrt{2}:\sqrt{3}.$"
-I have got this problem in a book, but I have no idea how to solve it.
-Any help will be appreciated.
-
-REPLY [3 votes]: Draw an isosceles right triangle. Carry the hypotenuse onto a side. If the hypotenuse is $\sqrt2$, the new segment is $\sqrt3$.
-
-By Thales' theorem you can divide any segment in this ratio.<|endoftext|>
-TITLE: Definition for "relatively sequentially compact"
-QUESTION [6 upvotes]: A topological space $X$ is sequentially compact if every sequence in $X$ has a convergent subsequence.
-Let $X$ be a topological space and $A \subseteq X$.
-I've seen two definitions for $A$ to be relatively sequentially compact in $X$:
-
-the closure $\overline{A}$ of $A$ in $X$ is sequentially compact, which means that every sequence in $\overline{A}$ has a convergent subsequence (with limit in $\overline{A}$).
-every sequence in $A$ has a convergent subsequence with limit in $\overline{A}$.
-
-Clearly, 1 $\Rightarrow$ 2.
-Are these definitions equivalent? (I don't see how to reduce the sequence in $\overline{A}$ to a sequence in $A$, so it seems that they are not equivalent unless $X$ is something like a Fréchet-Urysohn space, in which points in the closure $\overline{A}$ can be approximated by sequences in $A$. Then we could try to perform a diagonal argument.) If they are not equivalent, what is the right definition for abstract topological spaces $X$?
-
-REPLY [2 votes]: There may be contexts where the first definition is appropriate, but it does
-seem somewhat pathological in that a sequentially compact subspace may not be
-relatively sequentially compact. In this respect it differs
-from the second definition.
-For example, take the Tychonoff plank
-$X = ([0, \omega_1] \times [0, \omega]) \setminus \{(\omega_1, \omega)\}$
-and the subspace $A = [0, \omega_1) \times [0, \omega]$.
-Then $A$ is sequentially compact, since any sequence is confined to a
-subspace $[0, \alpha] \times [0, \omega]$ with $\alpha < \omega_1$, which
-is compact and first countable. On the other hand $\overline{A} = X$,
-which is not sequentially compact since $\{ (\omega_1, n) \}_{n=0}^\infty$
-has no cluster point.
-To decide which is the most useful definition would probably involve looking
-at a large number of applications, which I don't have available. I could
-even imagine some use for a definition $1\frac12$: a subspace $A$ of a
-topological space $X$ is relatively sequentially compact if there is a
-sequentially compact $B \subset X$ such that $A \subset B$.
This is strictly
-weaker than definition 1 and stronger than definition 2, although I don't
-know if it is strictly stronger than definition 2.<|endoftext|>
-TITLE: Finding the maximum value of $ab+ac+ad+bc+bd+3cd$
-QUESTION [8 upvotes]: If $a,b,c,d>0$ satisfy the condition $a^{2}+b^{2}+c^{2}+d^{2}=1$, find the maximum value of $ab+ac+ad+bc+bd+3cd$.
-
-I'm not making progress on this inequality problem. Please help.
-Thank you.
-
-REPLY [2 votes]: Let $ab+ac+ad+bc+bd+3cd=k$.
-Hence, $k>0$ and $ab+ac+ad+bc+bd+3cd=k(a^2+b^2+c^2+d^2)$ or
-$ka^2-(b+c+d)a+k(b^2+c^2+d^2)-bc-bd-3cd=0$.
-Hence, $(b+c+d)^2-4k(k(b^2+c^2+d^2)-bc-bd-3cd)\geq0$ or
-$(4k^2-1)b^2-2(2k+1)(c+d)b+(4k^2-1)(c^2+d^2)-2(6k+1)cd\leq0$.
-If $0<k\leq\frac{1}{2}$, such $k$ is certainly not maximal (already $a=b=c=d=\frac{1}{2}$ gives the value $2$), so we may assume $k>\frac{1}{2}$. Hence, $(2k+1)^2(c+d)^2-(4k^2-1)\left((4k^2-1)(c^2+d^2)-2(6k+1)cd\right)\geq0$ or
-$(2k+1)(c+d)^2-(2k-1)\left((4k^2-1)(c^2+d^2)-2(6k+1)cd\right)\geq0$ or
-$(2k^2-k-1)c^2-(6k-1)cd+(2k^2-k-1)d^2\leq0$.
-If $\frac{1}{2}<k\leq1$, then $2k^2-k-1\leq0$ and the last inequality holds automatically, so such $k$ is again not maximal; we may assume $k>1$. Hence, $(6k-1)^2-4(2k^2-k-1)^2\geq0$, which gives $k\leq1+\frac{\sqrt5}{2}$.
-Easy to see that for $k=1+\frac{\sqrt5}{2}$ the equality indeed occurs.
-Id est, the answer is $1+\frac{\sqrt5}{2}$.<|endoftext|>
-TITLE: $P(X) = X^6 - 11X^4 + 36X^2 - 36$ has a root in $\mathbb{Q}_p$ for every $p$
-QUESTION [6 upvotes]: Problem: Prove that $P(X) = X^6 - 11X^4 + 36X^2 - 36$ has a root in $\mathbb{R}$, has no roots in $\mathbb{Q}$, but has a root in $\mathbb{Q}_p$ for every $p$.
-What I have done: I think this is actually false. We can find this factorization: $P(X) = (X^2 - 2)(X^2 - 3)(X^2 - 6)$. So we deduce that there are 6 different roots in $\mathbb{R}$, and there are no roots in $\mathbb{Q}$. For $p \not= 2,3$, combining Hensel's Lemma and the multiplicativity of the Legendre symbol, we can say that there is a root in $\mathbb{Q}_p$.
-But for $p = 3$ we can't find $\sqrt{3}$ and $\sqrt{6}$ in $\mathbb{Q}_3$ because they would have to have absolute value $|\sqrt{3}|_3 = |\sqrt{6}|_3 = 3^{-1/2}$, which is not possible because, as sets, $|\mathbb{Q}_p|_p = |\mathbb{Q}|_p$. A root of $(X^2 - 2)$ in $\mathbb{Q}_3$ would have to be in $\mathbb{Z}_3$ because $\mathbb{Q}_3$ is the field of fractions of $\mathbb{Z}_3$, which is a DVR and hence integrally closed (is that true?). But we can't solve $a_0^2 \equiv 2 \pmod 3$.
-$\mathbb{Q}_2$ is almost the same because we must find a root for $(X^2 - 3)$, but $3 \not \equiv 1 \pmod 8$.
-Did I make any mistakes or is this problem just wrong?
-
-REPLY [3 votes]: Here is a valid counterexample for the Hasse principle of this sort. Take
-$$
-f(x)=(x^2-2)(x^2-17)(x^2-34)=0.
-$$
-It has a real solution, but no rational one; and it has a solution in all completions of $\mathbb{Q}$. For $p\neq 2,17$ this goes as before with Hensel's lemma; and for $p=2$ we have now $17\equiv 1 \bmod 8$, so that $17$ is a $2$-adic square. For $p=17$ we have $6^2\equiv 2\bmod 17$, so that $f(6)\equiv 0\bmod 17$, but $f'(6)\equiv 14\not\equiv 0\bmod 17$, as required.
-So $f$ does not have a rational solution even though it has a solution in all $p$-adic fields, for $p$ prime and $p=\infty$.
-In your example, for $p=3$ we would need an $n$ with $f(n)\equiv 0\bmod 3$, but $f'(n)\not\equiv 0\bmod 3$.<|endoftext|>
-TITLE: Does a lattice in $SL_n(\mathbb R)$ which is contained in $SL_n(\mathbb Z)$ have finite index in $SL_n(\mathbb Z)$?
-QUESTION [5 upvotes]: A lattice $H$ in a locally compact group $G$ is a discrete subgroup such that the coset space $G/H$ admits a finite $G$-invariant measure.
- -I have read in several places that any lattice $H$ in $SL_n(\mathbb{R})$ which is contained in $SL_n(\mathbb{Z})$ must have finite index in $SL_n(\mathbb{Z})$. But I have been unable to prove this.
-I have tried using the correspondence between the Haar measure on $SL_n(\mathbb{R})$ and the counting measure on $SL_n(\mathbb{Z})$, where we can partition $SL_n(\mathbb{R})$ into sets each containing one element of $SL_n(\mathbb{Z})$, and then normalize so that each of these has measure one. But this seemed to lead nowhere. Also, just restricting the measure on $SL_n(\mathbb{R})/H$ to $SL_n(\mathbb{Z})/H$ does not work either, since the latter has measure zero.
-Thanks a lot to the ones who will answer.
-
-REPLY [3 votes]: First, for $H\subset \Gamma\subset G$ with $G$ unimodular, $\Gamma$ discrete, fixing a Haar measure on $G$, there is a unique $G$-invariant measure on $\Gamma\backslash G$ such that
-$$
-\int_{\Gamma\backslash G} \sum_{\gamma\in \Gamma} \varphi(\gamma\cdot g)\;dg \;=\; \int_G \varphi(g)\;dg
-$$
-for all $\varphi\in C^o_c(G)$. Suppose $\Gamma\backslash G$ has finite measure. Similarly, by the same general uniqueness results, there is a unique measure on $H\backslash G$ such that
-$$
-\int_{\Gamma\backslash G} \sum_{\gamma\in H\backslash \Gamma} \varphi(\gamma g)\;dg
-\;=\; \int_{H\backslash G} \varphi(g)\;dg
-$$
-for all $\varphi\in C^o_c(H\backslash G)$. This set-up answers most questions about the trio $H\subset \Gamma \subset G$. For example, yes, if $H\backslash G$ has finite volume, then $H$ must be of finite index in $\Gamma$, or else the sum over $H\backslash \Gamma$ is infinite...<|endoftext|>
-TITLE: How to prove that $\int_0^\infty\frac{\left(x^2+x+\frac{1}{12}\right)e^{-x}}{\left(x^2+x+\frac{3}{4}\right)^3\sqrt{x}}\ dx=\frac{2\sqrt{\pi}}{9}$?
-QUESTION [17 upvotes]: A friend gave me this integral as a challenge
-$$
-\int_0^\infty\frac{\left(x^2+x+\frac{1}{12}\right)e^{-x}}{\left(x^2+x+\frac{3}{4}\right)^3\sqrt{x}}\ dx=\frac{2\sqrt{\pi}}{9}.
-$$
-This integral can be written in the equivalent form
-$$
-\int_0^\infty\frac{x^4+x^2+\frac{1}{12}}{\left(x^4+x^2+\frac{3}{4}\right)^3}e^{-x^2}\ dx=\frac{\sqrt{\pi}}{9}.
-$$
-I don't know how to prove this. I checked it numerically and it appears to be correct with 1000 digit accuracy.
-I tried several approaches. It seems this integral could be tackled by contour integration, but so far I was unable to find a suitable contour. I also tried substitution, however with no luck.
-Does anybody know how to calculate this integral?
-
-REPLY [14 votes]: A Generalisation of the Integral:
-Given a fixed $n\in\mathbb{N}$, let $A$, $P$, $Q$ be polynomials satisfying the following conditions:
-
-1. $A(x)=Q(x)^{n+1}-2xP(x)Q(x)+P'(x)Q(x)-nP(x)Q'(x)$
-2. $\deg A=\deg Q$
-3. $P(0)=0, \ Q(0)\neq 0$
-4. $A(x)Q(x)^{-(n+1)}=A(-x)Q(-x)^{-(n+1)}$
-
-Then the integral of $e^{-x^2}A(x)Q(x)^{-(n+1)}$ over $\mathbb{R}^+$ can be computed as follows.
-\begin{align}
-\int^\infty_0 e^{-x^2}\frac{A(x)}{Q(x)^{n+1}}\ dx
-&=\int^\infty_0 e^{-x^2}\left(1+\frac{-2xP(x)Q(x)+P'(x)Q(x)-nP(x)Q'(x)}{Q(x)^{n+1}}\right)\ dx\\
-&=\frac{\sqrt{\pi}}{2}+\int^\infty_0\frac{((e^{-x^2})P(x))'Q(x)^n-e^{-x^2}P(x)(Q(x)^n)'}{Q(x)^{2n}}\ dx\\
-&=\frac{\sqrt{\pi}}{2}+\left[e^{-x^2}\frac{P(x)}{Q(x)^n}\right]^\infty_0\\
-&=\frac{\sqrt{\pi}}{2}
-\end{align}
-This means that as long as we can find polynomials $A$, $P$, $Q$ that satisfy all these conditions, we will be able to "construct" similar integrals to the one posted in the question (at least in principle).
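-As a sanity check before going further, the identity from the question itself can be verified numerically (a sketch in Python; it assumes the mpmath library):
-
-    from mpmath import mp, mpf, quad, exp, sqrt, pi, inf
-
-    mp.dps = 30
-
-    f = lambda x: exp(-x**2) * (x**4 + x**2 + mpf(1)/12) / (x**4 + x**2 + mpf(3)/4)**3
-    print(quad(f, [0, inf]))  # 0.196939316767...
-    print(sqrt(pi) / 9)       # same digits
-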
- -Two Useful Facts: -Before we proceed to determine suitable $A$, $P$ and $Q$, we first prove the following facts: - -$\text{Fact 1}$: $$\deg P=n\deg Q-1\tag{*}$$ - -To deduce this fact, observe that the polynomials $Q(x)^{n+1}$, $-2xP(x)Q(x)$, $P'(x)Q(x)$ and $-nP(x)Q'(x)$ have degrees $(n+1)\deg Q$, $\deg P+\deg Q+1$, $\deg P+\deg Q-1$ and $\deg P+\deg Q-1$ respectively. In order for the sum of these polynomials to have degree $\deg Q$, for all $\deg Q < j\leq\max((n+1)\deg Q, \deg P+\deg Q+1)$, the coefficients of the terms $x^j$ in each of these four polynomials have to add up to equal $0$. This requires -$$\max((n+1)\deg Q, \deg P+\deg Q+1)=\min((n+1)\deg Q, \deg P+\deg Q+1)$$ -and the desired result follows. - -$\text{Fact 2}$: $$P(x)Q(x)^{-n}=-P(-x)Q(-x)^{-n}\tag{**}$$ - -This follows from Condition 4. Since $A(x)Q(x)^{-(n+1)}$ is even, $A(x)Q(x)^{-(n+1)}-1$ must also be even. But $A(x)Q(x)^{-(n+1)}-1=e^{x^2}(e^{-x^2}P(x)Q(x)^{-n})'$, so $P(x)Q(x)^{-n}$ must be odd. -Note that $(^{**})$ also implies Condition 3. - -A Simple Example: $n=1$, $\deg Q=2$ -Let $n=1$ and $Q(x)=x^2+c\ $ with $c\neq 0$. By $(^{*})$ and $(^{**})$, $P$ is an odd polynomial of degree $1$, i.e. $P(x)=kx$ for some constant $k$. By Condition 1, -\begin{align} -A(x) -&=(x^2+c)^2-2kx^2(x^2+c)+k(x^2+c)-2kx^2\\ -&=(1-2k)x^4+(2c(1-k)-k)x^2+c(k+c) -\end{align} -Since $\deg A=2$, $k=\frac{1}{2}$ and $c\neq\frac{1}{2}$. Thus -$$A(x)=\left(c-\frac{1}{2}\right)x^2+c\left(c+\frac{1}{2}\right)$$ -and we obtain the identity, for $c\in\mathbb{R}^+\setminus\{\frac{1}{2}\}$ -$$\int^\infty_0e^{-x^2}\cdot\frac{x^2+\frac{c(2c+1)}{2c-1}}{(x^2+c)^2}\ dx=\frac{\sqrt{\pi}}{2c-1}$$ - -The Case in Question: $n=2$, $\deg Q=4$ -We follow the exact same procedure outlined above. In this case $n=2$ and $Q(x)=x^4+px^2+q$. Then $P(x)=rx^7+sx^5+tx^3+ux$. Applying Condition 1 and noting that $\deg A=4$ (i.e. the coefficients of $x^{12}$, $x^{10}$, $x^8$, $x^6$ are all $0$) , -\begin{align} -A(x) -&=(x^4+px^2+q)^3-2x(rx^7+sx^5+tx^3+ux)(x^4+px^2+q)\\ -&\ \ \ \ \ +(7rx^6+5sx^4+3tx^2+u)(x^4+px^2+q)-2(rx^7+sx^5+tx^3+ux)(4x^3+2px)\\ -&=(p(3pq-t-2u)+q(3q+5s-2t)-7u)x^4+(3p(q^2-u)+q(3t-2u))x^2+q(q^2+u) -\end{align} -where -\begin{align} -&\ \ \ \ \ 1-2r=3p-2s-r(1+2p)=p(3p-2s)+r(3p-2q)+3q-3s-2t\\ -&=p(p^2+6q+s-2t)+q(7r-2s)-5t-2u=0 -\end{align} -After some algebra, we may express $r,s,t,u$ in terms of the free variables $p,q$. -$$(r,s,t,u)=\left(\frac{1}{2},\frac{4p-1}{4},\frac{4p^2-4p+8q+3}{8},\frac{-4p^2+16pq+12p-8q-15}{16}\right)$$ -Therefore -\begin{align} -A(x) -&=\tfrac{12p^2+16q^2-16pq-60p+24q+105}{16}x^4+\tfrac{12p^3-16p^2q+16pq^2-36p^2+64q^2-24pq+45p+48q}{16}x^2\\ -&\ \ \ \ \ +\tfrac{16q^3-4p^2q+16pq^2-8q^2+12pq-15q}{16} -\end{align} -This yields, for $p,q,s,t,u\neq 0$, -$$\small{\int^\infty_0e^{-x^2}\tiny{\frac{(12p^2+16q^2-16pq-60p+24q+105)x^4+(12p^3-16p^2q+16pq^2-36p^2+64q^2-24pq+45p+48q)x^2+(16q^3-4p^2q+16pq^2-8q^2+12pq-15q)}{16(x^4+px^2+q)^3}}\ dx=\frac{\sqrt{\pi}}{2}}$$ -If $p=1$ and $q=\frac34$, this integral reduces to the one posted in the question.<|endoftext|> -TITLE: Digits of $\pi$ using Integer Arithmetic -QUESTION [6 upvotes]: How can I compute the first few decimal digits of $\pi$ using only integer arithmetic? By 'integer arithmetic' I mean the operations of addition, subtraction, and multiplication with both operands as integers, integer division, and exponentiation with a positive integer exponent. The first hundred decimal digits or so would be sufficient if the method is not a completely general one. 
-By 'compute', I mean that I would like to obtain subsequent digits of $\pi$ one-by-one, printing them to the screen as I go along.
-(Context: I'm writing a Befunge-98 program...)
-
-REPLY [2 votes]: The (or rather a) spigot algorithm for $\pi$ does exactly that: extract digits of $\pi$ one by one based entirely on integer arithmetic. See this paper.
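-For concreteness, here is an unbounded spigot of this kind (Gibbons' streaming variant, sketched in Python rather than Befunge; it uses nothing beyond integer addition, subtraction, multiplication, and integer division):
-
-    def pi_digits():
-        # streams the decimal digits of pi; the state is six unbounded integers
-        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
-        while True:
-            if 4 * q + r - t < n * t:
-                yield n  # the next digit is now certain and safe to emit
-                q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
-            else:
-                q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
-                                    (q * (7 * k + 2) + r * l) // (t * l), l + 2)
-
-    gen = pi_digits()
-    print([next(gen) for _ in range(10)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
-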
<|endoftext|>
-TITLE: Can one find a closed form solution to $\ln x=\frac{1}{x}$?
-QUESTION [5 upvotes]: Is there a closed form solution of the equation $\ln x=\frac{1}{x}$? I couldn't find a proof myself and I don't know any theorems that say when a closed form solution exists.
-
-REPLY [5 votes]: The real solution is:
-$$\ln(x)=\frac{1}{x}\Longleftrightarrow$$
-$$x\ln(x)=1\Longleftrightarrow$$
-$$e^{x\ln(x)}=e^{1}\Longleftrightarrow$$
-$$x^x=e\Longleftrightarrow$$
-$$x=e^{\text{W}(1)}$$
-where $\text{W}(z)$ is the product log (Lambert $W$) function.<|endoftext|>
-TITLE: What's the use of quadratic forms?
-QUESTION [14 upvotes]: Starting with the abstract concept of a vector space, I can see why we'd want to add some structure to be able to perform useful operations. For instance, if we add a metric/norm to a vector space we can talk about distances. If we add an inner product to a vector space we can talk about angles. These two operations also give us a bunch of inequalities (like Cauchy-Schwarz) as well.
-But I don't see the point of equipping our vector space with some degree two polynomial. What does that get us? Is there some geometric meaning to it (like how we got distance and angle from norm and inner product)?
-
-REPLY [2 votes]: Another place where bilinear and quadratic forms appear naturally and have geometric meaning is in algebraic and differential topology. If $M$ is a compact connected oriented $2n$ dimensional manifold, the wedge product induces a bilinear form on the vector space $H_{\mathrm{dR}}^n(M)$ of $n$-dimensional de Rham cohomology classes on $M$. Using Poincaré duality, one can interpret this bilinear form as computing generic signed intersections between $n$ dimensional submanifolds of $M$.
-I can demonstrate this intuitively for the case where $M$ is a two-dimensional torus. Consider the following image (taken from the Wolfram MathWorld page on homology intersection):
-
-For the torus, $H_{\mathrm{dR}}^1(M)$ is a two dimensional real vector space and we can choose a basis $\mathcal{B} = (v_1, v_2)$ for $H_{\mathrm{dR}}^1(M)$ under which $v_1$ corresponds to the blue circle in the picture and $v_2$ corresponds to the red circle. On $H_{\mathrm{dR}}^1(M)$ we have a bilinear form $g$ which encodes the intersections between the circles. We have $g(v_1, v_1) = 0$, which corresponds to the fact that if we perturb the blue circle a little, the intersection between the original blue circle and the perturbed circle is zero. Similarly, we have $g(v_2, v_2) = 0$. However, we (can choose the basis so that we) have $g(v_1, v_2) = 1$, which corresponds to the fact that the blue circle and the red circle intersect at a single point (counted with a plus sign), and even if we perturb them a little, they will still generically intersect at a single point. If we change the order of $v_1$ and $v_2$, this doesn't affect the geometric intersection but does change the sign, and so $g(v_2, v_1) = -1$. Thus, the intersection form $g$ is a bilinear form on $H_{\mathrm{dR}}^1(M)$ represented by the matrix
-$$ [g]_{\mathcal{B}} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. $$
-In this case, $g$ is an anti-symmetric bilinear form, but if the dimension of $M$ is divisible by $4$, it will be a symmetric bilinear form (corresponding to a quadratic form). The form $g$ encodes information about submanifolds sitting inside $M$ and is an important invariant of $M$. It can be used to show that two manifolds are not homeomorphic by showing that the corresponding bilinear forms are not equivalent. You can read a bit more about this here.<|endoftext|>
-TITLE: Show $a^2 + b^2 + 1 \equiv 0 \mod p$ always has a solution if $p = 4k+3$
-QUESTION [7 upvotes]: If $p = 4k+3$ is a prime number (so $p = 7,11,19$ but not $p = 5,13$ or $p = 15$) then there are numbers $a,b$ such that:
-$$a^2 + b^2 + 1 \equiv 0 \mod p$$
-For example $2^2 + 3^2 + 1 = 14 = 7 \times 2 \equiv 0 \mod 7$.
-My reasoning was that in the finite field $\mathbb{F}_{p^2}$, $-1$ is always a perfect square
-
-if $p = 4k+1$ then $-1$ is a perfect square in $\mathbb{F}_p$
-if $p = 4k+3$ then $x^2 +1$ is irreducible, so that $\mathbb{F}_p[x]/(x^2+1) \simeq \mathbb{F}_{p^2}$
-
-I remember this as $i = \sqrt{-1} \in \mathbb{F}_p$ if $p = 4k+1$ and $i = \sqrt{-1} \in \mathbb{F}_{p^2}$ if $p = 4k+3$.
-Then somehow I made the magical conclusion that we always had a solution of $z \overline{z} = -1$ with $z = a+bx$:
-$$z \overline{z} = (a+bx)(a-bx) = a^2 + b^2 = -1 $$
-This does not logically follow, but I made a leap of intuition. What is the correct logic here?
-This is a variant of Fermat's theorem that $p = 4k+1$ can be written as the sum of two squares: $p = a^2 + b^2$, but that is not a congruence. That is an equality involving whole numbers $a,b \in \mathbb{Z}$.
-
-REPLY [2 votes]: The argument given by user7530 is what I would recommend as well. It gets everything done inside $\Bbb{F}_p$.
-Your intuitive leap can also be justified. The mapping $z\mapsto z\overline{z}$ is known as the relative norm map. We can view it differently. The Galois theory of extensions of finite fields tells us that the non-trivial $\Bbb{F}_p$-automorphism of $\Bbb{F}_{p^2}$ is the Frobenius mapping $z\mapsto z^p$. So we know $N(z)=z\overline{z}=z^{p+1}$ for all $z\in\Bbb{F}_{p^2}$.
-The key to success is to recall the fact that the multiplicative groups $\Bbb{F}_{p^2}^*$ and $\Bbb{F}_{p}^*$ are cyclic of respective orders $p^2-1$ and $p-1$. The basic facts about cyclic groups tell us that raising to the power $p+1$ is a surjective homomorphism from the former group onto the latter. Therefore any element of $\Bbb{F}_p$ is the norm of some element of $\Bbb{F}_{p^2}$, the element $-1$ in particular.
-The assumption $p\equiv3\pmod4$ is used as it gives us the description $\Bbb{F}_{p^2}=\Bbb{F}_p[i]$, where $i^2=-1$. User7530's argument is immune to that detail.<|endoftext|>
-TITLE: Improper integral of $\log x \operatorname{sech} x$
-QUESTION [7 upvotes]: How to prove the following?
-$$ \int_0^\infty \log x \operatorname{sech}x\,dx = \frac{\pi}{2} \log\left( \frac{4\pi^3}{\Gamma(1/4)^4} \right)
-$$
-I obtained the right side with a CAS. It seems like this function has many poles on the imaginary axis, so a simple contour integral cannot be used. I also tried the following
-$$\begin{align*}
-\int_0^\infty \log x \operatorname{sech} x\, dx
-&= \int_0^\infty \operatorname{sech} x \left. \frac{\partial}{\partial s} x^s \right|_{s=0} dx \\
-&= \left. \frac{\partial}{\partial s} \int_0^\infty x^s \operatorname{sech} x\,dx \right|_{s=0}
-\end{align*}$$
-However, the last integral is very difficult to evaluate and contains terms of $\zeta$ functions.
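-EDIT: For reference, a direct numerical check of the claimed closed form (a sketch in Python, assuming the mpmath library):
-
-    from mpmath import mp, mpf, quad, log, sech, gamma, pi, inf
-
-    mp.dps = 30
-
-    lhs = quad(lambda x: log(x) * sech(x), [0, 1, inf])  # split at 1: log singularity at 0
-    rhs = pi / 2 * log(4 * pi ** 3 / gamma(mpf(1) / 4) ** 4)
-    print(lhs)  # -0.5208...
-    print(rhs)  # matches to working precision
-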
- -REPLY [3 votes]: This direct proof was discovered by Abdulhafeez Ayinde Abdulsalam (see https://arxiv.org/abs/2203.02675). -Let $\displaystyle \Delta\left(a\right) = \int_0^{\infty} \frac{\log\left(x^2 + a^2\right)}{\cosh\left(\pi x\right)} \, \mathrm{d}x$. -Then -\begin{align*} -\Delta\left(a\right) &= \int_0^{\infty} \frac{\log\left(\left|a\right| - ix\right)}{\cosh\left(\pi x\right)} \, \mathrm{d}x + \int_0^{\infty} \frac{\log\left(\left|a\right| + ix\right)}{\cosh\left(\pi x\right)} \, \mathrm{d}x -\end{align*} -where $i=\sqrt{-1}$. -\begin{align*} -\Delta\left(a\right) &= 2\int_0^{\infty} \frac{\log\left(\left|a\right| - ix\right)}{e^{-2 \pi x} + 1} e^{-\pi x} \, \mathrm{d}x + 2\int_0^{\infty} \frac{\log\left(\left|a\right| + ix\right)}{e^{-2 \pi x} + 1} e^{-\pi x} \, \mathrm{d}x -\\&= -\frac{2}{\pi}\int_0^{\infty} \log\left(\left|a\right| - ix\right) \,\, \mathrm{d}\left(\arctan\left(e^{-\pi x}\right)\right) - \frac{2}{\pi}\int_0^{\infty} \log\left(\left|a\right| + ix\right) \,\, \mathrm{d}\left(\arctan\left(e^{-\pi x}\right)\right) -\\&=\frac{-2i}{\pi}\int_0^{\infty} \frac{\arctan\left(e^{-\pi x}\right)}{\left|a\right| - ix}\, \mathrm{d}x + \frac{2i}{\pi}\int_0^{\infty} \frac{\arctan\left(e^{-\pi x}\right)}{\left|a\right| + ix} \, \mathrm{d}x + \ln{a} -\end{align*} -Therefore -\begin{align*} -\Delta\left(a\right) - \ln{a} &= \frac{-2i}{\pi}\int_0^{\infty} \arctan\left(e^{-\pi x}\right) \int_0^{\infty} e^{-t\left(\left|a\right| - ix\right)} \,\, \mathrm{d}t \, \mathrm{d}x -\\&\quad+ \frac{2i}{\pi}\int_0^{\infty} \arctan\left(e^{-\pi x}\right) \int_0^{\infty} e^{-t\left(\left|a\right| + ix\right)} \,\, \mathrm{d}t \, \mathrm{d}x -\\&= \frac{-2i}{\pi}\int_0^{\infty}e^{-\left|a\right|t}\int_0^{\infty} e^{itx} \arctan\left(e^{-\pi x}\right)\mathrm{d}x \,\, \mathrm{d}t -\\&\quad+\frac{2i}{\pi}\int_0^{\infty}e^{-\left|a\right|t}\int_0^{\infty} e^{-itx} \arctan\left(e^{-\pi x}\right)\mathrm{d}x \,\, \mathrm{d}t -\\&= \frac{-2i}{\pi}\int_0^{\infty}e^{-\left|a\right|t}\int_0^{\infty} \left(e^{itx} - e^{-itx}\right) \arctan\left(e^{-\pi x}\right)\mathrm{d}x \,\, \mathrm{d}t -\end{align*} -By Euler's formula, -$$e^{itx} - e^{-itx} = 2i\sin{\left(tx\right)}.$$ -Therefore -\begin{align*} -\Delta\left(a\right) - \ln{a} &= \frac{4}{\pi}\int_0^{\infty}e^{-\left|a\right|t}\int_0^{\infty} \sin{\left(tx\right)} \arctan\left(e^{-\pi x}\right)\mathrm{d}x \,\, \mathrm{d}t -\\&= \frac{4}{\pi}\int_0^{\infty}e^{-\left|a\right|t}\int_0^{\infty} \mathrm{d}\left(\frac{-\cos{\left(tx\right)}}{t}\right) \arctan\left(e^{-\pi x}\right) \,\, \mathrm{d}t -\\&= \frac{4}{\pi}\int_0^{\infty}e^{-\left|a\right|t}\left( \frac{\pi}{4t} - \frac{\pi}{2t}\int_0^{\infty}\frac{\cos{\left(tx\right)}}{\cosh\left(\pi x\right)} \, \mathrm{d}x\right) \,\, \mathrm{d}t -\\&= \frac{4}{\pi}\int_0^{\infty}e^{-\left|a\right|t}\left( \frac{\pi}{4t} - \frac{1}{2t}\int_0^{\infty}\frac{\cos{\left(\frac{tx}{\pi}\right)}}{\cosh\left(x\right)}\, \mathrm{d}x\right) \,\, \mathrm{d}t -\\&= \frac{4}{\pi}\int_0^{\infty} e^{-\left|a\right|t} \left(\frac{\pi}{4t} - \frac{\pi}{4t}\mathrm{sech}\left(\frac{t}{2}\right)\right) \,\, \mathrm{d}t -\\&= \int_0^{\infty} e^{-\left|a\right|t}\left(\frac{1}{t} - \frac{1}{t}\mathrm{sech}\left(\frac{t}{2}\right)\right) \,\, \mathrm{d}t -\\&= \int_0^{\infty} \left(\frac{e^{-\left|a\right|t}}{t} - \frac{2e^{-\left(\left|a\right| + \frac{1}{2}\right)t}}{t\left(1 + e^{-t}\right)} \right)\,\, \mathrm{d}t {\stackrel{\,\,t \rightarrow 2t}{=}} \int_0^{\infty} \left(\frac{e^{-2\left|a\right|t}}{t} - 
\frac{2e^{-\left(2\left|a\right| + 1\right)t}}{t\left(1 + e^{-2t}\right)} \right)\,\, \mathrm{d}t -\\&{\stackrel{z \rightarrow e^{-t}}{=}} -\int_0^{1} \left(z^{2\left|a\right|} - \frac{2z^{2\left|a\right|+1}}{1 + z^2} \right)\,\, \frac{\mathrm{d}z}{z\ln{z}} = -\int_0^{1} \frac{z^{2\left|a\right|}}{\ln{z}}\left(\frac{1}{z} - \frac{2}{1 + z^2}\right)\,\, \mathrm{d}z -\\&= -\int_0^{1} \frac{z^{2\left|a\right|}}{\ln{z}}\left(\frac{1 + z^2 - 2z}{z\left(1 + z^2\right)}\right)\,\, \mathrm{d}z = -\int_0^{1} \frac{z^{2\left|a\right|}}{\ln{z}}\left(\frac{\left(1 - z\right)^2}{z\left(1 + z^2\right)}\right)\,\, \mathrm{d}z -\\&= \int_0^{1} \frac{z^{2\left|a\right| - 1}\left(1 - z\right)}{1 + z^2}\int_0^1 z^p \mathrm{d}p\,\, \mathrm{d}z = \int_0^{1}\int_0^1 \frac{z^{2\left|a\right| + p - 1}\left(1 - z\right)}{1 + z^2} \mathrm{d}p\,\, \mathrm{d}z -\\&= \int_0^{1}\int_0^1 \sum_{k=0}^{\infty} \left(-1\right)^k z^{2\left|a\right| + p + 2k - 1}\left(1 - z\right) \mathrm{d}z\,\, \mathrm{d}p \\&= \int_0^{1} \sum_{k=0}^{\infty} \left(-1\right)^k \int_0^1 z^{2\left|a\right| + p + 2k - 1}\left(1 - z\right) \mathrm{d}z\,\, \mathrm{d}p -\\&= \int_0^{1} \sum_{k=0}^{\infty} \left(-1\right)^k \left(\frac{1}{2\left|a\right| + p + 2k} - \frac{1}{2\left|a\right| + p + 2k + 1}\right)\, \mathrm{d}p -\\&= \frac{1}{2}\int_0^{1} \sum_{k=0}^{\infty} \left(-1\right)^k \left(\frac{1}{k + \frac{2\left|a\right| + p}{2}} - \frac{1}{k + \frac{2\left|a\right| + p + 1}{2}}\right)\, \mathrm{d}p -\\&= -\frac{1}{4}\int_0^{1} \left(\psi_0\left(\frac{2\left|a\right| + p}{4}\right) - \psi_0\left(\frac{2\left|a\right| + p}{4} + \frac{1}{2}\right) \right. -\\&\qquad\qquad\qquad\left.- \psi_0\left(\frac{2\left|a\right| + p + 1}{4}\right) + \psi_0\left(\frac{2\left|a\right| + p + 1}{4} + \frac{1}{2}\right)\right) \mathrm{d}p -\\&= -\log\left(\frac{\Gamma\left(\frac{2\left|a\right| + p}{4}\right)\Gamma\left(\frac{2\left|a\right| + p+ 1}{4} + \frac{1}{2}\right)}{\Gamma\left(\frac{2\left|a\right| + p}{4} + \frac{1}{2}\right)\Gamma\left(\frac{2\left|a\right| + p + 1}{4}\right)}\right)\biggr\vert_0^1 -\\&= - \log\left(\frac{\Gamma\left(\frac{2\left|a\right| + 1}{4}\right)\Gamma\left(\frac{2\left|a\right| + 2}{4} + \frac{1}{2}\right)}{\Gamma\left(\frac{2\left|a\right| + 1}{4} + \frac{1}{2}\right)\Gamma\left(\frac{2\left|a\right| + 2}{4}\right)}\right) + \log\left(\frac{\Gamma\left(\frac{2\left|a\right|}{4}\right)\Gamma\left(\frac{2\left|a\right| + 1}{4} + \frac{1}{2}\right)}{\Gamma\left(\frac{2\left|a\right|}{4} + \frac{1}{2}\right)\Gamma\left(\frac{2\left|a\right| + 1}{4}\right)}\right) -\\&= - \log\left(\frac{\Gamma\left(\frac{2\left|a\right| + 1}{4}\right)^2\Gamma\left(\frac{2\left|a\right| + 2}{4} + \frac{1}{2}\right)}{\Gamma\left(\frac{2\left|a\right| + 1}{4} + \frac{1}{2}\right)^2 \Gamma\left(\frac{2\left|a\right| + 2}{4}\right)}\right) + \log\left(\frac{\Gamma\left(\frac{2\left|a\right|}{4}\right)}{\Gamma\left(\frac{2\left|a\right|}{4} + \frac{1}{2}\right)}\right) -\\&= -\log\left(\frac{\Gamma\left(\frac{2\left|a\right| + 1}{4}\right)^2\Gamma\left(\frac{\left|a\right|}{2} + 1 \right)}{\Gamma\left(\frac{2\left|a\right| + 1}{4} + \frac{1}{2}\right)^2 \Gamma\left(\frac{2\left|a\right| + 2}{4}\right)}\right) + \log\left(\frac{\Gamma\left(\frac{\left|a\right|}{2}\right)}{\Gamma\left(\frac{2\left|a\right| + 2}{4}\right)}\right) -\\&= -\log\left(\frac{\left|a\right|\Gamma\left(\frac{2\left|a\right| + 1}{4}\right)^2}{2\Gamma\left(\frac{2\left|a\right| + 1}{4} + \frac{1}{2}\right)^2}\right) = 2\log\left(\frac{\sqrt{2}\Gamma\left(\frac{2\left|a\right| 
+ 1}{4} + \frac{1}{2}\right)}{\sqrt{\left|a\right|}\Gamma\left(\frac{2\left|a\right| + 1}{4}\right)}\right).
-\end{align*}
-Hence
-\begin{align}
-\Delta\left(a\right) &= 2\log\left(\frac{\sqrt{2}\Gamma\left(\frac{2\left|a\right| + 1}{4} + \frac{1}{2}\right)}{\sqrt{\left|a\right|}\Gamma\left(\frac{2\left|a\right| + 1}{4}\right)}\right) + \ln{\left|a\right|}
-\\&= 2\log\left(\frac{\sqrt{2}\Gamma\left(\frac{2\left|a\right| + 1}{4} + \frac{1}{2}\right)}{\Gamma\left(\frac{2\left|a\right| + 1}{4}\right)}\right).\tag{1}\label{rzq}
-\end{align}
-Taking the limit of \eqref{rzq} as $a \to 0$, we have
-$$\int_0^{\infty} \frac{\log\left(x\right)}{\cosh\left(\pi x\right)} \, \mathrm{d}x = \log\left(\frac{\sqrt{2}\Gamma\left(\frac{3}{4}\right)}{\Gamma\left(\frac{1}{4}\right)}\right),$$
-which implies
-$$ \int_0^{\infty} \ln{x}\,\mathrm{sech}\left(\pi x\right)\mathrm{d}x= \log\left(\frac{\sqrt{2}\Gamma\left(\frac{3}{4}\right)}{\Gamma\left(\frac{1}{4}\right)}\right)$$
-$$ \frac{1}{\pi}\int_0^{\infty} \ln\left(\frac{x}{\pi}\right)\mathrm{sech}\left(x\right)\mathrm{d}x = \log\left(\frac{\sqrt{2}\Gamma\left(\frac{3}{4}\right)}{\Gamma\left(\frac{1}{4}\right)}\right)$$
-$$\int_0^{\infty} \ln\left(\frac{x}{\pi}\right)\mathrm{sech}\left(x\right)\mathrm{d}x = \pi\log\left(\frac{\sqrt{2}\Gamma\left(\frac{3}{4}\right)}{\Gamma\left(\frac{1}{4}\right)}\right)$$
-$$\int_0^{\infty} \ln{x}\,\mathrm{sech}\left(x\right)\mathrm{d}x - \ln{\pi}\int_0^{\infty} \mathrm{sech}\left(x\right)\mathrm{d}x = \pi\log\left(\frac{\sqrt{2}\Gamma\left(\frac{3}{4}\right)}{\Gamma\left(\frac{1}{4}\right)}\right)$$
-$$\int_0^{\infty} \ln{x}\,\mathrm{sech}\left(x\right)\mathrm{d}x - \frac{\pi}{2}\ln{\pi} = \pi\log\left(\frac{\sqrt{2}\Gamma\left(\frac{3}{4}\right)}{\Gamma\left(\frac{1}{4}\right)}\right)$$
-$$\int_0^{\infty} \ln{x}\,\mathrm{sech}\left(x\right)\mathrm{d}x= \pi\log\left(\frac{\sqrt{2}\pi^{\frac{1}{2}}\Gamma\left(\frac{3}{4}\right)}{\Gamma\left(\frac{1}{4}\right)}\right) = \pi\log\left(\frac{2\pi^{\frac{3}{2}}}{\Gamma\left(\frac{1}{4}\right)^2}\right).$$<|endoftext|>
-TITLE: Constructivism versus the unicorn
-QUESTION [6 upvotes]: Consider the following statement: "All unicorns have wings". As far as I know, Aristotle would consider this statement false, because, as there are no unicorns, they cannot have any properties (like "having wings").
-But in the modern reading, said statement is considered true, precisely because there are no unicorns: for the statement to be false, there would have to exist (at least one) unicorn with no wings.
-My question is this: the modern interpretation seems to be based (somewhat) on the principle of the excluded middle, that is, the statement must be true because it cannot be false (i.e. assuming its falsity leads to a "contradiction"). But from a constructive point of view, would one still consider the statement as being true? (And in case I have gone astray, does it even make sense to talk about constructivism for that sort of statement?)
-Thank you in advance.
-
-REPLY [5 votes]: The issue of Aristotle's beliefs is somewhat complex, and I will merely link to an article on the Stanford Encyclopedia: The Traditional Square of Opposition.
-The answer to the main question depends on the sort of constructive logic we employ. In what is often called "intuitionistic" constructive logic, we have a rule called the "principle of explosion" (also called ex falso quodlibet), which says that if we can prove a contradiction then, from this, we can conclude any formula that we wish.
This rule is also present in classical logic, but it is not as strong constructively as it is classically, because we do not have excluded middle in constructive systems. Nevertheless, some weaker varieties of constructive logic do not include the principle of explosion.
-Suppose we do accept that principle. Here is how to show "All unicorns have wings" in the usual informal constructive style, assuming we know that there are no unicorns. Knowing that there are no unicorns, constructively, means that we have a method to derive a contradiction from any proof of "there is a unicorn".
-We next need to think about the constructive meaning of "All unicorns have wings". To prove "All unicorns have wings" constructively means to produce a procedure which, given an object $x$ and a proof that $x$ is a unicorn, produces a proof that $x$ has wings. This is related to the BHK interpretation of constructive logic.
-One such procedure is as follows:
-
-1. First, suppose we are given a proof that some object $x$ is a unicorn. In particular, we have a proof of "there is a unicorn".
-2. But we know that there are no unicorns. In other words, we have a way to derive a contradiction from "there is a unicorn".
-3. Therefore, we can derive a contradiction from (1) and (2).
-4. Finally, we use the principle of explosion, which says in particular that from a contradiction we can conclude "$x$ has wings".
-
-Overall, this procedure gives a proof of "$x$ has wings" from any proof of "$x$ is a unicorn". This means that we can prove "Every unicorn has wings" constructively, if we assume the principle of explosion.
-
-Another approach to this is to ignore the quantifier and work in propositional constructive logic. Here we assume $\lnot U$ (the object in question is not a unicorn) and want to prove $W$ (the object has wings). So we want to look at the scheme
-$$
-(\lnot U) \to (U \to W).
-$$
-This scheme is provable in intuitionistic propositional logic (which includes the principle of explosion), but it is not provable in minimal logic, which is a weaker form of constructive logic without the principle of explosion.
-The proof of the identity $(\lnot U) \to (U \to W)$ in intuitionistic logic is actually very simple. $\lnot U$ means $U \to \bot$, where $\bot$ is a symbol for a contradictory statement, and the principle of explosion says we may assume $\bot \to W$. We then obtain $U \to W$ from $U \to \bot$ and $\bot \to W$ by applying the inference rule of hypothetical syllogism, which is constructively acceptable.<|endoftext|>
-TITLE: Is it true that $\lvert\alpha+\beta\rvert=\lvert\alpha\rvert+\lvert\beta\rvert$ for ordinals $\alpha$ and $\beta$?
-QUESTION [5 upvotes]: Suppose $\alpha$ and $\beta$ are ordinals. I was asked to prove that $\lvert\alpha+\beta\rvert=\lvert\alpha\rvert+\lvert\beta\rvert$. However, I think I have found a counterexample to this equality, namely if $\alpha=\omega$ and $\beta=1$ then
-$$\lvert\alpha+\beta\rvert=\lvert\omega+1\rvert=\lvert\omega\rvert=\omega\neq\omega+1=\lvert\alpha\rvert+\lvert\beta\rvert.$$
-Am I doing something wrong or is there a mistake in the exercise?
-
-REPLY [4 votes]: As I've mentioned in the comments, you are confusing ordinal and cardinal numbers. This is a very common mistake, and it's important to clarify it.
-Let me add the subscript $c$ when we talk of cardinals and the subscript $o$ when we talk of ordinals. Then we have $\alpha=\omega_o,\beta=1_o$.
Then your argument is
-$$|\alpha+\beta|=|\omega_o+1_o|=|\omega_o|=\omega_c\neq\omega_c+1_c=|\omega_o|+|1_o|=|\alpha|+|\beta|$$
-Right now it's easy to see where the mistake is: in the world of cardinal numbers, the equality $\omega_c=\omega_c+1_c$ does hold. This is why your counterexample is invalid.
-Edit: As Noah points out, the alternative way to look at this is to note that the two ways in which you use $+$ are different: one is the ordinal sum, the other is the cardinal sum.
-As Asaf mentions, the safest way to go is to represent cardinals using aleph notation (which has become pretty much a standard in set theory), and leave $\omega$ to denote only ordinals.<|endoftext|>
-TITLE: How to generate random points on a sphere?
-QUESTION [61 upvotes]: How do I generate $1000$ points $\left(x, y, z\right)$ and make sure they land on a sphere whose center is $\left(0, 0, 0\right)$ and whose diameter is $20$?
-Simply, how do I manipulate a point's coordinates so that the point lies on the sphere's "surface"?
-
-REPLY [3 votes]: Wolfram MathWorld provides a methodology for randomly picking a point on a sphere:
-
-To obtain points such that any small area on the sphere is expected to contain the same number of points, choose $u$ and $v$ to be random variates on $[0,1]$. Then: $$\begin{array}{ll}\theta=2\pi u\\
-\varphi= \arccos(2v - 1)\end{array}$$ gives the spherical coordinates for a set of points which are uniformly distributed over $\mathbb{S}^2$.
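-In code, that recipe reads as follows (a sketch in Python; the question's sphere has radius $10$, since the diameter is $20$):
-
-    import math
-    import random
-
-    def random_point_on_sphere(radius=10.0):
-        u, v = random.random(), random.random()
-        theta = 2 * math.pi * u      # longitude, uniform on [0, 2*pi)
-        phi = math.acos(2 * v - 1)   # colatitude; the arccos weighting keeps area uniform
-        return (radius * math.sin(phi) * math.cos(theta),
-                radius * math.sin(phi) * math.sin(theta),
-                radius * math.cos(phi))
-
-    points = [random_point_on_sphere() for _ in range(1000)]
-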
<|endoftext|>
-TITLE: Do there exist integers $a,b,c$ such that $a^5+b^5+c^5=2016abc$ and $a+b+c=5776$?
-QUESTION [6 upvotes]: This question should be solvable without a calculator. I tried playing around with odd/even properties, but didn't get very far.
-I also tried looking at the average of $a,b,c$ (about $1900$), but this involved a lot of manual computations, and this is supposed to be solvable without a calculator.
-
-REPLY [13 votes]: No, there do not. Modulo $3$, you have $a+b+c\equiv 1$ and $a^5+b^5+c^5\equiv 0$. However, $a^5\equiv a$. Therefore, the latter equation reduces to $a+b+c\equiv 0$. This is a contradiction.<|endoftext|>
-TITLE: Complex subfields of finite index
-QUESTION [5 upvotes]: It is known that the field $\mathbb{R}$ of real numbers is a subfield of $\mathbb{C}$ of index 2, that is, $[\mathbb{C}:\mathbb{R}]=2$. Given a fixed integer $n>2$, does there exist a subfield of $\mathbb{C}$ of index $n$?
-
-REPLY [5 votes]: There is no such subfield. It is a theorem of Artin-Schreier that if $K$ is algebraically closed and $L$ is a proper subfield of $K$ such that $[K:L]<\infty$, then $K$ is obtained from $L$ by adding a square root of $-1$, so $[K:L]=2$. See this MO answer.<|endoftext|>
-TITLE: Can one find a line that is tangent to a cubic polynomial more than once?
-QUESTION [5 upvotes]: I know that no line can be tangent to the graph of $y=Ax^3+Bx^2+Cx+D$ at more than one point.
-Question: how can one show this, or even prove it?
-
-REPLY [2 votes]: Suppose the cubic $f(x)=Ax^3+Bx^2+Cx+D$ (with $A\neq 0$) has a tangent line with slope $m$, and for the sake of contradiction assume the tangent line touches at two distinct points $x=a$ and $x=b$. Tangency gives $f'(a)=f'(b)=m$, and the secant through the two points of tangency lies on the tangent line itself, so its slope is also $m$.
-By the mean value theorem, there must then be a $c$ such that $a < c < b$ and $f'(c) = m$. This means that $f'(x) = m$ has three solutions $\{a, b, c\}$, but as $f'(x) = m$ is a quadratic equation, it can only have two solutions.<|endoftext|>
-TITLE: Generating numbers by repeated doubling and digit reversal
-QUESTION [13 upvotes]: Let $S$ be the smallest set of positive integers satisfying the following conditions:
-
-$1 \in S$,
-If $n \in S$ then $2n \in S$,
-If $n \in S$ then the digit reversal of $n$ is also in $S$.
-We assume that any leading zeros are dropped after digit reversal. For example, the digit reversal of $12300$ is $321.$
-Is it true that $S$ contains all positive integers, except those divisible by $3$ or $11$?
-EDIT: I have verified the conjecture up to $n = 10\,000$.
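-EDIT 2: A check along these lines can be written as follows (a sketch in Python; the cap on intermediate values is a heuristic, so a negative answer would only be conclusive after enlarging the cap):
-
-    def conjecture_holds_up_to(N=10000, cap=10**6):
-        # closure of {1} under n -> 2n and n -> digit reversal, capped at `cap`
-        seen, stack = {1}, [1]
-        while stack:
-            m = stack.pop()
-            for nxt in (2 * m, int(str(m)[::-1])):
-                if nxt <= cap and nxt not in seen:
-                    seen.add(nxt)
-                    stack.append(nxt)
-        target = {n for n in range(1, N + 1) if n % 3 != 0 and n % 11 != 0}
-        return {n for n in seen if n <= N} == target
-
-    print(conjecture_holds_up_to())
-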
-
-REPLY [3 votes]: Let $A$ be the set of natural numbers not divisible by $3$ or $11$. Working backwards, we want to be able to derive $1$ from any $n\in A$, using a combination of the following two steps:
-
-$n\rightarrow R(n)\cdot 10^k$, if $n$ is not a multiple of $10$, where $R(n)$ is the reversal of $n$, and $k$ is any non-negative integer;
-$n\rightarrow n/2$, if $n$ is even.
-
-The second step is the inverse of the rule "if $n \in S$ then $2n \in S$" in the original problem, which only produces even numbers. The first step is the inverse of the rule "if $n\in S$ then the digit-reversal of $n$ is in $S$", with its condition that trailing zeroes in $n$ are dropped after the reversal... this rule only produces non-multiples of $10$, and when it is inverted, we can recover any number of trailing zeroes. Note that these steps preserve membership in $A$.
-By induction, we can get from any number $n\in A$ to $1$ iff we can get from any number $n \in A \setminus \{1\}$ to some smaller number $m < n$ in $A$.<|endoftext|>
-TITLE: How many rectangles are there on an $8 \times 8$ checkerboard?
-QUESTION [8 upvotes]: How many rectangles are there on an $8 \times 8$ checkerboard?
-\begin{array}{|r|r|r|r|r|r|r|r|}
- \hline
- & & & & & & & \\ \hline
- & & & & & & & \\ \hline
- & & & & & & & \\ \hline
- & & & & & & & \\ \hline
- & & & & & & & \\ \hline
- & & & & & & & \\ \hline
- & & & & & & & \\ \hline
- & & & & & & & \\ \hline
-\end{array}
-
-Attempt
-I just counted them up via casework:
-$1 \times 1: 64$
-$1 \times 2: 56$
-$\vdots$
-$1 \times 8: 8$
-Then,
-$2 \times 2: 49$
-$2 \times 3: 42$
-$\vdots$
-The pattern continues as it looks like it should.
-Thus, we can sum up all of these solutions as $8(1+\cdots+8)+7(1+\cdots+7)+\cdots+2(1+2)+1(1) = 750$, but the correct answer is $1296$. Where did I go wrong?
-
-REPLY [16 votes]: First, the easy way to count them is to notice that each rectangle is completely determined by its top and bottom edges and its left and right edges. Pick any two of the nine horizontal lines and any two of the nine vertical lines, and you've picked out a rectangle. Conversely, each rectangle determines two horizontal and two vertical lines, those on which its edges lie. Thus, there must be
-$$\binom92^2=36^2=1296$$
-rectangles.
-Your calculation is off because you forgot that a rectangle can be wider than it is tall, so you missed half of the non-square rectangles.<|endoftext|>
-TITLE: Any left ideal of $M_n(\mathbb{F})$ is principal
-QUESTION [11 upvotes]: I'm working on the following problem:
-
-Let $A$ be the ring of $n \times n$ matrices over a field $\mathbb{F}$.
-(a) Show that for any subspace $V$ of $\mathbb{F}^n$, the set $I_V$ of matrices whose kernel contains $V$ is a left ideal of $A$.
-(b) Show that every left ideal of $A$ is principal.
-
-I've done part $a)$, but would like to know if you can prove $b)$ directly from $a)$. It seems to me that given the left ideal $J$, it should be the case that if $V$ is the intersection of the kernels of matrices in $J$, then we should have $J = I_V$. I can show that $I_V$ is principal, and certainly $J$ is contained in $I_V$, but I can't show the other direction.
-I think you can prove $b)$ by considering the subspace $W$ of $\mathbb{F}^n$ consisting of the rows of elements of $J$, which is of dimension $k \leq n$ say, and then showing that $J$ is generated by any matrix whose first $k$ rows are some basis for $W$ and whose final rows are all $0$. But it seems that we should be able to do the problem just using $a)$, and I'd like to know how to do it!
-
-REPLY [2 votes]: Note that $I_V$ is a principal left ideal of $M_n(\mathbb{F})$ whose generator can be taken to be any linear map $T \colon \mathbb{F}^n \rightarrow \mathbb{F}^n$ with $\ker(T) = V$. To see this, choose a basis $(w_1, \ldots, w_n)$ of $\mathbb{F}^n$ such that $(w_1, \ldots, w_k)$ is a basis of $\ker(T) = V$. Define $f_i = T(w_i)$ for $k + 1 \leq i \leq n$ and complete them to a basis $(f_1, \ldots, f_n)$ of $\mathbb{F}^n$.
-Let $S \colon \mathbb{F}^n \rightarrow \mathbb{F}^n$ be a linear map with $V \subseteq \ker(S)$. We need to find a linear map $P \colon \mathbb{F}^n \rightarrow \mathbb{F}^n$ such that $PT = S$. Define $P$ by requiring that
-$$ P(f_i) = \begin{cases} 0 & 1 \leq i \leq k, \\
-S(w_i) & k + 1 \leq i \leq n. \end{cases}$$
-Then
-$$ P(T(w_i)) = \begin{cases} P(0) = 0 = S(w_i) & 1 \leq i \leq k, \\
-P(f_i) = S(w_i) & k + 1 \leq i \leq n \end{cases} $$
-which shows that $PT = S$.
-Given any left-sided ideal $J$, let $V = \cap_{T \in J} \ker(T)$. Note that there must exist a linear map $T \in J$ with $\ker(T) = V$. Since $J$ is a left-sided ideal, we have
-$$ (M_n(\mathbb{F}))T = I_V \subseteq J \subseteq I_V $$
-which shows that $J = I_V$.<|endoftext|>
-TITLE: Scalar Multiplication of Limits $\epsilon$ - $\delta$ Proof
-QUESTION [6 upvotes]: I am having trouble understanding the $\epsilon$ - $\delta$ proof of the scalar multiplication property of limits, which basically states:
-$$\lim_{x\to a}[f(x) \cdot c]=c\cdot L$$
-The way I understand it (which doesn't feel like a good understanding), our choice of $\delta$ is $\frac{\epsilon}{|c|}$, and then do we substitute this value of $\delta$ into the antecedent or the consequent?
-Second, we are basically trying to satisfy the definition, in this case
-$$|x-a|<\delta \Longrightarrow |c \cdot f(x)-c \cdot L|< \epsilon$$
-right? So should I start from the definition below and try to work toward the definition above (so substitute in the consequent)?
-$$|x-a|<\delta \Longrightarrow | f(x)- L|< \frac{\epsilon}{|c|}$$
-Also, why does this proof use $\delta=\delta_1$??
-
-The proof in the picture is from the following link: http://tutorial.math.lamar.edu/Classes/CalcI/LimitProofs.aspx
-
-REPLY [3 votes]: You are given that $\lim_{x\rightarrow a}f(x)=L$. That means given any "positive value" there exists "another positive value" (depending on the "positive value") such that
-if $0<|x-a|<$"another positive value" then $|f(x)-L|<$"positive value". This is the fact we have.
-Now, you need to show that given any $\epsilon>0$ there exists $\delta>0$ such that if $0<|x-a|<\delta$ then $|cf(x)-cL|<\epsilon$.
-So, first take any $\epsilon>0$, and suppose $c\neq 0$ (if $c=0$ the claim is trivial, since both sides are $0$). Then $\epsilon/|c|$ is also positive.
Then by the fact we have, there exists a $\delta>0$ such that if $0<|x-a|<\delta$ then $|f(x)-L|<\epsilon/|c|$. That means $|cf(x)-cL|<\epsilon$. So, we are done; the $\delta=\delta_1$ in the proof you linked is just this step, reusing the $\delta_1$ that works for $f$ with the tolerance $\epsilon/|c|$.<|endoftext|>
-TITLE: General definition of angle/rotation
-QUESTION [5 upvotes]: It is well known that in the Euclidean plane a rotation about the origin can be computed with the formula
-$$R_{\theta}(x,y) = \big(\cos(\theta)x-\sin(\theta)y, \sin(\theta)x+\cos(\theta)y\big)$$
-It is somewhat well known that in the hyperbolic (Minkowski) plane a hyperbolic rotation (Lorentz boost) about the origin can be computed with the formula
-$$HR_{\phi}(t,x) = \big(\cosh(\phi)t - \sinh(\phi)x, -\sinh(\phi)t+\cosh(\phi)x\big)$$
-I'm well aware that given an inner product we can define the angle between two vectors $u,v$ by $\cos(\theta) = \dfrac{\langle u, v\rangle}{\|u\|\|v\|}$. But the hyperbolic plane (thought of as a vector space) isn't an inner product space. As far as I know, the difference between the Euclidean plane and the hyperbolic plane is that they are equipped with different quadratic forms. In the Euclidean plane that quadratic form can be used to define an inner product, but not in the hyperbolic plane, as it's not positive definite.
-This leads me to think that there's a generalization of the formula for angles (or rotations, as one seems to be expressible in terms of the other) for quadratic spaces. Does anyone know of a way of extending the concept of angle/rotation to general quadratic spaces?
-
-REPLY [3 votes]: While the indefinite bilinear form for the hyperbolic plane technically doesn't count as an inner product, it is still a symmetric bilinear form, and that is really the most appropriate generalization of the two.
-Things like orthogonality and rotation can be and are defined in the same way that they are in real inner product spaces. Orthogonal transformations are those preserving the bilinear form, and rotations are the subset of those with determinant $1$, reflections are those with determinant $-1$, etc.
-Of course, the bilinear form corresponds to the quadratic form you mentioned. Perhaps you didn't trust that the analogy could be stretched to indefinite forms. It is still definitely useful. The quadratic form no longer suggests length (no pun intended), but I know that while interpreting real spaces with indefinite metrics for relativity, they sometimes call the quantity an 'interval' in spacetime, rather than the length. Of course the form can sort out the timelike, lightlike, and spacelike vectors, too.
-The theory of bilinear forms is very rich and approachable. I recommend Kaplansky's book Linear algebra and geometry for this, but enjoyment of the exposition may vary based on your temperament. Reading this, I found out why spaces with indefinite forms are as useful, if not more useful, compared to definite spaces.<|endoftext|>
-TITLE: Finding an angle in a figure involving tangent circles
-QUESTION [7 upvotes]: The circle $A$ touches the circle $B$ internally at $P$. The centre $O$ of $B$ is outside $A$. Let $XY$ be a diameter of $B$ which is also tangent to $A$. Assume $PY > PX$. Let $PY$ intersect $A$ at $Z$. If $YZ = 2PZ$, what is the magnitude of $\angle PYX$ in degrees?
-What I have tried:
-
-Obviously, the red angles are equal, and the orange angles are equal. This gives $XY \parallel TZ$.
-$YZ=2PZ$. From this $XY=3TZ$, and then $OY=3O'Z$. Let $O'Z=a=O'S$, so $SZ=\sqrt{2} a$, and also $O'O=2a$.
-Then $SO=\sqrt{3} a$. Now we can use trigonometry to find $\angle PYX$ in triangle $ZSY$.
-
-
-Please verify whether my figure is correct. Your solution to this question is welcomed, especially if it is shorter.
-
-REPLY [5 votes]: Triangle O'SO fits the description of a 30-60-90 special right triangle. Therefore, $\angle O'OS = 30^\circ$.
-Then, $\angle PYX = 15^\circ$ [angle at the centre = 2 times the angle at the circumference]
-
-REPLY [4 votes]: Hint:
-Notice that $\angle PYX=\angle PYO= \angle OPY$ and $\angle POX=\angle PYO+\angle OPY$, so $$\angle PYX=\frac{1}{2}\angle POX$$
-On the other hand,
-$$\tan \angle POX=\frac{|O'S|}{|SO|}=\frac{a}{a\sqrt{3}}=\frac{1}{\sqrt{3}}\qquad\implies\qquad \angle POX=\tan^{-1}\left(\frac{1}{\sqrt{3}}\right)=30^{\circ}$$<|endoftext|>
-TITLE: How to recognize a finitely generated abelian group as a product of cyclic groups.
-QUESTION [19 upvotes]: Let $G$ be the quotient group $G=\mathbb{Z}^5/N$, where $N$ is generated by $(6,0,-3,0,3)$ and $(0,0,8,4,2)$. Recognize $G$ as a product of cyclic groups.
-
-Honestly, I do not know how to solve these types of problems. But I know that this is somehow an application of the Fundamental Theorem of finitely generated abelian groups. That theorem asserts the existence of such a product as $\mathbb{Z}^r\times \mathbb{Z}_{n_1}\times ... \times \mathbb{Z}_{n_s}$, but does not state a way to find $r,n_1,...,n_s$. I know how to use this theorem for a finite abelian group. But I could not find a way to solve these types of problems even in a book. Could somebody explain it to me?
-
-REPLY [10 votes]: kaiten has already provided a good answer to your question, so I just want to make some remarks about the general theory and show why the computation of the Smith normal form gives the desired answer.
-Given a free module $M$ with rank $n < \infty$ over a PID $R$, every submodule $N \leq M$ is also free of finite rank. Moreover, there is a basis $y_1, \ldots, y_n$ of $M$ and scalars $a_1, \ldots, a_m \in R$ such that $a_1 \mid \cdots \mid a_m$ and $a_1 y_1, \ldots, a_m y_m$ is a basis of $N$. This is what Keith Conrad calls an aligned basis. This blurb of his has some great pictures illustrating aligned vs. unaligned bases which I've copied below.
-
-
-Once we've found an aligned basis, writing $M/N$ as a direct sum of cyclic $R$-modules is easy:
-\begin{align}
-\frac{M}{N} &= \frac{Ry_1 \oplus \cdots \oplus Ry_m \oplus \cdots \oplus Ry_n}{Ra_1 y_1 \oplus \cdots \oplus Ra_m y_m} \cong \frac{Ry_1}{Ra_1 y_1} \oplus \cdots \oplus \frac{Ry_m}{Ra_m y_m} \oplus R y_{m+1} \oplus \cdots \oplus R y_n\\
-&\cong \frac{R}{a_1 R} \oplus \cdots \oplus \frac{R}{a_m R} \oplus R^{n-m} \tag{1}
-\end{align}
-Okay, so how do we find an aligned basis and compute the scalars $a_1, \ldots, a_m$? As I mentioned in my comment, this can be achieved by computing the Smith normal form of the matrix of the homomorphism
-\begin{align*}
-\varphi: \mathbb{Z}^2 &\to \mathbb{Z}^5 = M\\
-\begin{pmatrix} 1\\ 0\end{pmatrix}, \begin{pmatrix} 0\\ 1\end{pmatrix} &\mapsto \begin{pmatrix}6\\0\\-3\\0\\3\end{pmatrix}, \begin{pmatrix}0\\0\\8\\4\\2\end{pmatrix}
-\end{align*}
-which has matrix
-$$
-A=\begin{pmatrix}6&0\\0&0\\-3&8\\0&4\\3&2\end{pmatrix}
-$$
-with respect to the standard bases for $\mathbb{Z}^2$ and $\mathbb{Z}^5$. The Smith normal form might seem strange, but really it's just a version of performing a change of basis like in linear algebra. As kaiten showed, by performing row and column operations, we can turn $A$ into a diagonal matrix $D$. Recall that a row (resp. column) operation can be achieved by multiplying $A$ on the left (resp. right) by an elementary matrix $E$. 
(Simply apply the same row or column operation to the identity matrix to determine $E$.) Multiplying these elementary matrices together yields invertible matrices $P$ and $Q$ such that $PAQ = D$ is a diagonal matrix. In your example, we have
-\begin{align*}
-PAQ =
-\left(\begin{array}{rrrrr}
-0 & 0 & 0 & 0 & 1 \\
--1 & 0 & 1 & -3 & -1 \\
--3 & 0 & -4 & 7 & 2 \\
--1 & 0 & -2 & 4 & 0 \\
-0 & 1 & 0 & 0 & 0
-\end{array}\right)
-\begin{pmatrix}6&0\\0&0\\-3&8\\0&4\\3&2\end{pmatrix}
-\left(\begin{array}{rr}
--1 & -2 \\
-2 & 3
-\end{array}\right)
-=
-\left(\begin{array}{rr}
-1 & 0 \\
-0 & 6 \\
-0 & 0 \\
-0 & 0 \\
-0 & 0
-\end{array}\right) = D \, .
-\end{align*}
-What do these row and column operations mean? They correspond to new choices of basis for $\mathbb{Z}^2$ and $\mathbb{Z}^5$ such that $\varphi$ can be written particularly simply. More explicitly, since $P$ and $Q$ are invertible, they are change of basis matrices for some bases we seek to determine. Writing $Q = {_{\mathcal{E}}[\text{id}]_\mathcal{B}}$ (where $\mathcal{E} = \{e_1, e_2\}$ is the standard basis for $\mathbb{Z}^2$), by the definition of the matrix of a linear map, we see that $\mathcal{B} = \{x_1 = -e_1 + 2e_2, x_2 = -2e_1 + 3e_2\}$. Similarly, by computing
-$$
-P^{-1} =
-\left(\begin{array}{rrrrr}
--6 & -2 & 2 & -5 & 0 \\
-0 & 0 & 0 & 0 & 1 \\
-19 & 5 & -7 & 16 & 0 \\
-8 & 2 & -3 & 7 & 0 \\
-1 & 0 & 0 & 0 & 0
-\end{array}\right)
-$$
-we see that $P = {_\mathcal{C}[\text{id}]_\mathcal{F}}$ (where $\mathcal{F} = \{f_1, \ldots, f_5\}$ is the standard basis for $\mathbb{Z}^5$) for the basis
-\begin{align*}
-\mathcal{C} = \{y_1 &= -6f_1 + 19 f_3 + 8 f_4 + f_5,\\
-y_2 &= -2 f_1 + 5f_3 + 2f_4,\\
-y_3 &= 2 f_1 -7 f_3 -3 f_4,\\
-y_4 &= -5 f_1 + 16 f_3 + 7 f_4,\\
-y_5 &= f_2 \}\, .
-\end{align*}
-You can check that $\varphi(x_1) = y_1$ and $\varphi(x_2) = 6y_2$, which verifies that $N = \text{img}(\varphi) = \mathbb{Z}y_1 \oplus \mathbb{Z} 6y_2$. Then $(1)$ gives $M/N \cong \mathbb{Z}^3 \oplus \mathbb{Z}/6\mathbb{Z}$, which agrees with the answer given by kaiten.<|endoftext|>
-TITLE: Prove that $\cosh^{-1}(1+x)=\sqrt{2x}(1-\frac{1}{12}x+\frac{3}{160}x^2-\frac{5}{896}x^3+....)$
-QUESTION [9 upvotes]: How can we prove the series expansion of
 $$\cosh^{-1}(1+x)=\sqrt{2x}\left(1-\frac{1}{12}x+\frac{3}{160}x^2-\frac{5}{896}x^3+...\right)$$
-
-
-I know the formula for $\cosh^{-1}(x)=\ln(x+\sqrt{x^2-1})$ so, $$\cosh^{-1}(1+x)=\ln(1+x+\sqrt{x^2+2x}).$$ I tried to apply the Maclaurin series but I could not find $f(0),f'(0),f''(0),f'''(0)$.
-Is there any other method available to prove this series expansion, like a Laurent series, etc.?
-Please help me. Thanks.
-
-REPLY [5 votes]: The trick is to recognize the $1/\sqrt x$ singularity in the derivative of the function of interest and transform the series into a series in $\sqrt x$.
-To proceed, we substitute $z=\sqrt{x}$. 
Then, we have
-$$\begin{align}
-\cosh^{-1}(1+x)&=\cosh^{-1}(1+z^2)\\\\
-&=\log\left((z^2+1)+z\sqrt{z^2+2}\right)\tag 1
-\end{align}$$
-Now, expanding $f(z)=\log\left((z^2+1)+z\sqrt{z^2+2}\right)$ in a series around $z=0$ reveals
-$$\begin{align}
-f'(z)&=\frac{1}{(z^2+1)+z\sqrt{z^2+2}}\left(2z+\sqrt{z^2+2}+\frac{z^2}{\sqrt{z^2+2}}\right)\\\\
-&=\frac{2}{\sqrt{z^2+2}}
-\end{align}$$
-Then, continuing to differentiate, we find that
-$$\begin{align}
-f^{(2)}(z)&=-2z(z^2+2)^{-3/2}\\\\
-f^{(3)}(z)&=4(z^2-1)(z^2+2)^{-5/2}\\\\
-f^{(4)}(z)&=-12z(z^2-3)(z^2+2)^{-7/2}\\\\
-f^{(5)}(z)&=24(2z^4-12z^2+3)(z^2+2)^{-9/2}
-\end{align}$$
-Evaluating these at $z=0$ reveals
-$$\begin{align}
-f(z)&=f(0)+f'(0)z+\frac12f^{(2)}(0)z^2+\frac16f^{(3)}(0)z^3+\frac1{24}f^{(4)}(0)z^4+\frac1{120}f^{(5)}(0)z^5+O(z^7)\\\\
-&=\sqrt 2\,z-\frac{\sqrt{2}}{12}z^3+\frac{3\sqrt 2}{160}z^5+O(z^7)\\\\
-&=\sqrt{2x}\left(1-\frac1{12}x+\frac{3}{160}x^2+O(x^3)\right)
-\end{align}$$<|endoftext|>
-TITLE: Calculate the sum of first $n$ natural numbers taken $k$ at a time
-QUESTION [12 upvotes]: So the sum of the first $n$ natural numbers taken $1$ at a time is $n\cdot(n+1)/2$,
-but what about $2,3,\dots,k$ at a time?
-Is there a general formula?
-For example, taking 1 at a time
-$$\sum_{i = 1}^{n} i = \frac{n(n+1)}{2}$$
-taking 2 at a time
-$$\sum_{i = 1}^{n}\sum_{j = 1}^{n - i} i\cdot(i+j) = \frac{1}{24} n(n+1)(n-1)(3n+2)$$
-taking 3 at a time
-$$\sum_{i = 1}^{n}\sum_{j = 1}^{n - i}\sum_{h = 1}^{n - j - i} i\cdot(i+j)\cdot(i + j +h) = \frac{1}{48} n^2(n+1)^2(n-1)(n-2)$$
-taking $k$ at a time?
-
-REPLY [11 votes]: Overview: This answer consists of three parts
-
-First we develop summation formulae for slightly different $k$-fold sums for $k=1,2,3$. We take indices starting from $0$ instead of $1$, which makes calculation somewhat more convenient.
-We obtain for general $k\geq 1$ a nice expression of the $k$-fold sum
 \begin{align*}
-\sum_{i_1=0}^{n}\sum_{i_2=0}^{n-i_1}&\ldots\sum_{i_k=0}^{n-i_1-i_2-\ldots-i_{k-1}}i_1(i_1+i_2)\cdots(i_1+i_2+\ldots+i_k)\\
-\end{align*}
 as a sum of binomial coefficients.
-We look at the difference of the $k$-fold sums when starting from $0$ and when starting from $1$.
-
-
-Hint: If you are not familiar with generating functions you might want to look at the backstage info at the end of this answer.
-
-Part 1: Summation formula for $k=1,2,3$ and start index $0$
-The following is valid
-\begin{align*}
-\sum_{i=0}^ni&=\binom{n+1}{2}=\frac{1}{2}n(n+1)\tag{1}\\
-\sum_{i=0}^n\sum_{j=0}^{n-i}i(i+j)&=\binom{n+3}{4}+2\binom{n+2}{4}\\
-&=\frac{1}{24}n(n+1)(n+2)(3n+1)\tag{2}\\
-\sum_{i=0}^n\sum_{j=0}^{n-i}\sum_{k=0}^{n-i-j}i(i+j)(i+j+k)&=\binom{n+5}{6}+8\binom{n+4}{6}+6\binom{n+3}{6}\\
-&=\frac{1}{48}n^2(n+1)^2(n+2)(n+3)\tag{3}\\
-\end{align*}
-
-Note that the representation in binomial coefficients looks promising, since a pattern for $k>3$ can easily be derived. We will see in Part 2 that the multiplicities of the binomial coefficients also follow a well-known pattern.
-In order to show these formulae we need a repertoire of $\left(x\frac{d}{dx}\right)^k\frac{1}{1-x}$ for $k=1,\ldots,3$. 
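-(These closed forms are easy to generate, or double-check, with a computer algebra system. A minimal sympy sketch of our own, not part of the derivation; it verifies exactly the three identities stated next:)
-
-    import sympy as sp
-
-    # Verify (x*d/dx)^k 1/(1-x) for k = 1, 2, 3 against the closed
-    # forms used below.  All names here are ours.
-    x = sp.symbols('x')
-    claimed = [x/(1 - x)**2,                   # k = 1
-               x*(1 + x)/(1 - x)**3,           # k = 2
-               x*(1 + 4*x + x**2)/(1 - x)**4]  # k = 3
-    g = 1/(1 - x)
-    for k, rhs in enumerate(claimed, start=1):
-        g = x*sp.diff(g, x)                    # apply the operator once more
-        assert sp.cancel(g - rhs) == 0, k
-    print("operator repertoire verified for k = 1, 2, 3")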
-
-We obtain
-\begin{align*}
-\left(x\frac{d}{dx}\right)\frac{1}{1-x}&=\frac{x}{(1-x)^2}
-=\sum_{n=0}^{\infty}nx^n\\
-\left(x\frac{d}{dx}\right)^2\frac{1}{1-x}&=\frac{x(1+x)}{(1-x)^3}
-=\sum_{n=0}^{\infty}n^2x^n\\
-\left(x\frac{d}{dx}\right)^3\frac{1}{1-x}&=\frac{x(1+4x+x^2)}{(1-x)^4}
-=\sum_{n=0}^{\infty}n^3x^n\\
-\end{align*}
-It's also convenient to use the coefficient operator $[x^n]$ to denote the coefficient of $x^n$ in a generating series.
-
-We obtain for $k=1$
- \begin{align*}
-\sum_{i=0}^{n}i&=\sum_{i=0}^{n}i\cdot1\\
-&=\sum_{i=0}^n[x^i]\left(x\frac{d}{dx}\right)\frac{1}{1-x}[x^{n-i}]\frac{1}{1-x}\tag{4}\\
-&=\sum_{i=0}^n[x^i]\frac{x}{(1-x)^2}[x^{n-i}]\frac{1}{1-x}\\
-&=[x^n]\frac{x}{(1-x)^3}\tag{5}\\
-&=[x^n]x\sum_{k=0}^{\infty}\binom{-3}{k}(-x)^k\\
-&=[x^{n-1}]\sum_{k=0}^{\infty}\binom{k+2}{2}x^k\tag{6}\\
-&=\binom{n+1}{2}\\
-&=\frac{1}{2}n(n+1)
-\end{align*}
- and (1) follows.
-
-Comment:
-
-In (4) we use $$i=[x^i]\sum_{k=0}^{\infty}kx^k=[x^i]\left(x\frac{d}{dx}\right)\frac{1}{1-x}$$ and $$1=[x^{n-i}]\sum_{k=0}^{\infty}x^k=[x^{n-i}]\frac{1}{1-x}$$
-In (5) we use $[x^n]A(x)B(x)=\sum_{i=0}^{n}\left([x^i]A(x)\right)\left([x^{n-i}]B(x)\right)$
-In (6) we use the formula $\binom{-n}{k}=\binom{n+k-1}{k}(-1)^k=\binom{n+k-1}{n-1}(-1)^k$
-
-
-We obtain for $k=2$
- \begin{align*}
-\sum_{i=0}^{n}&\sum_{j=0}^{n-i}i(i+j)\\
-&=\sum_{i=0}^ni^2\sum_{j=0}^{n-i}1+\sum_{i=0}^ni\sum_{j=0}^{n-i}j\\
-&= \sum_{i=0}^n[x^i]\left(x\frac{d}{dx}\right)^2\frac{1}{1-x}[x^{n-i}]\frac{1}{(1-x)^2}\\
-&\qquad+\sum_{i=0}^n[x^i]\left(x\frac{d}{dx}\right)\frac{1}{1-x}[x^{n-i}]\frac{1}{1-x}\left(x\frac{d}{dx}\right)\frac{1}{1-x}\\
-&= \sum_{i=0}^n[x^i]\frac{x(1+x)}{(1-x)^3}[x^{n-i}]\frac{1}{(1-x)^2}+\sum_{i=0}^n[x^i]\frac{x}{(1-x)^2}[x^{n-i}]\frac{x}{(1-x)^3}\\
-&=[x^n]\frac{x+2x^2}{(1-x)^5}\\
-&=\left([x^{n-1}]+2[x^{n-2}]\right)\sum_{k=0}^{\infty}\binom{-5}{k}(-x)^k\\
-&=\left([x^{n-1}]+2[x^{n-2}]\right)\sum_{k=0}^{\infty}\binom{k+4}{4}x^k\\
-&=\binom{n+3}{4}+2\binom{n+2}{4}\\
-&=\frac{1}{24}n(n+1)(n+2)(3n+1)
-\end{align*}
- and (2) follows.
-
-$$ $$
-
-We obtain for $k=3$
- \begin{align*}
-\sum_{i=0}^{n}&\sum_{j=0}^{n-i}\sum_{k=0}^{n-i-j}i(i+j)(i+j+k)\\
-&=\sum_{i=0}^{n}\sum_{j=0}^{n-i}\sum_{k=0}^{n-i-j}(i^3+2i^2j+i^2k+ij^2+ijk)\\
-&=\sum_{i=0}^{n}\sum_{j=0}^{n-i}\sum_{k=0}^{n-i-j}(i^3+4i^2j+ijk)\tag{7}\\
-&=\sum_{i=0}^{n}i^3\sum_{j=0}^{n-i}1\sum_{k=0}^{n-i-j}1
-+4\sum_{i=0}^{n}i^2\sum_{j=0}^{n-i}j\sum_{k=0}^{n-i-j}1\tag{8}\\
-&\qquad+\sum_{i=0}^{n}i\sum_{j=0}^{n-i}j\sum_{k=0}^{n-i-j}k\\
-\end{align*}
-
-Note that in (7) we use the symmetry
-\begin{align*}
-\sum_{i=0}^{n}\sum_{j=0}^{n-i}\sum_{k=0}^{n-i-j}i^2j
-=\sum_{i=0}^{n}\sum_{j=0}^{n-i}\sum_{k=0}^{n-i-j}i^2k
-=\sum_{i=0}^{n}\sum_{j=0}^{n-i}\sum_{k=0}^{n-i-j}ij^2
-\end{align*}
-The calculation of the three sums in (8) is straightforward and can be done similarly to $k=1,2$.
-
-We obtain
-\begin{align*}
-\sum_{i=0}^{n}i^3\sum_{j=0}^{n-i}1\sum_{k=0}^{n-i-j}1&=[x^n]\frac{x(1+4x+x^2)}{(1-x)^7}
-=\binom{n+3}{6}+4\binom{n+4}{6}+\binom{n+5}{6}\\
-\sum_{i=0}^{n}i^2\sum_{j=0}^{n-i}j\sum_{k=0}^{n-i-j}1&=[x^n]\frac{x^2(1+x)}{(1-x)^7}
-=\binom{n+4}{6}+\binom{n+3}{6}\\
-\sum_{i=0}^{n}i\sum_{j=0}^{n-i}j\sum_{k=0}^{n-i-j}k&=[x^n]\frac{x^3}{(1-x)^7}
-=\binom{n+3}{6}\\
-\end{align*}
-Combining the three sums according to (7) results in
-\begin{align*}
-\sum_{i=0}^{n}\sum_{j=0}^{n-i}\sum_{k=0}^{n-i-j}(i^3+4i^2j+ijk)
-&=6\binom{n+3}{6}+8\binom{n+4}{6}+\binom{n+5}{6}\\
-&=\frac{1}{48}n^2(n+1)^2(n+2)(n+3)
-\end{align*}
- and the claim (3) follows. 
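-(Before moving on to the general pattern, the three identities are also easy to sanity-check by brute force. A small Python sketch under the same index conventions; the function names are ours:)
-
-    from math import comb
-
-    # Brute-force the k-fold sums with indices starting at 0 and compare
-    # with the binomial-coefficient forms (1), (2), (3).
-    def s1(n):
-        return sum(i for i in range(n + 1))
-
-    def s2(n):
-        return sum(i*(i + j)
-                   for i in range(n + 1)
-                   for j in range(n - i + 1))
-
-    def s3(n):
-        return sum(i*(i + j)*(i + j + k)
-                   for i in range(n + 1)
-                   for j in range(n - i + 1)
-                   for k in range(n - i - j + 1))
-
-    for n in range(12):
-        assert s1(n) == comb(n + 1, 2)
-        assert s2(n) == comb(n + 3, 4) + 2*comb(n + 2, 4)
-        assert s3(n) == comb(n + 5, 6) + 8*comb(n + 4, 6) + 6*comb(n + 3, 6)
-    print("formulae (1)-(3) verified for n = 0..11")
-
-(The same brute force extends directly to the $k=4$ check appearing in Part 2 below.)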
-
-$$ $$
-
-Part 2: Summation formula for all $k\geq 1$ and start index $0$.
-In the following we don't give a proof for general $k$, but we provide some aspects which give strong evidence for the correctness of the claim.
-When looking at
-\begin{align*}
-\sum_{i=0}^ni&=\color{blue}{1}\binom{n+1}{2}\\
-\sum_{i=0}^n\sum_{j=0}^{n-i}i(i+j)&=\color{blue}{1}\binom{n+3}{4}+\color{blue}{2}\binom{n+2}{4}\\
-\sum_{i=0}^n\sum_{j=0}^{n-i}\sum_{k=0}^{n-i-j}i(i+j)(i+j+k)&=\color{blue}{1}\binom{n+5}{6}+\color{blue}{8}\binom{n+4}{6}+\color{blue}{6}\binom{n+3}{6}\\
-\end{align*}
- the shape of the binomial coefficients can be easily generalized. The coefficients
 \begin{align*}
-\color{blue}{1};\quad \color{blue}{1},\color{blue}{2};\quad \color{blue}{1},\color{blue}{8},\color{blue}{6}
-\end{align*}
- are part of the OEIS sequence A008517 and give the values of the Second order Eulerian Triangle $T(k,l),1\leq l\leq k$.
-
-The values $T(4,l), 1\leq l\leq 4$ are $\color{blue}{1},\color{blue}{22},\color{blue}{58},\color{blue}{24}$ and indeed, it is easy to verify that
-\begin{align*}
-\sum_{i=0}^n&\sum_{j=0}^{n-i}\sum_{k=0}^{n-i-j}\sum_{l=0}^{n-i-j-k}i(i+j)(i+j+k)(i+j+k+l)\\
-&=
-\color{blue}{1}\binom{n+7}{8}+\color{blue}{22}\binom{n+6}{8}+\color{blue}{58}\binom{n+5}{8}+\color{blue}{24}\binom{n+4}{8}
-\end{align*}
-
-We are now in a position to state the
-Claim: The following formula for the $k$-fold sum is valid
 \begin{align*}
-\sum_{i_1=0}^{n}\sum_{i_2=0}^{n-i_1}&\ldots\sum_{i_k=0}^{n-i_1-i_2-\ldots-i_{k-1}}i_1(i_1+i_2)\cdots(i_1+i_2+\ldots+i_k)\\
-&=\sum_{l=1}^{k}\color{blue}{T(k,l)}\binom{n+2k-l}{2k}\tag{9}
-\end{align*}
- with $T(k,l)$ the numbers of the second order Eulerian Triangle (A008517).
-
-$$ $$
-
-Part 3: Summation formula for all $k\geq 1$ and start index $1$.
-Finally, some aspects of the summation formula with indices starting from $1$.
-When looking at the differences of
 \begin{align*}
-\sum_{i=0}^ni&-\sum_{i=1}^ni=0\\
-\sum_{i=0}^n\sum_{j=0}^{n-i}i(i+j)&-\sum_{i=1}^n\sum_{j=1}^{n-i}i(i+j)=\frac{1}{6}n(n+1)(2n+1)\\
-\sum_{i=0}^n\sum_{j=0}^{n-i}\sum_{k=0}^{n-i-j}i(i+j)(i+j+k)&-
-\sum_{i=1}^n\sum_{j=1}^{n-i}\sum_{k=1}^{n-i-j}i(i+j)(i+j+k)\\
-&\qquad=\frac{1}{12}n^2(n+1)^2(2n+1)\\
-\end{align*}
-we can find them as A000330 ($k=2$) and, up to an index shift, as A108674 ($k=3$) in the OEIS database.
-Some further elaboration could be to look for a difference formula for general $k$ and combine it with the result of (9) to obtain a nice expression for the OP's $k$-fold sum with indices starting from $1$.
-
-Backstage info:
-You might want to skip this section if you are already familiar with generating functions.
-When looking at the $2$-fold sum
-\begin{align*}
-\sum_{i=1}^n\sum_{j=1}^{n-i}i(i+j)=\sum_{i=1}^n\left(i^2\sum_{j=1}^{n-i}1\right)+\sum_{i=1}^n\left(i\sum_{j=1}^{n-i}j\right)
-\end{align*}
-the outer sum has the shape of a Cauchy product
-\begin{align*}
-\sum_{i=0}^{n}a_ib_{n-i}
-\end{align*}
-with index $i$ starting at $1$ instead of $0$.
-Since we want to use generating functions $A(x)=\sum_{n= 0}^{\infty}a_nx^n$ to derive the summation formulae, and the product of generating functions
-\begin{align*}
-A(x)B(x)=\sum_{k= 0}^{\infty}a_kx^k\sum_{l= 0}^{\infty}b_lx^l
-= \sum_{n= 0}^{\infty}\left(\sum_{k=0}^{n}a_kb_{n-k}\right)x^n
-\end{align*}
- gives Cauchy products, we consider $k$-fold sums with index starting from $0$ instead.
-This is interesting in its own right, and later we can look at the difference to $k$-fold sums with indices starting from $1$. 
-
-We can successively apply the $\left(x\frac{d}{dx}\right)$-operator to a generating function $A(x)$ to obtain
-\begin{align*}
- \left(x\frac{d}{dx}\right)A(x)&=\sum_{n=0}^{\infty}na_nx^n\\
- \left(x\frac{d}{dx}\right)^2A(x)&=\sum_{n=0}^{\infty}n^2a_nx^n
- \end{align*}
-Multiplication of $A(x)$ with $\frac{1}{1-x}$ results in summing up the coefficients $a_n$
-\begin{array}{crl}
- (a_n)_{n\geq 0}\qquad &\qquad A(x)=&\sum_{n=0}^{\infty}a_nx^n\\
- \left(\sum_{k=0}^{n}a_k\right)_{n\geq 0}\qquad&\qquad\frac{1}{1-x}A(x)=&\sum_{n=0}^{\infty}\left(\sum_{k=0}^{n}a_k\right)x^n
- \end{array}<|endoftext|>
-TITLE: Qualitative behavior of critical point at the origin
-QUESTION [6 upvotes]: Determine the qualitative behavior of the critical point at the origin for the following system for all possible values of $a$:
-$\dot{x} = -y + ax(x^2+y^2)$
-$\dot{y} = x + ay(x^2+y^2)$
-My question: I attempted to use the Local Center Manifold theorem to show that the center manifold $x = h(y) = a_0 + a_1y+a_2y^2 +...$, where $a_0, a_1, \ldots$ are parameters to be determined and $h(0) = h'(0) = 0$, must be $0$. To do this, assume $h(y)\neq 0$ for $y\neq 0$. Now, we replace $x$ by $h(y)$ in the system above, and from the identity $\dot{x} = \dot{y}\ h'(y)$, we get the following equation for all values of $a$:
-$-y + a(a_0 + a_1y+ a_2y^2 + ...) (a_0^2+a_1^2y^2 + 2a_0a_1y+2a_0a_2y^2 +...+y^2) = (a_1 + 2a_2y + 3a_3y^2 + ...)[a_0 + a_1y+ a_2y^2 + ... + ay(a_0^2 + a_1^2y^2 + 2a_0a_1y + 2a_0a_2y^2 + ... +y^2)]$
-Since $h(0) = h'(0) = 0$, we instantly get $a_0 = a_1 = 0$. But then the $-y$ term on the LHS of the equation above is never cancelled with anything, so the equation, after matching term by term, cannot be true for every $y\neq 0$. Thus $h(y)$ does not exist in this case.
-Therefore, $h(y) = 0$ is the only choice, which implies $x = 0$. But if this is the case, then $0 = -y$, so $y = 0$ as well. Thus the critical point is a saddle point? Is this a correct conclusion?
-
-REPLY [2 votes]: I know this is not the method you were going for, but I could not resist offering another possible solution.
-According to the Wikipedia article on Lyapunov Stability:
-
-Lyapunov, in his original 1892 work, proposed two methods for
 demonstrating stability. The first method developed the solution in
 a series which was then proved convergent within limits. The second
 method, which is almost universally used nowadays, makes use of a
 Lyapunov function $V(x)$ which has an analogy to the potential
 function of classical dynamics. It is introduced as follows for a
 system $\dot{x} = f(x)$ having a point of equilibrium at $x=0$.
 Consider a function $V : \mathbb{R}^n \rightarrow \mathbb{R}$
 such that:
-
-$V(x)=0$ if and only if $x=0$
-$V(x)>0$ if and only if $x \ne 0$
-$\dot{V}(x) = \frac{d}{dt}V(x) = \sum_{i=1}^n\frac{\partial V}{\partial x_i}f_i(x) \le 0$ for all values of $x$ (negative semidefinite). Note: for asymptotic stability, $\dot{V}(x)<0$ for $x \ne 0$ is required (negative definite).
-
-
-In your example, a candidate for a Lyapunov function is $V(x,y) = x^2 +y^2$.
-Note: the method in the article, as it stands, will give you the behavior for $a=0$ and $a<0$. 
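-(Indeed, for this candidate the sign of $\dot V$ is determined by $a$ alone: $\dot V = 2x\dot x + 2y\dot y = 2a(x^2+y^2)^2$. A minimal sympy check of this computation; the code is ours, not part of the quoted article:)
-
-    import sympy as sp
-
-    # Compute V-dot along trajectories of the system in the question.
-    x, y, a = sp.symbols('x y a', real=True)
-    xdot = -y + a*x*(x**2 + y**2)
-    ydot = x + a*y*(x**2 + y**2)
-    V = x**2 + y**2
-    Vdot = sp.expand(sp.diff(V, x)*xdot + sp.diff(V, y)*ydot)
-    print(sp.factor(Vdot))   # prints 2*a*(x**2 + y**2)**2
-
-(So for $a<0$ the origin is asymptotically stable and for $a=0$ it is stable; the case $a>0$ needs the instability criterion quoted next.)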
Here is the additional case:
-
-Suppose $X$ is a $C^{1}$ vector field on an open set $\Omega \subset \mathbb{R}^n$, $0 \in \Omega$ is a critical point of $X$, and $V : \Omega \rightarrow \mathbb{R}$ is a continuous function such that
-
-$V (0) = 0$
-there exists $\Omega_{-} \subset \Omega$ such that $\Omega_{-} \cap B_{\delta}(0)\neq \emptyset$ for any $\delta >0$, $V (x) < 0 \ \forall x\in \Omega_{-}$, $V (x) = 0 \ \forall x\in \partial \Omega_{-} \cap B_{\epsilon}(0)$ for some $\epsilon >0$;
-$V$ is strictly decreasing on the parts of orbits that stay in $\Omega$.
-
-Then $0$ is unstable.
-
-In the case of $a>0$, let $V(x,y) = -x^2 -y^2$ and use the theorem above.<|endoftext|>
-TITLE: Aside from the obvious stuff, do the partial functions that solve the quadratic equation have any interesting properties?
-QUESTION [12 upvotes]: Let us define partial functions
-$$f_+,f_- : \mathbb{R} \leftarrow \mathbb{R} \times \mathbb{R} \times \mathbb{R}$$
-so as to return the zeros of the quadratic $ax^2+bx+c$ whenever they exist, such that if $a > 0$, then $f_+(a,b,c)$ is the larger of the two roots, and $f_-(a,b,c)$ is the smaller of the two.
-In particular, we define:
-$$f_+(a,b,c) = \frac{-b + \sqrt{b^2-4ac}}{2a}, \qquad f_-(a,b,c) = \frac{-b - \sqrt{b^2-4ac}}{2a}$$
-So $f_+(a,b,c)$ and $f_-(a,b,c)$ are proper iff $b^2 \geq 4ac$ and $a \neq 0$.
-For example, $f_+(1,-1,-1)$ is the golden ratio.
-
-Question. Apart from the obvious stuff, like
-
-$(2af_+(a,b,c)+b)^2 = b^2-4ac$
-$(2af_-(a,b,c)+b)^2 = b^2-4ac$
-$a(f_+(a,b,c)+f_-(a,b,c))=-b$
-
-whenever both sides of the equation are proper, do $f_+$ and $f_-$ satisfy any other interesting identities and/or relationships to each other and/or to addition and multiplication? For example, can we say anything interesting about $f_+(a+a',b,c)$ or $f_+(aa',b,c)$ or $f_+(f_+(a,b,c),d,e)$, etc?
-Furthermore, does $\mathbb{R}$ equipped with these partial functions and possibly one or two others form an interesting partial algebraic structure in its own right, and live naturally in a well-behaved category of similar such structures?
-
-REPLY [3 votes]: With Viete's Formulas and some simple algebra, we see that $cx^2 + bx + a = 0$ has roots that are the reciprocals of the roots of $ax^2 + bx + c = 0$, so
-$$\left\{f_-(c, b, a), f_+(c, b, a)\right\} = \left\{\frac{1}{f_-(a, b, c)}, \frac{1}{f_+(a, b, c)}\right\}$$
-It should be possible to generate a large number of such relations using similar processes (I'm sure textbooks pertaining to quadratic equations have many such root-transforming problems).
-
-Another example I've worked out:
-By finding an equation whose roots are the squares of the roots of $ax^2 + bx + c = 0$,
-$$\left\{f_{\{-,+\}}(a^2, 2ac - b^2, c^2)\right\} = \left\{\left(f_{\{-, +\}}(a, b, c)\right)^2\right\}$$<|endoftext|>
-TITLE: Give another proof of intermediate value theorem
-QUESTION [6 upvotes]: Give another proof of the intermediate value theorem by completing the
 following argument: If $f$ is a continuous real-valued function on the
 closed interval $[a,b]$ in $\mathbb{R}$ and $f(a)<\gamma < f(b)$ then
 $$f(\sup \{x\in [a,b]: f(x)\le \gamma\})=\gamma \mbox{.}$$
-
-Denote $S=\{x\in [a,b]: f(x)\le \gamma\}$. First note that $\sup S$ exists since $a\in S$ and $S$ is bounded from above by $b$. Since $S$ is closed (because of the continuity of $f$) it must be the case that $s=\sup S \in S$. 
If $f(s)<\gamma$ then there would exist some small $\epsilon>0$ such that $f(s + \epsilon)<\gamma$ and that would mean $s$ is not the supremum of $S$, contradiction. Thus $f(s)=\gamma$.
-Is my reasoning correct?
-
-REPLY [3 votes]: You need to state that $s < b$, which follows from $\gamma < f(b)$ and the definition of $S$: since $b\notin S$ but $s\in S$, we get $s<b$. That's what entitles you to say that there's a positive $\varepsilon$ such that $f(s+\varepsilon) < \gamma$, if $f(s) < \gamma$.
-You should also address the possibility of $f(s) > \gamma$. It's easily handled: there is a sequence $(s_n)$ in $S$ converging to $s$, and $f(s_n)\le \gamma$ for all $n$, so by continuity $f(s) = \lim_n f(s_n) \le \gamma$.<|endoftext|>
-TITLE: Proving $\cot { A+\cot { B+\cot { C=\frac { { a }^{ 2 }+{ b }^{ 2 }+{ c }^{ 2 } }{ 4K } } } } $
-QUESTION [5 upvotes]: For any acute $\triangle ABC$, prove that $\cot { A+\cot { B+\cot { C=\frac { { a }^{ 2 }+{ b }^{ 2 }+{ c }^{ 2 } }{ 4K } } } } $, where $K$ is the area of $\triangle ABC$.
-
-Unfortunately I'm not able to progress in this problem. Any kind help will be appreciated.
-Thank you.
-
-REPLY [3 votes]: Here is a straightforward solution:
-$\cot A+\cot B+\cot C = \frac{\cos A}{\sin A}+\frac{\cos B}{\sin B}+\frac{\cos C}{\sin C}=\frac{(b^2+c^2-a^2)}{2bc\frac{a}{2R}}+\frac{(a^2+c^2-b^2)}{2ac\frac{b}{2R}}+\frac{(a^2+b^2-c^2)}{2ab\frac{c}{2R}}$.
-Now use $\frac{abc}{4R}=K$: the sum equals $\frac{R\left(a^2+b^2+c^2\right)}{abc}=\frac{a^2+b^2+c^2}{4K}$.<|endoftext|>
-TITLE: Why do martingales have same expectation?
-QUESTION [5 upvotes]: Let $X_i, i \geq 1$ be an $(\mathcal{F}_i)_{i \geq 1}$-adapted sequence of random variables.
-It is a martingale if
-$$E[X_{i+1} \mid \mathcal{F}_i] = X_i.$$
-But how do we conclude from this that $E[X_i]=E[X_j]$ for all $i,j$?
-MY IDEA:
-show it by induction on $i$. Let $\mu:=E[X_1]$.
-Then $$E[X_2] = E[E[X_2 \mid \mathcal{F}_1]]=E[X_1]=\mu,$$
-and $$E[X_i]=E[E[X_i \mid \mathcal{F}_{i-1}]]= E[X_{i-1}]=\mu$$
-by induction, using the property that the conditional expectation and the random variable itself have the same expectation.
-Is this correct?
-In the same way one can show that for a submartingale one has $E[X_i] \geq E[X_j]$ for $i \geq j$ and for a supermartingale $E[X_i] \leq E[X_j]$ for $i \geq j$. Is this correct?
-
-REPLY [3 votes]: Note that an equivalent definition is $E[X_n \mid \mathcal F_m] = X_m$ if $n > m$.
-The fact that they are equivalent can be proven in a very similar way as you did. In any case it's a little more intuitive now, as you can just take the expectation of both sides to conclude that $E[X_n] = E[X_m]$ for $n > m$, which is what we wanted.<|endoftext|>
-TITLE: Determining the structure of the quotient ring $\mathbb{Z}[x]/(x^2+3,p)$
-QUESTION [6 upvotes]: I'm interested in the following problem from Artin's Algebra text:
-
-Determine the structure of the ring $\mathbb Z[x]/(x^2 + 3,p)$, where (a) $p = 3$, (b) $p = 5$.
-
-I know that by the isomorphism theorems for rings we can take the quotients successively, and so
-$$\mathbb{Z}[x]/(p) \cong (\mathbb{Z}/p \mathbb{Z})[x] $$
-as the map $\mathbb{Z}[x] \to (\mathbb{Z}/p \mathbb{Z})[x]$ defined by $\sum_{n} a_n x^n \mapsto \sum_{n} \overline{a_n} x^n$ is a surjective ring homomorphism with kernel $(p)$. Thus it remains to study the quotients
-$$(\mathbb{Z}/p \mathbb{Z})[x]/(x^2+3) $$
-for $p \in \{3,5\}$.
-
-If $p=3$, $(x^2+3)=(x^2)$ in $(\mathbb{Z}/3 \mathbb{Z})[x]$, and by using polynomial division all distinct coset representatives can be reduced to the following list of 9 elements
-$$\{0,1,2,x,1+x,2+x,2x,1+2x,2+2x\}. 
$$
-Moreover, it can be shown that the list above gives 9 distinct cosets, as no difference of two distinct elements of the list is a multiple of $x^2$. Since $1$ and $x$ generate two distinct additive groups of order $3$, the additive group of our quotient ring is not cyclic. Elementary group theory then shows
-$$\left((\mathbb{Z}/3 \mathbb{Z})[x]/(x^2)\right)^+ \cong (\mathbb{Z}/ 3 \mathbb{Z})^2 $$
-as additive groups.
-I was then about to conclude that the multiplication on the quotient is compatible with the usual one in $(\mathbb{Z}/3\mathbb{Z})^2$, but this is wrong!
-It can be seen that the quotient is not isomorphic to $(\mathbb{Z}/3\mathbb{Z})^2$ as a ring, because the former contains a nonzero element (represented by $x$) whose square is zero, while the latter contains no such elements.
-
-If $p=5$, a full list of coset representatives is of length 25
-$$\{0,1,2,3,4,x,1+x,2+x,3+x,4+x,2x,1+2x,2+2x,3+2x,4+2x,3x,1+3x,2+3x,3+3x,4+3x,4x,1+4x,2+4x,3+4x,4+4x \} .$$
-And once again, one can see that these represent 25 distinct cosets. Similarly to the $p=3$ case, I've managed to prove that the additive group of this ring is isomorphic to $(\mathbb{Z}/5 \mathbb{Z})^2$.
-
-My questions:
-
-Have I made any mistakes in my argument?
-What exactly am I supposed to do in this question? Determine the number of elements? Write down the tables for addition and multiplication?
-
-Any further information about these quotients will be appreciated, thanks!
-
-REPLY [3 votes]: I would guess the author just wants you to simplify the definition of the rings, $\mathbb Z[x]/(x^2+3,p)$, in the special cases that $p=3,5$.
-If $p=3$, you've seen yourself that the ring is $ \mathbb F_3[x]/(x^2)$, where $\mathbb F_3$ denotes the field with three elements. This is a two-dimensional vector space over $\mathbb F_3$ with multiplication defined by $(a+bx)(c+dx)=ac+(ad+bc)x$.
-For $p=5$, we find that the ring is isomorphic to $\mathbb F_5[x]/(x^2+3)$. Now you can check that $-3$ is not a square modulo $5$, so the polynomial is irreducible. Hence the ring is a quadratic extension of $\mathbb F_5$. But these are all isomorphic, so we have that the ring is isomorphic to $\mathbb F_{5^2}$ (in standard notation). Alternatively, it can be described as $\mathbb F_5[\sqrt{-3}]$, with multiplication defined by $(a+b\sqrt{-3})(c+d\sqrt{-3})=(ac-3bd)+(ad+bc)\sqrt{-3}$, with underlying abelian group isomorphic to $\mathbb F_5^2$.<|endoftext|>
-TITLE: Why is the rational number system inadequate for analysis?
-QUESTION [25 upvotes]: In the very first chapter of Principles of Mathematical Analysis, the author pointed out as follows:
-
-The rational number system is inadequate for many purposes, both as a field and as an ordered set. For instance, there is no rational $p$ such that $p^{2}=2$...
-
-However, considering the fact that the rational numbers do form a field, it seems unclear to me in what respect they are inadequate for a satisfactory discussion of analysis.
-
-REPLY [5 votes]: Why is the rational number system inadequate for analysis?
-Look at the picture below which shows the graph of a "continuous function" (intuitively, a function whose graph is an unbroken whole, without interruption).
-
-Fundamental Property (FP): The points $(a,f(a))$ and $(b,f(b))$ are connected by a "continuous curve". Since the point $(a,f(a))$ is above the $x$-axis and the point $(b,f(b))$ is below the $x$-axis, there is a point $(c,0)$ where the curve crosses the $x$-axis.
-
-Is the Fundamental Property true? Well, at least it should be true. 
According to Michael Spivak:
-
-If the pictures we draw have any connection with the mathematics we do, if our notion of continuous function corresponds to any degree with our intuitive notion, the Fundamental Property has got to be true in our theory. (Spivak's book).
-
-Unfortunate Fact: The rational numbers do form an (ordered) field, but the properties of an ordered field are insufficient to prove the Fundamental Property. Even more, we can prove that the Fundamental Property is not valid in the context of $\mathbb Q$.
-Conclusion: The Fundamental Property is a result that we want to be true because our intuition of continuity says that it should be true. Building our theory of continuity on the basis of the rational numbers, the Fundamental Property fails. So, the rational number system is not adequate for analysis.
-Of course, we can define the fundamental concepts of Analysis (limits, continuity, differentiability, integrability) in the context of $\mathbb Q$ and prove some things. But there are a lot of results that depend on the Fundamental Property. If we abandon it, we can't move on; the development of the theory stops.
-
-
-The above discussion refers to the context of continuity. However, similar arguments apply to other analysis issues. Here is an illustration in the context of integration:
-The development of integration is based on (some variant of) the following idea:
-
-The area of a circular region can be approximated by an $n$-sided inscribed polygon.
-
-Let $A_n$ be the area of the $n$-sided inscribed polygon. As $n$ increases, $A_n$ becomes closer and closer to the area of the circle. (Larson, Stewart)
-
-If we want to formalize this idea, we need to ensure that there exists a number (which we will call "area of the circle") being approximated. So, we need the
-Second Fundamental Property (SFP): every increasing sequence bounded above tends to a limit.
-Unfortunate Fact: the Second Fundamental Property is not valid in the context of $\mathbb Q$.
-Conclusion: Building our theory of integration on $\mathbb Q$, many "well behaved" regions would not have a well defined area. This would not be satisfactory. So, $\mathbb{Q}$ is inadequate for a satisfactory discussion of analysis.
-
-
-Remark: The FP and the SFP are equivalent and characterize the completeness of $\mathbb{R}$. As seen above, it is the absence of this characteristic that makes $\mathbb Q$ inadequate for analysis. The FP is known as the Intermediate Value Theorem and the SFP is known as the Monotone Convergence Theorem.
-Remark 2: In short, my point is the following: The purpose of (elementary real) Analysis is to make the Calculus rigorous. The Calculus was developed on the basis of geometric intuition. In a world where there are only rational numbers, some of these intuitions fail (as shown above). So, in such a world, these intuitions could not be made rigorous but would have to be abandoned. This is the reason why $\mathbb{Q}$ is not appropriate to do Analysis.<|endoftext|>
-TITLE: Bound on variance of function of a random variable
-QUESTION [5 upvotes]: Suppose $0\leq X\leq 1.$ Suppose we are given that $\mathrm{Var}(X)\leq a$ where $a$ is some small constant. 
-
-What are the best upper bounds we can provide on $\mathrm{Var}(f(X))$ if
-a) $f:[0,1]\to\mathbb{R}$ is a Lipschitz function with Lipschitz constant $L$, such as say, $f(x) = x^2$ which has Lipschitz constant $2.$
-b) $f:[0,1]\to\mathbb{R}$ is not Lipschitz but is a Hölder continuous function such as say $f(x) = \sqrt{x}.$
-I am interested in upper bounds that go to zero as $a$ goes to zero.
-
-REPLY [7 votes]: In either case, we use the following inequality:
-$$ \operatorname{Var}(f(X)) \leq \Bbb{E}[(f(X) - f(\Bbb{E}X))^2]. $$
-This is easily proved from the inequality $\operatorname{Var}(Y) \leq \Bbb{E}[Y^2]$ with $Y = f(X) - f(\Bbb{E}X)$. Now assume that
-$$|f(x) - f(y)| \leq C|x - y|^{\alpha} \qquad \forall x, y \in [0, 1]$$
-for some $\alpha \in (0, 1]$ and $C \in (0, \infty)$. Then
-$$ \operatorname{Var}(f(X)) \leq C^2 \Bbb{E}[|X - \Bbb{E}X|^{2\alpha}]. $$
-From Jensen's inequality, we have
-$$ \Bbb{E}[|X - \Bbb{E}X|^{2\alpha}] \leq \Bbb{E}[|X - \Bbb{E}X|^{2}]^{\alpha} = \operatorname{Var}(X)^{\alpha}. $$
-Consequently we have
-$$\operatorname{Var}(f(X)) \leq C^2 \operatorname{Var}(X)^{\alpha}. $$<|endoftext|>
-TITLE: $A,B \in M_2(\mathbb C)$ be such that $AB-BA=B^2$ ; then is it true that $AB=BA$?
-QUESTION [5 upvotes]: Let $A,B \in M_2(\mathbb C)$ be such that $AB-BA=B^2$. Then is it true that $AB=BA$?
-
-If we can show $\mathrm{tr}(B)=\det (B)=0$, then we are done by using $B^2-\mathrm{tr}(B) B+\det (B)I_2=0$; but I don't know how to show even that. Please help. Thanks in advance.
-
-REPLY [7 votes]: We prove the following generalized claim:
-
-Claim. Assume that $AB - BA = B^p$ for some $p \geq 2$. Then $AB = BA$.
-
-This is equivalent to proving that $B^p = 0$. To this end, we notice that
-
-$\operatorname{tr}(B^p) = \operatorname{tr}(AB - BA) = 0$ since $\operatorname{tr}(AB) = \operatorname{tr}(BA)$.
-We have $\det(B^p) = 0$. Indeed, if $\det(B^p) \neq 0$, then $B$ is invertible. Multiplying by $B^{-p}$ on the left, we have
-$$B^{-p}AB - B^{1-p}A = I_2.$$
-But this is impossible since
-$$2 = \operatorname{tr}(I_2) \neq \operatorname{tr}((B^{-p}A)B - B(B^{-p}A)) = 0.$$
-
-Applying the Cayley–Hamilton theorem, we find that $B$ is nilpotent:
-$$B^{2p} = \operatorname{tr}(B^p)B^p - \det(B^p)I_2 = 0.$$
-Therefore we must have $B^2 = 0$ (a nilpotent $2\times 2$ matrix satisfies $B^2=0$) and hence $B^p = 0$ since $p \geq 2$. ////
-
-Addendum: Counter-example for $p = 1$. For any $a_1, a_2, b \in \Bbb{C}$ with $b \neq 0$, put
-$$ A = \begin{pmatrix} a_1 & a_2 \\ 0 & a_1 - 1 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & b \\ 0 & 0 \end{pmatrix}. \tag{*} $$
-Then $AB - BA = B$.
-Conversely, if $A, B \in M_2(\Bbb{C})$ are such that $AB - BA = B$ and $B \neq 0$, then $A$ and $B$ can be written as $\text{(*)}$ up to unitary change of basis.
-(Remark. Using the Jordan normal form, it is straightforward that this is true up to change of basis.)
-Indeed, since $\det(B) = 0$, $B$ has rank 1 and we may write
-$$ B = b \begin{pmatrix} u_1 \bar{v}_1 & u_1 \bar{v}_2 \\ u_2 \bar{v}_1 & u_2 \bar{v}_2 \end{pmatrix} $$
-for some $b \neq 0$ and for some unit vectors $u = (u_1, u_2), v = (v_1, v_2)$. Then $\operatorname{tr}(B) = 0$ is equivalent to $\langle u, v \rangle = 0$ and hence $u, v$ are orthogonal. In particular, $B$ corresponds to the linear map
-$$ Bx = b\langle v, x \rangle u. $$
-(Here, the 2nd argument of the Hermitian inner product $\langle \cdot, \cdot \rangle$ is chosen to be linear.) Then the claim follows by rewriting everything w.r.t. 
the orthonormal basis $\{u, v\}$ and solving the equation $AB - BA = B$.<|endoftext|>
-TITLE: Which of the following sets are compact:
-QUESTION [5 upvotes]: Which of the following sets are compact:
-
-$\{(x,y,z)\in \Bbb R^3:x^2+y^2+z^2=1\}$ in the Euclidean topology.
-$\{(z_1,z_2,z_3)\in \Bbb C^3:{z_1}^2+{z_2}^2+{z_3}^2=1\}$ in the Euclidean topology.
-$\prod_{n=1}^\infty A_n$ with the product topology where $A_n=\{0,1\}$ has discrete topology.
-$\{z\in \Bbb C:|\operatorname{Re} z |\leq a \}$ for some fixed positive real number $a$ in the Euclidean topology.
-
-$1$ is closed and bounded and hence compact, $2$ is closed but not bounded and hence not compact.
-$3$ is compact by Tychonoff Theorem and $4$ is not bounded and hence not compact.
-Are these correct?
-
-REPLY [3 votes]: That's correct. However, 1, 2 and 4 need a proof.
-All three sets are closed, being inverse images of a closed set under a continuous function.
-The set in 1 is bounded, because it is contained in $[-1,1]^3$.
-The sets in 2 and 4 are not bounded, because they contain elements of arbitrarily large norm; can you exhibit them?
-Set 4:
-
 Easy: you can take $z=a+bi$ with arbitrary $b$.
-
-Set 2:
-
 Consider $z_3=1$. Then you can take $z_2=iz_1$, for arbitrary $z_1$.<|endoftext|>
-TITLE: Coproducts and products are same in any preadditive category
-QUESTION [5 upvotes]: Here is the proof that coproducts and products are the same in any preadditive category, from the Stacks project.
-
-I have a few questions regarding the above proof.
-I don't understand what they mean by the morphism corresponding to $(0,1)$.
-Also, I don't see how the mapping they get from $Mor(x,w) \times Mor(y,w)$ to $Mor(z,w)$ is actually a bijection.
-
-REPLY [4 votes]: We have the projections $p:x\times y\to x$ and $q:x\times y\to y$, which satisfy the universal property that any pair $f:a\to x,\ g:a\to y$ corresponds to a unique arrow $a\to x\times y$. Now this is applied with $(f,g)=(1,0)$ to obtain $i:x\to z$ and with $(f,g)=(0,1)$ to obtain $j:y\to z$, where $0$ is the zero morphism and $1$ is the identity.
-By this definition, importantly, we have the following equations:
-$$p\circ i=1_x,\quad q\circ i=0,\\ p\circ j=0,\quad q\circ j=1_y$$
-The mapping $\def\Mor{\mathrm{Mor}}\Mor(x,w)\times\Mor(y,w)\to\Mor(z,w)$ is just given above, call it $\Phi\ :=\ (a,b)\mapsto a\circ p\,+\,b\circ q$.
-
-Hint: Use the equations in 1. to find the inverse for $\Phi$.<|endoftext|>
-TITLE: Relation between controllability and stabilization of a system
-QUESTION [5 upvotes]: Suppose I have a control system which is described as:
-$$
-\left\{
-\begin{array}{c}
-\dot{x}(t)=Ax(t)+Bu(t)\\
-y(t)=Cx(t)+Du(t)
-\end{array}
-\right.
-$$
-and I know it is controllable. I use a state feedback $u(t) = -Kx(t)+r(t)$ on it, which makes it:
-$$\dot{x}(t)=(A-BK)x(t)+Br(t)$$
-From what I understand, even if $A$ is not stable (meaning not all the real parts of its eigenvalues are negative), if I use the state feedback it is always possible to make it stable because it is controllable. Why is this true? And how can I see this for $x\in\mathbb{R}^n$? (No full proof is needed, a general direction is good enough.)
-I understand that the requirement for stability is that $\hat{A}=(A-BK)$ should have only eigenvalues with negative real parts, but why is this always possible?
-
-REPLY [3 votes]: For single input systems you can directly calculate the $k^T \in \mathbb{R}^n$ vector such that $A-bk^T$ has a desired characteristic polynomial using Ackermann's formula, if the system is controllable. 
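-(For reference, Ackermann's formula reads $k^T = \begin{pmatrix}0 & \cdots & 0 & 1\end{pmatrix}\mathcal{C}^{-1}\Delta(A)$, where $\mathcal{C}=\begin{pmatrix}b & Ab & \cdots & A^{n-1}b\end{pmatrix}$ is the controllability matrix and $\Delta$ is the desired characteristic polynomial evaluated at $A$. A minimal numpy sketch of ours, a hedged illustration rather than production code:)
-
-    import numpy as np
-
-    def ackermann(A, b, desired_poly):
-        """Gain k (row vector) such that A - b @ k has the monic
-        characteristic polynomial `desired_poly` (highest degree first).
-        Assumes (A, b) is controllable."""
-        n = A.shape[0]
-        # Controllability matrix C = [b, Ab, ..., A^(n-1) b]
-        C = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
-        # Desired characteristic polynomial evaluated at A (Horner's scheme)
-        Delta = np.zeros_like(A)
-        for c in desired_poly:
-            Delta = Delta @ A + c*np.eye(n)
-        e_n = np.zeros((1, n))
-        e_n[0, -1] = 1.0
-        return e_n @ np.linalg.inv(C) @ Delta
-
-    # Example: place the poles of a double integrator at -1 and -2,
-    # i.e. desired polynomial s^2 + 3s + 2.
-    A = np.array([[0.0, 1.0], [0.0, 0.0]])
-    b = np.array([[0.0], [1.0]])
-    k = ackermann(A, b, [1.0, 3.0, 2.0])
-    print(np.linalg.eigvals(A - b @ k))   # approximately [-1, -2]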
To see why it works there is a nice derivation here:
-http://www.cambridge.org/us/features/chau/webnotes/chap9acker.pdf
-For multiple input systems you can select an arbitrary fan-out vector $f \in \mathbb{R}^m$ and calculate $k^T$ for the system $(A, Bf)$, which is a single input system now. This is called the dyadic approach, and $f$ can be selected for some additional design criteria, such as robustness. Then the state feedback gain would be $K = fk^T$.
-There are other approaches as well for selecting $K$ for multi-input systems.<|endoftext|>
-TITLE: Problem on martingale concentration inequality
-QUESTION [6 upvotes]: Let $M_n$ be an $F_n$-measurable martingale, where $(F_n)$ is an increasing family of sub-$\sigma$-fields. Let $D_n = M_{n}-M_{n-1}$, for $n\geq 2$. Let $B \in F_1$ and on the set $B$, $a_k \leq D_k\leq b_k$. Then I have to show that
-for all $t > 0$
-$$P\left(\max_{1\leq k \leq n}|M_k| \ge t \mid B\right) \leq 2\exp{\left(\frac{-2t^2}{\sum_{k\leq n}(b_k -a_k)^2}\right)}$$
-I have tried to replicate the Azuma–Hoeffding inequality proof (https://matthewhr.wordpress.com/2012/12/06/azuma-hoeffding-inequality-2/) with the probability measure $P_B(A) = P(A\cap B)/P(B)$, but got stuck, since it is not clear that the original process remains a martingale with respect to this new measure.
-
-REPLY [2 votes]: Let us denote by $\mathbb E_B$ the (conditional) expectation with respect to $\mathbb P_B$. 
-
-We have for each $k\geqslant 2$ and each $G\in \mathcal F_{k-1}$,
-\begin{align}
-\mathbb E_B\left[\mathbf 1_G\mathbb E_B\left[M_k\mid\mathcal F_{k-1}\right]\right]&=\mathbb E_B\left[\mathbf 1_GM_k\right] &\mbox{by definition of conditional expectation}\\
-&=\mathbb E\left[\mathbf 1_{B\cap G}M_k\right]/\mathbb P(B)&\mbox{by definition of }\mathbb P_B\\
-&=\mathbb E\left[\mathbf 1_{B\cap G}\mathbb E\left[M_k\mid\mathcal F_{k-1}\right]\right]/\mathbb P(B)&\mbox{because }B\in \mathcal F_1\subset\mathcal F_{k-1}\mbox{ hence }B\cap G\in \mathcal F_{k-1}\\
-&=\mathbb E\left[\mathbf 1_{B\cap G}M_{k-1}\right]/\mathbb P(B)&\mbox{because }\left(M_n,\mathcal F_n\right)_{n\geqslant 1} \mbox{ is a martingale for }\mathbb P\\
-&= \mathbb E_B\left[\mathbf 1_{G}M_{k-1}\right]&\mbox{by definition of }\mathbb P_B.
-\end{align}
-This proves that $\mathbb E_B\left[M_k\mid\mathcal F_{k-1}\right]=M_{k-1}$, hence we can apply the non-conditional Azuma-Hoeffding inequality.<|endoftext|>
-TITLE: Prove that $0 \leq ab^2-ba^2 \leq \frac{1}{4}$ with $0 \leq a \leq b \leq 1$.
-QUESTION [6 upvotes]: Let $a$ and $b$ be real numbers such that $0 \leq a \leq b \leq 1$. Prove that $0 \leq ab^2-ba^2 \leq \dfrac{1}{4}$.
-
-Attempt
-We can see that $ab^2-ba^2 = ab(b-a)$, so it is obvious that it is greater than or equal to $0$. But how do I show it is also less than or equal to $\dfrac{1}{4}$?
-
-REPLY [5 votes]: First proof
-The part $0 \leq ab^2-ba^2$ is equivalent to $$0 \leq ab(b-a)$$ which is true because $b \geq a \geq 0$.
-For the other part use $b^2 \leq b \leq 1 $ and $a^2 \leq a$ as follows:
-$$ab^2-ba^2 \leq ab-ba^2=b(a-a^2) \leq a-a^2 \leq \frac{1}{4}$$ the last part being equivalent to $\left (a-\frac{1}{2} \right )^2 \geq 0$.
-Second proof
-Denote $f(a,b)=ab^2-ba^2=ab(b-a)$. Now notice that if $d>0$ then:
-$$f(a+d,b+d)=(a+d)(b+d)(b-a) \geq ab(b-a)=f(a,b)$$
-In this way we can increase $b$ to $1$ by choosing $d=1-b$ (this technique is usually called smoothing). So it suffices to prove that:
-$f(a+d,1) \leq \frac{1}{4}$ which, as in the previous proof, is equivalent to $$\left (a+d -\frac{1}{2} \right )^2 \geq 0$$

-REPLY [3 votes]: There are more generic ways of solving such problems.
-Since $f(a,b) = ab^2 - ba^2$ is continuous and differentiable, and the domain $0 \le a \le b \le 1$ is a compact simplex, it attains its max and min at one of the following points:
-
-A point where $0 < a < b < 1$ and $\nabla f (a,b) = 0$, i.e. $f_x(a,b) = f_y(a,b) = 0$
-A point where $a = 0$, $0 < b < 1$, and $f_y(a,b) = 0$
-A point where $0 < a < 1$, $b = 1$, and $f_x(a,b) = 0$
-A point where $0 < a = b < 1$, and $f_x(a,b) + f_y(a,b) = 0$
-A point where $a = b = 0$ or $a = b = 1$.
-
-Calculate that $f_x(a,b) = b^2 - 2ab$, $f_y(a,b) = 2ab - a^2$.
-There are no points of type (1).
-Type (2) points are anything on the line, and here $a = 0$ so $f(a,b) = 0$.
-Type (3) points satisfy $b = 1$, so $1 - 2a = 0$, so $a = \frac12$, and here $f(a,b) = \frac12 - \frac14 = \frac14$. Type (4) points all have $f(a,b) = 0$.
-Finally, the two type (5) points are $f(0,0) = 0$ and $f(1,1) = 0$.
-Thus the minimum of $f$ is $0$ and the maximum is $\frac14$, on this domain.<|endoftext|>
-TITLE: What's an example of a vector space that doesn't have a basis if we don't accept Choice?
-QUESTION [12 upvotes]: I've read that the fact that all vector spaces have a basis is dependent on the axiom of choice; I'd like to see an example of a vector space that doesn't have a basis if we don't accept AoC. 
-
-I'm also interested in knowing why this happens.
-Thanks!
-
-REPLY [10 votes]: Classically, they can be pretty simple: that is,
-
-We can have a model $M$ of ZFC, with an inner model $N$ of ZF, such that there is a $\mathbb{Z}/2\mathbb{Z}$-vector space $V\in N$ such that $(i)$ $N\models$"$V$ has no basis" and $(ii)$ $M\models$"$V\cong\bigoplus_{\omega}\mathbb{Z}/2\mathbb{Z}$".
-
-Of course, inside $N$ this characterization of $V$ won't be visible.
-
-I almost forgot the classic: $\mathbb{R}$, as a vector space over $\mathbb{Q}$! I'd argue this is "more complicated" than the one above in certain senses, but in others it's more natural.
-
-As to why this happens: basically, consider a "sufficiently large" vector space $V$ with lots of automorphisms. Then, starting in a universe $M$ of ZFC which contains $V$, we can build a forcing extension $M[W]$, where $W$ is a "generic copy" of $V$. That is, $W$ is isomorphic to $V$, but all twisted around in a weird way. Now, we can take a symmetric submodel $N$ of $M[W]$ - this is a structure between $M$ and $M[W]$, consisting (very roughly) of those things which can be defined from $W$ via a definition which is invariant under "lots" of automorphisms of $W$ - specifically, invariant under every automorphism fixing some finite set of vectors! But as long as $W$ is sufficiently nontrivial, no basis (or, in fact, infinite linearly independent set) is so fixed.
-Of course, I've swept a lot under the rug - what's a forcing extension? what exactly is $M[G]$? and why does it satisfy ZF? - but this is a rough intuitive outline.
-
-Actually, in a precise sense, this is the wrong answer: I've just argued that it's consistent with ZF that some vector spaces not have bases. But, in fact, Blass showed that "every vector space has a basis" is equivalent to the axiom of choice! See http://www.math.lsa.umich.edu/~ablass/bases-AC.pdf, which is self-contained. Blass' construction actually proves that "every vector space has a basis" implies the axiom of multiple choice - that from any family of nonempty sets, we may find a corresponding family of nonempty finite subsets (so, not quite a choice function); over ZF this is equivalent to AC (this uses the axiom of foundation, though).
-Blass argues roughly as follows. Start with a family $X_i$ of nonempty sets; wlog, disjoint. Now look at the field $k(X)$ of rational functions over a field $k$ in the variables from $\bigcup X_i$; there is a particular subfield $K$ of $k(X)$ which Blass defines, and views $k(X)$ as a vector space over $K$. Blass then shows that a basis for $k(X)$ over $K$ yields a multiple choice function for the family $\{X_i\}$.
-So now the question, "How can some vector spaces fail to have bases?" is reduced (really ahistorically) to, "How can choice fail?" And for that, we use forcing and symmetric submodels (or HOD-models, which turn out to be equivalent but look very different at first) as above.
-
-REPLY [6 votes]: If we only assume $\sf\neg AC$, then there is no hope for us to find a specific example, because it might be that such vector spaces have cardinalities well beyond what we can describe.
-However, if we assume some stronger negation of $\sf AC$, for example the axiom of determinacy, then one example would be $\Bbb R$ considered as a vector space over $\Bbb Q$.
-Of course, $\sf AD$ is a bit of an overkill here. 
Nonexistence of a Hamel basis for $\Bbb R$ follows already from "all sets of reals are measurable", and I believe even "all sets of reals have the Baire property", the latter being equiconsistent with $\sf ZF$. Hence it is consistent with $\sf ZF$ that $\Bbb R$ as a vector space over $\Bbb Q$ has no basis.
-As for "why this happens", let me show that under the measurability assumption there is no such basis. Under this assumption, it's clear that every function $\Bbb R\rightarrow\Bbb R$ is measurable. Now it can be proven that every measurable function $f:\Bbb R\rightarrow\Bbb R$ which satisfies $f(x+y)=f(x)+f(y)$ for every $x,y$ must be in fact linear. This is a result due to Sierpiński I believe (I would be grateful if someone could post a reference in a comment). However, if we had a Hamel basis, we could easily construct a function satisfying this condition but not linear: we can arbitrarily assign values of $f$ to elements of the basis and then uniquely extend it to the whole of $\Bbb R$, so e.g. if we define $f$ to be zero on all but one element of the basis (and nonzero there), we get the desired function, which we, however, have proven can't exist.<|endoftext|>
-TITLE: How to integrate $\int\limits_0^1 \left(-1\right)^{^{\left\lfloor\frac{1}{x}\right\rfloor}} dx$?
-QUESTION [11 upvotes]: As my title says, I need help integrating with floor functions,
-$$\int\limits_0^1 \left(-1\right)^{^{\left\lfloor\frac{1}{x}\right\rfloor}} dx$$
-What does this even mean exactly? How would I approach this?
-
-REPLY [4 votes]: One may write
-\begin{align*}
-\displaystyle \int_{0}^{1} \left(-1\right)^{\large ^{\left\lfloor\frac{1}{x}\right\rfloor}} \mathrm{d}x
-&= \sum_{k=1}^{\infty}\int_{\frac{1}{k+1}}^{\frac{1}{k}} \left(-1\right)^{\large ^{\left\lfloor\frac{1}{x}\right\rfloor}} \mathrm{d}x \\
-&= \sum_{k=1}^{\infty} \int_{k}^{k+1} \left(-1\right)^{\large ^{\left\lfloor u\right\rfloor}} \: \frac{\mathrm{d} u}{u^{2}} \\
-&= \sum_{k=1}^{\infty} \int_{k}^{k+1} \left(-1\right)^{{k}} \: \frac{\mathrm{d} u}{u^{2}} \\
-&= \sum_{k=1}^{\infty}\frac{\left(-1\right)^{{k}}}{k (k+1)} \\
-&= \sum_{k=1}^{\infty}\frac{\left(-1\right)^k}{k}+\sum_{k=1}^{\infty}\frac{\left(-1\right)^{k+1}}{k+1}\\
-&=-\log 2-\log 2+1
-\end{align*} where we have used the standard identity
-$$
-\log(1+x)=-\sum_{k=1}^{\infty}\frac{\left(-1\right)^k}{k}x^k, \quad |x|<1,
-$$ when $x \to 1^-$ (via Abel's theorem).
-Finally,
-
-$$\int_{0}^{1} \left(-1\right)^{\large ^{\left\lfloor\frac{1}{x}\right\rfloor}} \mathrm{d}x
-= 1-2 \log 2.
-$$<|endoftext|>
-TITLE: Past open problems with sudden and easy-to-understand solutions
-QUESTION [126 upvotes]: What are some examples of mathematical facts that had once been open problems for a significant amount of time and were thought hard or unsolvable by contemporary methods, but were then unexpectedly solved thanks to some out-of-the-box flash of genius, and the proof is actually short (say, one page or so) and uses elementary mathematics only?
-
-REPLY [3 votes]: The proof that if $f$ has absolutely convergent Fourier series and is never zero, then its inverse $\frac{1}{f}$ also has an absolutely convergent Fourier series.
-Wiener gave a proof in 1932. Gelfand (1941) later developed the theory of Banach algebras to provide an elementary proof.<|endoftext|>
-TITLE: Weak form of Dirichlet's theorem.
-QUESTION [9 upvotes]: Dirichlet's Theorem on arithmetic progressions is often stated as something like:
-
-Every arithmetic progression where the first term and the difference are coprime contains infinitely many primes. 
-
-But it can be rewritten as:
-
-If $(a,m) = 1$ then there are infinitely many primes $p$ such that $p\equiv a\pmod m$.
-
-I'm trying (fruitlessly) to come up with an elementary proof of its weak version.
-
-If $(a,m) = 1$ then there is at least one prime $p$ such that $p\equiv a\pmod m$.
-
-Note that if this is stated in terms of arithmetic progressions then it would not be a weak version (once you find one prime, consider the sequence with difference $m$ and first term $p + m$).
-Any ideas?
-
-REPLY [6 votes]: Let $a$ be a non-prime, and suppose we know that there is a prime $p$ congruent to $a$ modulo $n$ for every $n$ relatively prime to $a$. Then for every positive integer $k$, there is a prime congruent to $a$ modulo $m^k$. From this it follows that there are infinitely many primes congruent to $a$ modulo $m$ (since $p\equiv a\pmod{m^k}$ and $p\neq a$ force $|p-a|\geq m^k$, the primes so produced grow without bound).
-So for non-primes $a$, the weak version is equivalent to Dirichlet's Theorem. For primes $a$ the weak version is trivial. But from the assumption that for all $n$ relatively prime to $a$, there is a prime other than $a$ congruent to $a$ modulo $n$, we can again derive Dirichlet's Theorem. And if we are allowed to let $a$ vary, we can find a non-prime $a'$ of the form $a+qm$, and again conclude that there are infinitely many primes congruent to $a$ modulo $m$.<|endoftext|>
-TITLE: Calculate $\int_{0}^{\pi} \frac{x}{a-\sin{x}}dx , \quad a>1$
-QUESTION [7 upvotes]: I have trouble calculating this integral.
-I tried integration by parts and trigonometric substitutions.
-$$\int_{0}^{\pi} \frac{x}{a-\sin{x}}dx , \quad a>1$$
-
-REPLY [2 votes]: General Case
-In these types of definite integrals, never forget to use this general identity
-$$I=\int_{a}^{b}f(x)dx=\int_{a}^{b}f(a+b-x)dx$$
-This can be proved by the substitution $x \to a+b-x$. Then one writes the definite integral as the average of the two expressions above to obtain
-$$I=\int_{a}^{b} \frac{1}{2}[f(x)+f(a+b-x)]dx$$
-Next, the magic happens since you can find the primitive of $g(x)=\frac{1}{2}[f(x)+f(a+b-x)]$ but not that of $f(x)$ or $f(a+b-x)$. This is usually due to the simpler form of $g(x)$ in comparison with $f(x)$ or $f(a+b-x)$ as some expression has cancelled or disappeared in $g(x)$.
-
-Your Example
-In your example (note that the $a$ in the integrand is a parameter, not the lower limit) we have
-$$\begin{align}
-a &= 0 \\
-b &= \pi \\
-f(x) &= \frac{x}{a-\sin(x)} \\
-f(a+b-x) &= \frac{\pi-x}{a-\sin(\pi-x)} = \frac{\pi-x}{a-\sin(x)} \\
-g(x) &= \frac{\pi}{2} \frac{1}{a-\sin(x)}
-\end{align}$$
-Can you see the cancellation that is happening in $g(x)$? Then the integral becomes
-$$I=\frac{\pi}{2}\int_{0}^{\pi} \frac{1}{a-\sin(x)} dx$$
-
-Case $|a| \gt 1$
-
-Then you can find by the tangent half angle substitution $u=\tan(\frac{x}{2})$ that
-$$F(x)=\int \frac{dx}{a-\sin(x)} = \frac{2}{\sqrt{a^2-1}} \arctan\left(\frac{a \tan(\frac{x}{2})-1}{\sqrt{a^2-1}}\right) + C$$
-As you can see this formula is valid for $|a| \gt 1$. Hence, the final result will be
-
-$$I=\frac{\pi}{\sqrt{a^2-1}} \left(\arctan\left(\frac{1}{\sqrt{a^2-1}}\right)+\frac{\pi}{2}\right)$$
-
-
-Case $|a| \lt 1$
-
-I will leave this case as an exercise for you. The procedure is the same but just the $F(x)$ will be different. However, $F(x)$ is obtained with the same technique for substitution.
-
-Case $|a|=1$
-
-This is another case which should be handled separately. 
The $F(x)$ in this case is the simplest one and is obtained with the same techniques.<|endoftext|> -TITLE: Conformal mapping from square to disk as inverse of hypergeometric function -QUESTION [6 upvotes]: I'd like to write a little program that transforms a fractal generated in the square $(-1,1)^2\subset\mathbb C$ conformally to the unit disk $|z|<1$. I know that conformal mappings from the unit disk to polygons can be described by Schwarz-Christoffel-Transformations, and with the help of several articles I finally came up with the hypergeometric function $e^{\pi/4 i}z\; _2F_1(1/2,1/4,5/4,z^4)$ that maps - as said - the unit disk conformally to the square $(-1,1)^2$. I used Mathematica to plot this function, and the result is perfectly fine. -My question is now: What is the inverse of this function? -There is a wikipedia article that gives a similar definition of the inverse in terms of Jacobi's Elliptic Function $cn$, but this doesn't work with my plotting. I think I have to adjust this only a bit to get what I want, but I was not able to so far. -Thx for your help in advance! - -REPLY [2 votes]: The inverse can be expressed in terms of the Weierstrass $\wp$--function for the lattice with periods $1$ and $i$. See the discussion here. -If I may ask - which articles were you reading? I really like that expression for the SC function in terms of the hypergeometric function.<|endoftext|> -TITLE: Ordinary generating function of powers of 2 -QUESTION [5 upvotes]: Is there a good closed form expression for the generating function of the formal power series -$$ -A(z) := \sum_{n=0}^\infty z^{2^n} = z + z^2 + z^4 + z^8 + z^{16} + \cdots. -$$ -Is there a tractable way to retrieve the coefficient of $z^m$ in powers of $A(z)$, say in $A(z)^k$ for $k \geq 1$? Thanks. - -REPLY [5 votes]: The value $A(1/2)=\kappa$ is known as the Kempner number, and was proven transcendental in 1916. The paper "The Many Faces of the Kempner Number", by Adamczewski, may provide some insight for you.<|endoftext|> -TITLE: If a field $F$ is an algebraic extension of a field $K$ then $(F:K)=(F(x):K(x))$ -QUESTION [8 upvotes]: Suppose $K$ is a field and $F$ is an algebraic extension of some degree $n=(F:K)$. It is stated that the field of rational functions $F(x)$ is in fact an algebraic extension of the field $K(x)$ and moreover $(F(x):K(x))=n$. - -How do I approach this exercise? I'm new in this area so any help would be greatly appreciated! - -REPLY [4 votes]: Let $a_1,a_2,\dots,a_n$ be a basis for $F$ over $K$. Let's first show that they are linearly independent as elements of $F(x)$ over $K(x)$. So assume -$$ -\sum_{i=1}^n a_i\frac{f_i(x)}{g(x)}=0 -$$ -where $f_1,\dots,f_n,g\in K[x]$ (it's not restrictive to assume the denominators are the same). This implies -$$ -\sum_{i=1}^n a_if_i(x)=0 -$$ -If we have $f_i(x)=\sum_{j=0}^k b_{ij}x^j$, we deduce -$$ -\sum_{i=1}^n a_ib_{ij}=0,\quad j=0,1,\dots,k -$$ -so all polynomials are $0$. -Now the task is to show that $F(x)=K(x)[a_1,a_2,\dots,a_n]$ and we can reduce this to showing that, if $a$ is algebraic over $K$, then -$$ -K[a](x)=K(x)[a] -$$ -One inclusion is obvious, namely $K(x)[a]\subseteq K[a](x)$. In order to show the converse inclusion we just need to prove that if $f(x)\in K[a](x)$, then $f(x)\in K(x)[a]$, because the latter is a field. This is trivial, by considering polynomials of the form $cx^m$, where $c\in K[a]$. 
Just write $c=d_0+d_1a+\dots+d_ra^r$, where $r+1$ is the degree of $a$ over $K$, and then $cx^m=\sum_{j=0}^r a^j(d_jx^m)\in K(x)[a]$.<|endoftext|>
-TITLE: Integral $\int_0^{1/2}\arcsin x\cdot\ln^2x\,dx$
-QUESTION [9 upvotes]: I'm interested in this integral
-$$\int_0^{1/2}\arcsin x\cdot\ln^2x\,dx$$
-My idea was to first evaluate
-$$\int_0^{1/2}\arcsin x\cdot x^a\,dx=\frac{2^{-a}\,\pi-6B_{1/4}\left(\frac{a}{2}+1,\frac{1}{2}\right)}{12\,(a+1)}$$
-in terms of the incomplete Beta function, and then find the second derivative at $a=0$, but it ended up with ugly derivatives of hypergeometric functions w.r.t. their parameters for which I did not know how to find a closed form expression. Could you suggest a different way to evaluate this integral?

-REPLY [8 votes]: There is a closed-form antiderivative corresponding to this integral:

-$$\int\arcsin x\cdot\ln^2x\,dx= -\sqrt{1-x^2}\cdot\left(\ln^2x-4\ln x+6\right)\\+x\cdot\arcsin x\cdot\left(\ln^2x-2\ln x+2\right)-\ln^2\alpha+\left(\ln4-4\right)\cdot\ln\alpha-\operatorname{Li}_2\left(-\alpha^{-2}\right),$$

-where
-$$\alpha=\frac{1+\sqrt{1-x^2}}x,$$
-which can be proved by differentiation. It enables us to evaluate the definite integral over any interval.<|endoftext|>
-TITLE: Functions such that $\sum \frac{1}{x_n}$ diverges $\Longrightarrow \sum \frac{1}{x_nf(x_n)}$ diverge
-QUESTION [7 upvotes]: Is there an $f : \mathbb{R}_+ \to \mathbb{R}_+$ such that:

-$f$ is an increasing bijective map of $\mathbb{R}_+$ onto itself.
-For all series $\displaystyle\sum_n \frac{1}{x_n}$ where $(x_n)$ is increasing and positive:
-$$\sum \frac{1}{x_n} \; \text{diverges}\; \Longrightarrow \sum \frac{1}{x_nf(x_n)} \; \text{diverges}$$

-(From a French oral examination)

-REPLY [10 votes]: Edit: What I wrote is not quite right. Or maybe it's right. See comment at bottom.
-There is no such $f$.
-If $f$ is an increasing bijection then for each $j$ there exists $y_j>j$ such that $f(y_j)>j$. Let $N_j$ be the smallest positive integer with $$N_j\frac1{y_j}>\frac1j.$$ Since $1/y_j<1/j$ it follows that $$N_j\frac1{y_j}\le \frac2j.$$
-Let $(x_n)$ be the sequence consisting of $y_1$ repeated $N_1$ times, followed by $y_2$ repeated $N_2$ times, etc. Then
-$$\sum_n\frac1{x_n}=\sum_j N_j\frac1{y_j}>\sum_j\frac1j=\infty,$$ while $$
\sum_n\frac1{x_nf(x_n)}=\sum_jN_j\frac1{y_j\,f(y_j)}\le2\sum_j\frac1{j^2}<\infty.$$
-Comment: I missed the condition that the $x_n$ are supposed to be increasing. We can certainly make the $y_j$ increasing, in which case the $x_n$ are non-decreasing, which is what "increasing" often means. If we want the $x_n$ to be strictly increasing, start with $y_j$ strictly increasing, define $x_n$ as above, and then modify $x_n$ a tiny bit to make the sequence strictly increasing. If the modification is small enough this will not change the convergence or divergence of the two series. (For example, given $x_n$ as above we can certainly find a strictly increasing sequence $(x_n')$ such that $x_n\le x_n'\le 2x_n$; note that $f(x_n')\ge f(x_n)$.)<|endoftext|>
-TITLE: How to interpret a line equation in 4-point geometry (affine plane of order 2).
-QUESTION [5 upvotes]: I am currently reading "Basic Notions of Algebra" by Igor Shafarevich. In the first chapter, an example of a coordinatization of the 4-point geometry is given.
-Set of axioms:

-Through any two distinct points there is one and only one line.
-Given any line and a point not on it, there exists one and only one other line through the point and not intersecting the line (that is, parallel to it).
-There exist three points not on any line.

-In this geometry we have 4 points A, B, C, D and 6 lines AB, CD; AD, BC; AC, BD. The families of parallel lines are separated by semicolons.
-Let $\Bbb{0,1}$ be symbols with operations $+$ and $\times$ such that
-$$
-\begin{array}{cc}
-\begin{array}{c|cc}
-\text{+} & 0 & 1\\
-\hline
-0 & 0 & 1\\
-1 & 1 & 0\\
-\end{array}
-&
-\begin{array}{c|cc}
-\times & 0 & 1\\
-\hline
-0 & 0 & 0\\
-1 & 0 & 1\\
-\end{array}
-\end{array}
-$$
-The pair of quantities 0 and 1 with operations defined on them as above serve us in coordinatising the "geometry". For this, we give points coordinates (X, Y) as follows: A = (0, 0), B = (0, 1), C = (1, 0), D = (1, 1).
-It is easy to check that the lines of the geometry are then defined by the linear equations:
-$$
-\begin{array}{ccc}
-& AB: 1X = 0; & CD: 1X = 1; & AD: 1X + 1Y = 0;\\
-& BC: 1X + 1Y = 1; & AC: 1Y = 0; & BD: 1Y = 1;\\
-\end{array}
-$$
-The question is: how should one interpret these equations?
-Any suggestions will be appreciated.

-REPLY [3 votes]: The equations describe the lines; that is, if a point satisfies the equation then it is on the line. For example the line $X = 0$ is satisfied by all points $(X,Y)$ with $X = 0$ (with the coordinates $X$ and $Y$ coming from $\mathbb{F}_{2}$, the finite field of order 2). This gives an algebraic way to describe the lines.
-You can think of the lines as having a slope of either $0$, $1$, or $\infty$ (the $X$ coefficient divided by the $Y$ coefficient); these slopes determine your parallel classes. This should let you go from a pair of points to the equation by determining a slope and then using a point to get the remaining value to describe the line as $AX+BY=C$.<|endoftext|>
-TITLE: What are some group representations of the Rubik's cube group?
-QUESTION [12 upvotes]: The Rubik's cube group corresponds to valid sequences of moves of the Rubik's cube. What are some group representations of this group (with respect to finite dimensional vector spaces over finite fields)?
-Ideally, I am looking for embeddings. I know that you can make a representation of the symmetric group $S_{48}$ on a 48-dimensional vector space, and then embed the Rubik's cube group into that. Can you make a representation based on a lower dimensional vector space?
-(My group theory and linear algebra are a little rusty. Feel free to edit this to make more sense.)

-REPLY [7 votes]: I think $20$ is the smallest degree of a faithful representation of the Rubik cube group, certainly in characteristic $0$ or characteristic coprime to the group order, and probably over any field. As Henning Makholm commented, there exist faithful representations of degree $20$, so we just need to show that this is the smallest degree possible.
-The Rubik cube group contains a subgroup $H = H_1 \times H_2$, where $H_1$ and $H_2$ have the structures $H_1 = 2^{11}:A_{12}$ and $H_2 =3^7:A_8$.
-Now the only nontrivial proper normal subgroups of $H_1$ are its centre $M$ of order $2$, and an elementary abelian group $N$ of order $2^{11}$. In particular, $M$ is its unique minimal normal subgroup, so a minimal degree faithful representation of $H_1$ must be irreducible. Its restriction to $N$ cannot be homogeneous (since $N$ is abelian but not cyclic), and its homogeneous components are permuted by $A_{12}$, so there must be at least $12$ of them.
-So the smallest degree of a faithful representation of $H_1$ is $12$, and similarly it is $8$ for $H_2$.
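-Two standard facts about representations of a direct product are used next; I record them here for reference (this is an added note, not part of the original answer). Over a splitting field, every irreducible representation of $H_1\times H_2$ is an outer tensor product $\chi_1\otimes\chi_2$ of irreducibles of the factors, and
-$$\deg(\chi_1\otimes\chi_2)=\deg(\chi_1)\,\deg(\chi_2),\qquad \ker(\chi_1\otimes\chi_2)\supseteq \ker(\chi_1)\times\ker(\chi_2),$$
-so a faithful irreducible representation of $H_1\times H_2$ requires both tensor factors to be faithful.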
By the theory of representations of direct products, the smallest degree of a faithful irreducible representation of $H$ is $12 \times 8 = 96$. Since $H$ has exactly two minimal normal subgroups, the only way we could improve on that is with a representation with two constituents having different minimal normal subgroups in their kernels, and doing that results in a faithful representation of degree (at least) $20$.<|endoftext|>
-TITLE: Does logging infinitely converge?
-QUESTION [7 upvotes]: Trying to evaluate $$\ln(\ln(\ln(\ln(\cdots\ln(x)\cdots))))$$ for some fixed $x$ produces a complex answer that appears to converge, at least sometimes.
-So I want a proof that this converges for either some $x$, no $x$, or all $x$.
-If it converges for all $x$ or some $x$, what does it converge to?
-If it diverges, is there a way we can evaluate it like we evaluate diverging sums?
-And after all of that, does it appear to converge to the same value, no matter what $x$ value we start with?
-I know $\ln(z)=\ln(|z|)+i\arg(z)$, but I can't repeat this process without a given $z$ (where $z$ is complex).
-A similar post of mine found here does not answer my question and focuses more on the limits, calculus, and infinities.
-This question asks for consideration from a complex-analysis point of view, considering convergence of value in the complex plane.

-REPLY [3 votes]: This answer is formed from numerical research only! Nevertheless I will try to justify my result.
-The sequence $a_n$, defined recursively by
-$$\begin{cases}a_1=x\\a_n=\ln(a_{n-1})\end{cases}$$
-is convergent for all $x$ other than
-$$0,\;1,\;e,\;e^e,\;e^{e^e},\;...$$
-And its limit is given by
-$$\lim_{n\to\infty} a_n \approx 0.318132 + 1.33724i\quad\text{for }\Re(x)\geqslant0$$
-$$\lim_{n\to\infty} a_n \approx 0.318132 - 1.33724i\quad\text{for }\Re(x)<0$$
-These two constants are roots of the equation $$g=\ln(g)$$ which are equal to, respectively, $$-W_{-1}(-1)\quad\text{and}\quad-W_0(-1)$$
-where $W_k$ is the $k$-th branch of the Lambert W-function.<|endoftext|>
-TITLE: Determinant of a finite order matrix
-QUESTION [5 upvotes]: Let $M$ be a $5\times 5$ matrix with real entries. Suppose $M$ has finite order and $\det(M-I_5)\neq 0$. Find $\det(M)$.

-I am trying to do this old algebra qual problem. So far I know that since $M$ is of finite order, say of order $n$ (so $M^n=I_5$), then $1=\det(I_5)=\det(M^n)=[\det(M)]^n$ and so $\det(M)$ is a root of unity. $M$ has real entries and so the determinant is real, but the only real roots of unity are $1$ and $-1$. I don't know how to use the fact that $M$ is $5\times 5$ or that $1$ is not an eigenvalue. Can we perhaps say something about the value of $n$?

-REPLY [3 votes]: Since $M$ has real entries, its non-real eigenvalues must come in complex conjugate pairs. So the eigenvalues of $M$ are some number of copies of $-1$, and then some number of non-real roots of unity coming in conjugate pairs (recall that $1$ is excluded as an eigenvalue by $\det(M-I_5)\neq 0$, and the only real roots of unity are $\pm 1$). Since the conjugate of a root of unity is its inverse, the non-real eigenvalues will cancel out in $\det(M)$. Furthermore, there must be an odd number of $-1$s, since $5$ is odd. So $\det(M)$ must be $-1$.
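-Condensed into one display (an added summary of the computation above, with $r$ the number of conjugate pairs and $5-2r$ copies of $-1$):
-$$\det M = (-1)^{\,5-2r}\prod_{j=1}^{r}\lambda_j\overline{\lambda_j} = (-1)^{\,5-2r}\cdot 1 = -1,$$
-since $5-2r$ is odd and each $\lambda_j\overline{\lambda_j}=|\lambda_j|^2=1$.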
<|endoftext|>
-TITLE: Closed form to an interesting series: $\sum_{n=1}^\infty \frac{1}{1+n^3}$
-QUESTION [10 upvotes]: Intuitively, I feel that there is a closed form to
-$$\sum_{n=1}^\infty \frac{1}{1+n^3}$$
-I don't know why, but this sum has proved really difficult. I attempted manipulating a Mellin transform of the integral representation:
-$$\int_0^\infty \frac{\text{d}x}{1+x^3}=\frac{\pi}{3}\csc \frac{\pi}{3}$$ but to little avail.
-Checking W|A gives the austere solution: $$\frac{1}{3}\sum_{\{x|x^3+1=0\}} x \space\text{digamma}(1-x) $$
-which I completely don't understand. Thank you for any help.

-REPLY [9 votes]: Hint. You may use the following series representation of the digamma function
-$$
-\psi(z+1) + \gamma = \sum_{n=1}^{\infty}\left( \frac{1}n - \frac{1}{n+z}\right).\tag1
-$$ Then your goal is to rewrite the general term of your series in a form allowing to use $(1)$. You may start with
-$$
-\begin{align}
-\frac{1}{1+n^3}=\frac{1}{(n+1)(n-z_0)(n-\bar{z}_0)}
-\end{align}
-$$ where $\displaystyle z_0=\frac{1+i\sqrt{3}}2$, then make a partial fraction decomposition giving

-$$
-\frac{1}{1+n^3}=a_1\left(\frac{1}n - \frac{1}{n+1}\right)+a_2\left(\frac{1}n - \frac{1}{n-z_0}\right)+a_3\left(\frac{1}n - \frac{1}{n-\bar{z}_0}\right). \tag2
-$$

-By summing $(2)$ you get a closed form of your initial series.<|endoftext|>
-TITLE: What is the largest set for which its set of self bijections is countable?
-QUESTION [30 upvotes]: I recently came across a problem which required some knowledge about the self bijections of $\mathbb{N}$, and after looking up how to construct some different bijections I came across the result that the set of self bijections of $\mathbb{N}$ is uncountable.
-And this got me wondering, what is the largest set for which its set of self bijections is countable? This obviously holds true for any finite set, but what is the last example of a set whose set of self bijections is countable?

-REPLY [2 votes]: Let's write $X!$ for the set of bijections $X\to X$.
-It is true that
-$$
-|X| < |Y| \implies | X! | \le | Y! |.
-$$
-However, it is not true that strict inequality always holds. This is independent of set theory (ZFC). See below for details.
-First, though, to address the question: There is no "largest [size of] set for which its set of self bijections is countable".
-For finite $X,Y$, clearly the bijections $X\to Y$ form a subset of $Y^X$, the set of all functions $X\to Y$; so the set of bijections is finite.
-The next largest size is $\aleph_0$, the cardinality of $X = \Bbb N$. The bijections $X!$ from this set to itself have cardinality $2^{\aleph_0}$, as shown below, so already the number of bijections is uncountable. If $Y$ is any larger set, then $|X!| = 2^{|X|} \le 2^{|Y|} = |Y!|$, so the cardinality of the bijections of $Y$ is uncountable too.

-It is not true that $|X| < |Y|$ implies that there are more bijections $Y\to Y$ than there are $X\to X$, for infinite $X,Y$. This is independent of ZFC, so it's not likely to be "obvious". We have:
-$$2^{|X|} \le (\text{# of bijections } X\to X) \le |X|^{|X|} = 2^{|X|}.\tag{*}
-$$
-To see the first inequality, consider the injection
-$$f\mapsto \big((i,x)\mapsto (i+f(x) \bmod 2,\; x)\big) \colon 2^X \to (\text{bijections } 2\times X \to 2\times X),$$ and note that $|2\times X| = |X|$, so $X$ and $2\times X$ have the same number of self-bijections.
-Similarly, (*) holds for $Y$.
-However, in some models of ZFC, there are infinite $X,Y$ with $|X| < |Y|$ but $2^{|X|} = 2^{|Y|}$. In other models, there are no such $X,Y$ and the property is true. Assuming ZFC is consistent, neither is provable.<|endoftext|>
-TITLE: minimum $x^2 + y^2$ on $\frac{(x-12)^2}{16} + \frac{(y+5)^2}{25} = 1 $ ellipse
-QUESTION [5 upvotes]: Given $\frac{(x-12)^2}{16} + \frac{(y+5)^2}{25} = 1$.
-Then the minimum value of $x^2 + y^2 = ?$
-P.S.
My solution: Suppose that $x = 4\cos{\theta}+12$ and $y = 5\sin{\theta}-5$,
-and expand $x^2 + y^2$ to find the minimum value, but I got stuck at the end.
-Thank you for every comment.

-REPLY [3 votes]: Another, maybe not as elegant, way is to solve for $y$, getting $$y=-5\pm\frac54\sqrt{16-\left( x -12\right)^2}$$ and then, since $$y_+(x):=\left(-5+\frac54\sqrt{16-\left( x -12\right)^2}\right)^2\le\left(-5-\frac54\sqrt{16-\left( x -12\right)^2}\right)^2=:y_-(x), $$ find the zero $x_0$ of the derivative of $x^2+y_+(x)$, which is $$-\frac98 x +\frac{25}{2}\left(3+\frac{x-12}{\sqrt{16-(x-12)^2}}\right) \tag{$\star$}$$ and compute $x_0^2+y_+(x_0)$.
-Note that $x_0$ is unique and is approximately $8.345$. The other solution mentioned by Mirko is produced by squaring, which is necessary to solve $(\star)$ exactly, ending up with $$\frac{(x-12)^2}{16-(x-12)^2}=\left(\frac{9}{100}x-3\right)^2,$$ whose solutions are indeed the roots of the quartic found by Mirko.<|endoftext|>
-TITLE: Decompose $x^4 + x^3 + 1$ into irreducible factors over $\mathbb{Z}_2$
-QUESTION [5 upvotes]: Decompose $x^4 + x^3 + 1$ into irreducible factors over $\mathbb{Z}_2$

-I think that the given polynomial is already irreducible over $\mathbb{Z}_2$, and therefore the only irreducible factor is $x^4 + x^3 + 1$ itself? Or am I missing something?

-REPLY [8 votes]: Yes, $x^4+x^3+1$ is irreducible, for it has no roots in $\mathbb{Z}_2$, and the only irreducible quadratic is $x^2+x+1$, which does not divide $x^4+x^3+1$.<|endoftext|>
-TITLE: When can we apply the method of complexifying the integral.
-QUESTION [5 upvotes]: Recently I was scrolling through YouTube, and saw the method of complexifying the integral https://m.youtube.com/watch?v=CpM1jJ0lob8. I tried some integrals out and it worked just fine.
-However, I tried to take it up a notch, and tried finding
-$$\int \frac{e^x}{\cos x}\, dx$$
-which didn't work out. My guess was that it didn't work out because the function we are integrating is discontinuous at some points. So my question is: under what circumstances can we apply the method of complexifying the integral?
-My work:
-$$=\operatorname{Re}\int \frac{e^x}{e^{ix}}\, dx$$
-$$=\operatorname{Re}\int e^{(1-i)x}\, dx$$
-$$=\operatorname{Re}\frac{e^{(1-i)x}}{1-i}+c$$
-With a little more algebra (and verified through wolphy) I get:
-$$\frac{1}{2}e^x(\sin x+\cos x)+c$$
-which looks incorrect, because it is the same as what I got when I evaluated
-$$\int e^x \cos x\, dx$$

-REPLY [2 votes]: $e^x/\cos(x)$ does not have an elementary antiderivative. This can be shown using the Risch algorithm.
-EDIT: The antiderivative can be expressed in terms of the Lerch Phi function:
-$$ i{{\rm e}^{ \left( 1-i \right) x}}{\it LerchPhi} \left( -{{\rm e}^{-2\,ix}},1,1/2+i/2 \right)
-$$
-where $${\it LerchPhi}(z,a,v) = \sum_{n=0}^\infty \dfrac{z^n}{(v+n)^a}$$<|endoftext|>
-TITLE: $\sec\theta+\tan\theta=p$ and $\sec\theta\tan\theta=q$. Eliminate $\theta$ to form an equation between $p$ and $q$.
-QUESTION [10 upvotes]: $\sec\theta+\tan\theta=p$ and $\sec\theta\tan\theta=q$. Eliminate $\theta$ to form an equation between $p$ and $q$.

-$\sec\theta+\tan\theta=p$
-$(\sec\theta+\tan\theta)^2=p^2$
-$\sec^2\theta+\tan^2\theta+2\tan\theta\sec\theta=p^2$
-$\sec^2\theta+\tan^2\theta+2q=p^2$
-$1+2\tan^2\theta+2q=p^2$
-I am stuck here. Please help me. Thanks.
- -REPLY [2 votes]: $$\sec\theta+\tan\theta=p\iff\sec\theta-\tan\theta=\dfrac1p$$
-(using $\sec^2\theta-\tan^2\theta=1$). Now $(\sec\theta+\tan\theta)^2-(\sec\theta-\tan\theta)^2=4\sec\theta\tan\theta$.
-Replace the values of $\sec\theta+\tan\theta,\sec\theta-\tan\theta, \sec\theta\tan\theta$.<|endoftext|>
-TITLE: Homotopy equivalent spaces have homotopy equivalent universal covers
-QUESTION [15 upvotes]: A problem in section 1.3 of Hatcher's Algebraic Topology is

-Let $\tilde{X}$ and $\tilde{Y}$ be simply-connected covering spaces of the path-connected, locally path-connected spaces $X$ and $Y$. Show that if $X \simeq Y$ then $\tilde{X} \simeq \tilde{Y}$. [Exercise 11 in Chapter 0 may be helpful.]

-Exercise 11 in chapter 0 says

-Show that $f:X \to Y$ is a homotopy equivalence if there exist maps $g, h:Y \to X$ such that $fg \simeq 1$ and $hf \simeq 1$. More generally, show that $f$ is a homotopy equivalence if $fg$ and $hf$ are homotopy equivalences.

-These two questions on this site ask about this problem, but one is unanswered and the solution to the other is unclear to me (and may be wrong).
-What I have so far: Given universal covering maps $p: \tilde{X} \to X$ and $q: \tilde{Y} \to Y$, and homotopy inverses $f:X \to Y$ and $g: Y \to X$, we can find a lift $\tilde{f}: \tilde{X} \to \tilde{Y}$ such that $q\tilde{f} = fp$. (In fact, I think there are as many such $\tilde{f}$ as there are elements of $q^{-1}(y)$ for any basepoint $y \in Y$). Similarly we can find $\tilde{g}$ such that $gq = p\tilde{g}$.
-From the homotopy $gf \simeq 1$ we have a unique lift of the homotopy $gfp \simeq p$ starting at $\tilde{g}\tilde{f}$ (note that $p\tilde{g}\tilde{f} = gfp$), but how do we know it ends at $1_{\tilde{X}}$?
-I notice I haven't used exercise 11...
-I'm guessing these are unbased homotopies, since that's generally how Hatcher uses the term.

-REPLY [3 votes]: While Andrew Hanlon gives in his answer what seems to me to be the correct idea (the lift is determined up to a deck transformation), here is an amusing (probably circular) alternative, just for fun.
-We know that there is a lift $\tilde{f} : \tilde{X} \to \tilde{Y}$. So $\tilde{f}$ is a continuous map between two spaces, and since the map $f$ induces isomorphisms on all higher homotopy groups, being a homotopy equivalence, and since a covering map induces isomorphisms on the higher homotopy groups, it follows from the simply-connectedness of universal covers that $\tilde{f}$ induces isomorphisms on all homotopy groups.
-So in the case that $X$ and $Y$ are connected CW-complexes, it follows from Whitehead's theorem that $\tilde{f}$ is a homotopy equivalence.
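-Schematically, the isomorphism chase in the last paragraph is the following (an added summary, for $n\ge 2$):
-$$\pi_n(\tilde X)\xrightarrow{\;p_*\;}\pi_n(X)\xrightarrow{\;f_*\;}\pi_n(Y)\xleftarrow{\;q_*\;}\pi_n(\tilde Y),\qquad \pi_1(\tilde X)=\pi_1(\tilde Y)=1,$$
-where $p_*, f_*, q_*$ are isomorphisms and $q_*\tilde f_* = f_* p_*$ (from $q\tilde f = fp$), so $\tilde f_*$ is an isomorphism on every homotopy group.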
<|endoftext|>
-TITLE: Self-studying Information Geometry
-QUESTION [12 upvotes]: I was recently exposed to the topic of Information Geometry by a friend of mine, and was looking for a good book to begin self-studying this topic. Any suggestions?
-Also, what subject matter would one need to have a handle on to begin self-studying this? I have an undergraduate-level background in real analysis, some basic point-set topology, as well as algebra up to the level of Galois Theory.

-REPLY [3 votes]: The most famous book on the subject is probably:

-Amari, 2007, Methods of Information Geometry

-But there are a few other ones that look quite nice:

-Amari, 2016, Information Geometry and Its Applications
-Murray & Rice, 1993, Differential Geometry and Statistics
-Ay et al, 2017, Information Geometry

-The last one mentioned is quite new.

-This question has been asked quite a few times in different ways on the SE network. Let me link to a few here:

-Applications of information geometry to the natural sciences [MathSE]
-Research situation in the field of Information Geometry [MathOverflow]
-Information geometry tutorial [CrossValidated]
-What is the most beginner-friendly book for information geometry? [CrossValidated]
-Does differential geometry have anything to do with statistics? [CrossValidated]

-For the background/prerequisites, I would say they are:

-Differential geometry: manifold theory (differential forms, connections, etc.) and Riemannian geometry (metric and curvature tensors, geodesics, etc.)

-Probability and statistics: probability distributions, statistical estimation, basic measure theory, and information theory<|endoftext|>
-TITLE: For $\mathfrak{m}$ maximal and principal, there's no ideal between $\mathfrak{m}^2$ and $\mathfrak{m}$
-QUESTION [5 upvotes]: Let $R$ be a commutative ring with unity. If a maximal ideal $\mathfrak{m}$ of $R$ is principal, prove that there is no ideal $I$ with $\mathfrak{m}^2\subsetneq I\subsetneq \mathfrak{m}$.

-I have no idea how to start this one - I can't see why we can say anything specific about $I$.

-REPLY [3 votes]: Hint: you may already know this, but if not, show that $\mathfrak m$ is principal if and only if there exists a surjection of $R$-modules $\pi: R \twoheadrightarrow \mathfrak m$. What is $\pi(\mathfrak m)$?<|endoftext|>
-TITLE: Martingales: Expectation of almost-sure limit
-QUESTION [5 upvotes]: Let $X_n, n\geq 0$ be a martingale. We know that $E[X_n]=E[X_m]$ for all $m, n \geq 0$. Moreover suppose that $X_n \rightarrow X$ P-a.s.
-What do we know about $E[X]$? Is it clear that $E[X]=E[X_0]$?
-What if $X_n$ is a submartingale? Is it clear that $E[X] \geq E[X_0]$? And the analogous result for a supermartingale?
-What if the convergence is not P-a.s. but in $L^p$ for some $p \geq 1$?

-REPLY [3 votes]: Consider the product-martingale
-$$X_n = \prod_{1\le m \le n}Y_m$$
-with $Y, Y_1, Y_2, \dots$ being non-negative, non-degenerate i.i.d. random variables with mean $1$. Then you can prove that $\lim\limits_{n\to\infty}X_n = 0$ almost surely (see here). But, obviously, $$E[X_n]=E\left[\prod_{1\le m \le n}Y_m\right]=\prod_{1\le m \le n}E[Y_m]=1$$ by the independence of the $Y_i$, for any $n\in \mathbb N$, so that $$\lim_{n\to \infty}E[X_n]=1=E[X_1]\neq0=E[X]$$
-Trivially $X_n$ is a submartingale (and a supermartingale), so that $E[X]=0\not\ge 1=E[X_1]$ even for submartingales. Now, if $$X_n\overset{\mathcal L^p}\longrightarrow X$$ (which is the case for example if the Martingale convergence theorem applies) then you know that $$\lim_{n\to \infty}E[|X_n|^r]=E[|X|^r]$$ for all $1\le r\le p$ (this is a direct implication of $\mathcal L^p$ convergence, see here).<|endoftext|>
-TITLE: Simple Random Walk: Hitting time of 1 is a.s. finite
-QUESTION [7 upvotes]: Let $X_i, i \geq 1$ be i.i.d. random variables with $P[X_i=1]=P[X_i=-1]=1/2$ and consider $S_n = X_1 + \dotsc + X_n$ for $n \geq 1$, $S_0=0$, the symmetric simple random walk on $\mathbb{Z}$.
-Let $T_1:=\inf{\{n \geq 0 \,\colon \, S_n = 1\}}$ be the hitting time of $1$.
-How can one see that $T_1 < \infty$ a.s.?

-REPLY [6 votes]: In this case, we can give an easy estimate on the tail probability of $T_1$. Notice that
-$$ \{T_1 = 2n+1\} = \{S_1 \leq 0, \cdots, S_{2n-1} \leq 0, S_{2n} = 0, S_{2n+1} = 1\}. $$
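-An added remark on the count used in the next step: after reflecting the walk, the first $2n$ steps form a nonnegative lattice path of length $2n$ from $0$ back to $0$, and the number of such paths is the Catalan number
-$$C_n=\frac{1}{n+1}\binom{2n}{n}.$$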
Using one of the equivalent characterizations of the Catalan numbers, we can explicitly compute the probability of this event as
-$$ \Bbb{P}(T_1 = 2n+1) = \frac{1}{2^{2n+1}}C_n = \frac{1}{2^{2n+1}(n+1)} \binom{2n}{n}. $$
-From this, we explicitly compute the probability generating function of $T_1$ by
-$$ |z| < 1 \quad \Rightarrow \quad \mathbb{E}[z^{T_1}] = \sum_{n=0}^{\infty} z^n \mathbb{P}(T_1 = n) = \sum_{n=0}^{\infty} \frac{C_n}{2^{2n+1}} z^{2n+1} = \frac{1-\sqrt{1-z^2}}{z}. $$
-Letting $z \to 1^-$ shows that, by the monotone convergence theorem,
-$$ \mathbb{P}(T_1 < \infty) = \lim_{z \to 1^-} \mathbb{E}[z^{T_1}] = \lim_{z \to 1^-} \frac{1-\sqrt{1-z^2}}{z} = 1. $$
-Therefore $\Bbb{P}(T_1 = \infty) = 0$.

-Addendum. Using this, we can also show that
-$$ \mathbb{E}[T_1 z^{T_1}] = \frac{1-\sqrt{1-z^2}}{z\sqrt{1-z^2}}, $$
-and so $T_1$ has infinite expectation: $\Bbb{E}[T_1] = \infty$.<|endoftext|>
-TITLE: Mathematically, how does one find the value of the Ackermann function in terms of n for a given m?
-QUESTION [5 upvotes]: Looking at the Wikipedia page, there's the table of values for small function inputs. I understand how the values are calculated by looking at the table, and how it's easy to see that 5, 13, 29, 61, 125 is $2^{n+3}-3$, but how does one go about deriving this "iterative" formula without pattern identification?
-I started by looking at 61 (Ackermann 3,3) as being $2*(2*(2*(2*1+3)+3)+3)+3$, in which all I'm doing is expanding the recursive formula, but I have no idea how that's simplified to $2^{n+3}-3$ other than by looking at patterns. This is not homework, just curiosity.

-REPLY [4 votes]: $$A(0,n) = n+1 \;\text{(by definition)}$$

-$$A(1,n) \rightarrow A(0,A(1,n-1)) \rightarrow A(1,n-1)+1 \rightarrow A(1,n-2)+2\Rightarrow A(1,0)+n$$
-$$\rightarrow A(0,1)+n \rightarrow 2+n = \color{red}{2+(n+3)-3}$$

-$$A(2,n) \rightarrow A(1,A(2,n-1)) \rightarrow A(2,n-1)+2 \rightarrow A(2,n-2)+4 \Rightarrow A(2,0)+2n$$
-$$\rightarrow A(1,1)+2n \rightarrow 2n+3 = \color{red}{2(n+3)-3}$$

-$$A(3,n) \rightarrow A(2,A(3,n-1)) \rightarrow 2(A(3,n-1)+3)-3 \rightarrow 4(A(3,n-2)+3)-3 $$
-$$\Rightarrow 2^n(A(3,0)+3)-3 \rightarrow 2^n(A(2,1)+3)-3 = 2^n(2^3)-3 = \color{red}{2^{n+3}-3} $$

-$$A(4,n) \rightarrow A(3,A(4,n-1)) \rightarrow 2^{A(4,n-1)+3}-3 \rightarrow 2^{2^{A(4,n-2)+3}}-3 \rightarrow 2^{2^{2^{A(4,n-3)+3}}}-3 $$
-$$\Rightarrow\,(^{n}2)^{A(4,0)+3}-3 \rightarrow (^{n}2)^{A(3,1)+3}-3 \rightarrow (^{n}2)^{2^3}-3 \,=\, \color{red}{{^{n+3}}2-3}$$

-$$\text{Assume}\;A(m,n) = 2[m](n+3)-3,\; \text{and note} \;2[m]2=4 \;\forall m>0$$
-$$A(m+1,0) \rightarrow A(m,1) \rightarrow 2[m]4-3 = 2[m](2[m]2)-3 = \color{red}{2[m+1]3-3}$$
-$$A(m+1,n+1) \rightarrow A(m,A(m+1,n)) \rightarrow 2[m](2[m+1](n+3)-3+3)-3\\
-= 2[m](2[m+1](n+3))-3 = \color{red}{2[m+1](n+4)-3}$$
-$$\mathbf{QED}$$

-Note: a single right arrow represents a single iteration of the Ackermann function, and a double arrow represents many (usually $n$) iterations
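-Collecting the closed forms derived above in one display (an added summary; $2[m]k$ is the square-bracket hyperoperation notation from the induction step):
-$$A(0,n)=n+1,\quad A(1,n)=n+2,\quad A(2,n)=2n+3,\quad A(3,n)=2^{\,n+3}-3,\quad A(m,n)=2[m](n+3)-3.$$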
<|endoftext|>
-TITLE: Prove that $abc$ is a cube of some integer.
-QUESTION [10 upvotes]: Given three integers $a$, $b$ and $c$ such that $\frac{a}{b}+\frac{b}{c}+\frac{c}{a}$ is an integer too, prove that the product $abc$ is a cube. By the way: Merry Christmas! ;)

-REPLY [10 votes]: By dividing by a common factor if there is any, we can assume no prime number divides all of $a,b,c$. Our goal is to show that the exponent of any prime $p$ in the prime decomposition of $abc$ is divisible by $3$.
-Suppose $p$ divides one of the numbers; WLOG let $p\mid a$. Also, let $p^k$ be the greatest power of $p$ dividing $a$.
-Our assumption on the sum of fractions being an integer is just saying that $abc\mid a^2c+b^2a+c^2b$. We see $p\mid abc$ and hence $p\mid a^2c+b^2a+c^2b$ and so $p\mid c^2b$, thus $p\mid b$ or $p\mid c$, but not both (as we assumed). Now we have two cases.
-1) $p\mid b$. Let $p^l$ be the greatest power of $p$ dividing $b$. It's easy to see now that the exponent of $p$ in $abc$ is $k+l$.
-We have $p^{k+l}\mid abc$, so $p^{k+l}\mid a^2c+b^2a+c^2b$, hence $p^{k+l}\mid a^2c+c^2b$. The greatest power of $p$ dividing $c^2b$ is $p^l$ and the greatest power of $p$ dividing $a^2c$ is $p^{2k}$. If these exponents were different, then the greatest power of $p$ dividing the sum of $c^2b$ and $a^2c$ would be $p^{\min\{2k,l\}}$; but $\min\{2k,l\}<k+l$ in either case, contradicting $p^{k+l}\mid a^2c+c^2b$. Hence $l=2k$, and the exponent of $p$ in $abc$ is $k+l=3k$, which is divisible by $3$.
-2) $p\mid c$. This case is handled in the same way: with $p^m$ the greatest power of $p$ dividing $c$, comparing the exponents of $p$ in $b^2a$ and $c^2b$ forces $k=2m$, so the exponent of $p$ in $abc$ is $k+m=3m$.<|endoftext|>
-TITLE: How to simplify $F(k)=\sum\limits_{n=1}^k\sum\limits_{d|n}\gcd({d},{\frac{n}{d}})$?
-QUESTION [6 upvotes]: I have the following summation:
-$$F(k)=\sum\limits_{n=1}^k\sum\limits_{d|n}\gcd\left({d},{\frac{n}{d}}\right)$$
-This is nearly impossible to compute (using coding) for large numbers, due to the time it'd take.
-It's been suggested that the above summation can be simplified to this:
-$$F(k)=\sum_{d=1}^{d^2\leqslant k}\ \sum_{n=1}^{nd\leqslant k}\gcd({d},{n})$$
-I've tried testing the simplification, and it doesn't work. For instance, F(10) gives an output of 22 instead of 32.
-How do I simplify the first summation?
-Stuff here might be relevant, but I'm not sure: Wikipedia: Divisor function.
-EDIT: Algorithm for thefunkyjunky's suggestion:
-long k = 10;
-BigInteger result = BigInteger.ZERO; // accumulator (declaration added; it was implicit in the original snippet)
-
-for (long d = 1; d*d <= k; d++) {
-    for (long n = 1; n*d <= k; n++) {
-        // GCD(d, n) is the asker's gcd helper for longs
-        if (d*d <= n) result = result.add(BigInteger.valueOf(GCD(d, n)));
-    }
-}

-REPLY [2 votes]: I have found another way to calculate this, although I am not really answering the question of how to rearrange the original sum.
-I started by considering for myself the possible pairs and the $\gcd$s:
-n= 1: (1,1) 1 f( 1)=1 F( 1)= 1
-n= 2: (1,2) (2,1) 1 1 f( 2)=2 F( 2)= 3
-n= 3: (1,3) (3,1) 1 1 f( 3)=2 F( 3)= 5
-n= 4: (1,4) (2,2) (4,1) 1 2 1 f( 4)=4 F( 4)= 9
-n= 5: (1,5) (5,1) 1 1 f( 5)=2 F( 5)=11
-n= 6: (1,6) (2,3) (3,2) (6,1) 1 1 1 1 f( 6)=4 F( 6)=15
-n= 7: (1,7) (7,1) 1 1 f( 7)=2 F( 7)=17
-n= 8: (1,8) (2,4) (4,2) (8,1) 1 2 2 1 f( 8)=6 F( 8)=23
-n= 9: (1,9) (3,3) (9,1) 1 3 1 f( 9)=5 F( 9)=28
-n=10: (1,10) (2,5) (5,2) (10,1) 1 1 1 1 f(10)=4 F(10)=32

-I didn't find it obvious what was going on here, so I laid out the same results in a diagram (image not reproduced here).

-I noticed that the table can be viewed as a set of inverted "V"s.
-The entries for $(1,n)$ and $(n,1)$ make one such V, with all the $\gcd$s equal to 1.
-The entries for $(2,\frac n2)$ and $(\frac n2,2)$ make another V (marked in green in the diagram): but the entries start with 2 and then alternate 1, 2, 1, 2, ...
-It is important to note also that the $i$th V begins at $n=i^2$ with a single value $\gcd = i$, then continues on with $2(i-1)$ values of $\gcd=1$ and then 2 values of $\gcd=i$, with this pattern being repeated.
-By my method we have $F(n)=\sum_{i=1}^{\lfloor\sqrt n\rfloor}V(i,n)$
-where $V(i,n)$ is the sum of the values in the $i$th V and is found by this algorithm:
-Find positive integers $p$ and $q$ so that $n=pi^2+qi+r$.
-Then $V(i,n)=i + 2(p-1)(2i-1)+2q$
-I think this might be better because there is no need to calculate any $\gcd$.
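-For what it's worth, here is a direct way to compute $F(k)$ without factoring each $n$, based on writing $n=de$ so that $F(k)=\sum_{de\le k}\gcd(d,e)$ (an added sketch in Java, not the asker's original code; it assumes only java.math.BigInteger and a plain Euclidean gcd):
-import java.math.BigInteger;
-
-public class FSum {
-    // plain Euclidean algorithm for longs
-    static long gcd(long a, long b) {
-        while (b != 0) { long t = a % b; a = b; b = t; }
-        return a;
-    }
-
-    // F(k) = sum_{n<=k} sum_{d|n} gcd(d, n/d)
-    //      = sum of gcd(d, e) over all pairs (d, e) with d*e <= k
-    static BigInteger F(long k) {
-        BigInteger result = BigInteger.ZERO;
-        for (long d = 1; d <= k; d++) {
-            for (long e = 1; d * e <= k; e++) {
-                result = result.add(BigInteger.valueOf(gcd(d, e)));
-            }
-        }
-        return result;
-    }
-
-    public static void main(String[] args) {
-        System.out.println(F(10)); // prints 32, matching the table above
-    }
-}
-This needs roughly $k\log k$ gcd evaluations (the harmonic sum of $k/d$ over $d$), avoiding the divisor enumeration entirely.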
<|endoftext|>
-TITLE: If $Y$ is path-connected, then there is only one homotopy class of maps $[0,1] \to Y$
-QUESTION [5 upvotes]: I have this exercise:

-If Y is path-connected, show that there is only one homotopy-class of continuous functions from $[0,1]$ to Y.

-My attempt:
-What I need to show is that if I have two continuous functions $f_1,f_2: [0,1]\rightarrow Y$, they are homotopic. I must find an $F: I \times I\rightarrow Y$ such that $F$ is continuous, and $F(t,0)=f_1(t), F(t,1)=f_2(t)$.
-There is one way that seems very natural to construct $F$ here: for every $t$, since $Y$ is path-connected, there is a path from $f_1(t)$ to $f_2(t)$; this path can be written $f_t(s): [0,1]\rightarrow Y$, where $f_t(0)=f_1(t), f_t(1)=f_2(t)$.
-Then we just denote $F(t,s)=f_t(s)$.
-By construction this will be a homotopy (not a path-homotopy) between $f_1, f_2$ if we have that $F$ is continuous. But how do I show that it is continuous? Or is it even continuous? What I know about continuity is this:
-$F(t,0), F(t,1)$ are continuous in $t$. $F(t,s)$ is continuous in $s$ for all $t$. But still this is not enough; we need joint continuity. Any tips on how to show continuity?

-REPLY [7 votes]: As written, $F$ has absolutely no reason to be continuous. For example let $f_1, f_2 : [0,1] \to S^1$ ($S^1$ is the unit circle in $\mathbb{C}$) be given by $f_1(t) = 1$ and $f_2(t) = -1$, and for $s < 1/2$ I choose a path between $f_1(s)$ and $f_2(s)$ which goes through the upper half-circle and for $s \ge 1/2$ I choose a path which goes through the lower half-circle. The resulting $F$ will clearly not be continuous.
-The easiest way to go (as Arthur remarks in the comments) is to show that any map $f : [0,1] \to Y$ is homotopic to a constant map. Simply let $F(s,t) = f((1-s)t)$; then $F(0,t) = f(t)$ while $F(1,t) = f(0)$ is constant. Thus $f_1$ is homotopic to the constant map equal to $f_1(0)$, and $f_2$ is homotopic to the constant map equal to $f_2(0)$. (I didn't use the hypothesis that $Y$ is path-connected here.)
-Then since $Y$ is path-connected, there's a path $\gamma : [0,1] \to Y$ such that $\gamma(0) = f_1(0)$ and $\gamma(1) = f_2(0)$. Let $F(s,t) = \gamma(s)$: this defines a homotopy between the two constant maps $(t \mapsto f_1(0))$ and $(t \mapsto f_2(0))$. Since the relation "being homotopic" is an equivalence relation, you finally get a homotopy between $f_1$ and $f_2$:
-$$f_1 \sim (t \mapsto f_1(0)) \sim (t \mapsto f_2(0)) \sim f_2.$$<|endoftext|>
-TITLE: Xmas Maths 2015
-QUESTION [53 upvotes]: Simplify the expression below into a seasonal greeting using commonly-used symbols in commonly-used formulas in maths and physics. Colours are purely ornamental!
-$$ -\begin{align} -\frac{ -\color{green}{(x+iy)} -\color{red}{(y^3-x^3)} -\color{orange}{(v^2-u^2)} -\color{red}{(3V_{\text{sphere}})^{\frac 13}} -\color{orange}{E\cdot} -\color{green}{\text{KE}} -} -{ -\color{orange}{2^{\frac 23}} -\color{green}{c^2} -\color{red}{e^{i\theta}} -\color{orange}{v^2} -\color{green}{(x^2+xy+y^2)}} -\color{red}{\sum_{n=0}^{\infty}\frac 1{n!}} -\color{orange}{\bigg/} -\color{orange}{\left(\int_{-\infty}^\infty e^{-x^2} dx\right)^{\frac 23}} -\end{align}$$ -NB: Knowledge of the following would be helpful: -Basic Maths: - -Taylor series expansion -Normalizing factor for the integral of a normal distribution -Rectangular and polar forms for complex variables -Volume of a sphere - -Basic Physics: - -Kinematics formulae for motion under constant acceleration -Einstein's equation -One of the energy equations - -REPLY [67 votes]: $$ -\begin{align} -&\frac{ -\color{green}{(x+iy)} -\color{red}{(y^3-x^3)} -\color{orange}{(v^2-u^2)} -\color{red}{(3V_{\text{sphere}})^{\frac 13}} -\color{orange}{E\cdot} -\color{green}{\text{KE}} -} -{ -\color{orange}{2^{\frac 23}} -\color{green}{c^2} -\color{red}{e^{i\theta}} -\color{orange}{v^2} -\color{green}{(x^2+xy+y^2)}} -\color{red}{\sum_{n=0}^{\infty}\frac 1{n!}} -\color{orange}{\bigg/} -\color{orange}{\left(\int_{-\infty}^\infty e^{-x^2} dx\right)^{\frac 23}}\\ -&= -\frac{ -\color{green}{(x+iy)} -\color{red}{(y^3-x^3)} -\color{orange}{(v^2-u^2)} -\color{red}{(3V_{\text{sphere}})^{\frac 13}} -\color{orange}{E\cdot} -\color{green}{\text{KE}} -} -{ -\color{red}{e^{i\theta}} -\color{green}{(x^2+xy+y^2)} -\color{orange}{\cdot2^{\frac 23}} -\color{green}{c^2} -\color{orange}{v^2} -} -\color{red}{\sum_{n=0}^{\infty}\frac 1{n!}} -\color{orange}{\bigg/} -\color{orange}{\left(\sqrt{\pi}\right)^{\frac 23}}\\ -&= -\color{green}{\left(\frac{x+iy}{e^{i\theta}}\right)} -\color{red}{\left(\frac{y^3-x^3}{x^2+xy+y^2}\right)} -\color{orange}{(v^2-u^2)} -\color{red}{\left(\frac {(3V_\text{sphere})^\frac 13}{\left(2\sqrt{\pi}\right)^{\frac 23}}\right)} -\color{orange}{\left(\frac{E}{c^2}\right)} -\color{green}{\left(\frac{\text{KE}}{v^2}\right)} -\color{red}{\sum_{n=0}^{\infty}\frac 1{n!}} -\\ -&= -\color{green}{\left(\frac{re^{i\theta}}{e^{i\theta}}\right)} -\color{red}{\left(\frac{(y-x)(y^2+xy+x^2)}{x^2+xy+y^2}\right)} -\color{orange}{(v^2-u^2)} -\color{red}{\left(\frac {3\cdot \frac 43 \pi r^3}{4\pi}\right)^\frac 13} -\color{orange}{\left(\frac{mc^2}{c^2}\right)} -\color{green}{\left(\frac{\frac 12 mv^2}{v^2}\right)} -\color{red}{(e)} -\\ -&= -\color{green}{\left(r\right)} -\color{red}{\left(y-x\right)} -\color{orange}{(2as)} -\color{red}{\left(r^3\right)^\frac 13} -\color{orange}{\left(m\right)} -\color{green}{\left(\frac 12m\right)} -\color{red}{(e)} -\\ -&= -\color{green}{\left(r\right)} -\color{red}{\left(y-x\right)} -\color{orange}{(as)} -\color{red}{\left(r\right)} -\color{orange}{\left(m\right)} -\color{green}{\left( m\right)} -\color{red}{(e)} -\\ -&= -\color{orange}{\left(m\right)} -\color{red}{(e)} -\color{green}{\left(r\right)} -\color{red}{\left(r\right)} -\color{red}{\left(y-x\right)} -\color{green}{\left(m\right)} -\color{orange}{(as)} -\end{align}$$ -Merry Christmas, everyone!! - -The following links might be helpful. 
-- Complex numbers and polar coordinates
-- Difference of two cubes
-- Kinematics formulae for constant acceleration in a straight line
-- Volume of a sphere
-- Einstein's mass-energy equivalence
-- Kinetic energy
-- Taylor/Maclaurin series expansion of $e$
-- Gaussian integral (normalizing factor for the normal distribution)

-REPLY [6 votes]: Notice:

-$$\sum_{n=0}^{\infty}\frac{1}{n!}=\lim_{m\to\infty}\sum_{n=0}^{m}\frac{1}{n!}=\lim_{m\to\infty}\left(1+\frac{1}{m}\right)^m=e$$
-$$\int_{-\infty}^{\infty}e^{-x^2}\space\text{d}x=\lim_{a\to\infty}\int_{-a}^{a}e^{-x^2}\space\text{d}x=\lim_{a\to\infty}\left[\frac{\text{erf}(x)\sqrt{\pi}}{2}\right]_{-a}^{a}=\sqrt{\pi}$$
-$$\text{V}_{sphere}=\frac{4\pi r^3}{3}$$
-$$\text{E}=mc^2$$
-$$\text{KE}=\frac{mv^2}{2}$$<|endoftext|>
-TITLE: Calculate $\int_{-\infty}^{\infty}\;\left( \frac{x^2}{1+4x+3x^2-4x^3-2x^4+2x^5+x^6}\right) \;dx$
-QUESTION [22 upvotes]: Calculate $$\displaystyle \int_{-\infty}^{\infty}\;\left(
-\frac{x^{2}}{1+4x+3x^{2}-4x^{3}-2x^{4}+2x^{5}+x^{6}}\right) \;dx$$

-The answer given is $\pi$. How does one calculate this?

-REPLY [10 votes]: There is an inner structure that enables this integral to be evaluated into such a nice form.

-Let $$f(x) = 1+4x+3x^2-4x^3-2x^4+2x^5+x^6$$ The first miracle is: $f(x)$ factorizes nicely in $\mathbb{Q}[i]$:
-$$f(x) = \underbrace{\left(x^3+(1-i) x^2-(2+i) x-1\right)}_{g(x)} \underbrace{\left(x^3+(1+i) x^2-(2-i) x-1\right)}_{h(x)}$$

-The second miracle is: the roots of $g(x)$ all lie in the same half-plane. In this case, all roots of $g$ are in the upper half-plane. Denote them by $\alpha_1, \alpha_2, \alpha_3$; by contour integration $$I:=\int_{-\infty}^\infty \frac{x^2}{f(x)}dx = 2\pi i\left[ {\frac{{{\alpha _1}^2}}{{g'({\alpha _1})h({\alpha _1})}} + \frac{{{\alpha _2}^2}}{{g'({\alpha _2})h({\alpha _2})}} + \frac{{{\alpha _3}^2}}{{g'({\alpha _3})h({\alpha _3})}}} \right]$$
-Now the right hand side is symmetric in the $\alpha_i$, which are the roots of $g$. Since $g,h\in \mathbb{Q}[i][x]$, we have
-$$\frac{I}{\pi} \in \mathbb{Q}$$
-This explains the nice result of the integral. Note that the numerator $x^2$ can be replaced by any polynomial in $\mathbb{Q}[x]$ of degree at most $4$ (so that the integral still converges); $I/\pi$ is then still rational.

-Using a similar construction, we obtain the analogous integrals:

-Let $$f(x) = 4 + 8x - 11{x^2} - 18{x^3} + 13{x^4} + 8{x^5} + {x^6}$$ then $f$ satisfies the above two "miracles" so we have
-$$\int_{ - \infty }^\infty {\frac{1}{{f(x)}}dx} = \frac{{5\pi }}{6} \qquad \int_{ - \infty }^\infty {\frac{x}{{f(x)}}dx} = - \frac{\pi }{3} \qquad \int_{ - \infty }^\infty {\frac{{{x^2}}}{{f(x)}}dx} = \frac{\pi }{3}$$

-Another example with

-$$f(x) = 4 + 12x - 6{x^2} - 26{x^3} + 11{x^4} + 8{x^5} + {x^6}$$
-$$\int_{ - \infty }^\infty {\frac{1}{{f(x)}}dx} = \frac{{3\pi }}{4} \qquad \int_{ - \infty }^\infty {\frac{x}{{f(x)}}dx} = - \frac{\pi }{4} \qquad \int_{ - \infty }^\infty {\frac{{{x^2}}}{{f(x)}}dx} = \frac{\pi }{4}$$

-An octic example:

-$$f(x) = 13 + 12 x + 7 x^4 + 2 x^5 - 3 x^6 + x^8$$
-$$\int_{ - \infty }^\infty {\frac{1}{{f(x)}}dx} = \frac{{487\pi }}{4148} \qquad \int_{ - \infty }^\infty {\frac{x}{{f(x)}}dx} = - \frac{325\pi }{4148} \qquad \int_{ - \infty }^\infty {\frac{{{x^2}}}{{f(x)}}dx} = \frac{515\pi }{4148}$$<|endoftext|>
-TITLE: On a recursive sequence exercise.
-QUESTION [7 upvotes]: I have the following recursive sequence of which I want to prove the convergence:
-$$x_{n+1} = \frac{x_n +1}{x_n +2 }$$ and $x_1 = 0$
-I have proved that it is bounded above by $1$ and that it is increasing by taking the derivative, but I am told to do it without using derivatives.
-How could I show that it's bounded and increasing using only elementary methods?
-In particular I would like to prove that it is bounded by $\frac{\sqrt{5} - 1}{2}$ (this is the limit), but any additional ways to solve it are obviously very appreciated.

-REPLY [2 votes]: Functions of the form $f:\; x \mapsto \dfrac{ax+b}{cx+d}$ are fractional linear transformations, and their iterates can be computed in closed form:
-$f^n: x \mapsto \dfrac{a_n x + b_n}{c_n x + d_n}$ where the matrix
-$\pmatrix{a_n & b_n\cr c_n & d_n} = \pmatrix{a & b\cr c & d\cr}^n$.
-In your case $$\pmatrix{a & b\cr c & d\cr} = \pmatrix{1 & 1\cr 1 & 2\cr}$$
-and it's not hard to show that $$\pmatrix{a_n & b_n\cr c_n & d_n\cr} = \pmatrix{F_{2n-1} & F_{2n}\cr F_{2n} & F_{2n+1}}$$ where $F_n$ are the Fibonacci numbers. In particular, with $\phi = \dfrac{1+\sqrt{5}}{2}$ the "golden ratio", we have $F_n \sim \phi^n/\sqrt{5}$ as $n \to \infty$. With $x_1 = 0$ we get $$x_n = f^{n-1}(0) = \dfrac{F_{2n-2}}{F_{2n-1}} \to 1/\phi = \dfrac{-1+\sqrt{5}}{2}$$<|endoftext|>
-TITLE: Difference between collinear vectors and parallel vectors?
-QUESTION [10 upvotes]: I can't understand the difference between the two.
-The definitions are as written in the textbook:
-Parallel vectors are vectors which have the same or parallel supports. They can have equal or unequal magnitudes and their directions may be the same or opposite.
-Two vectors are collinear if they have the same direction or are parallel or anti-parallel. They can be expressed in the form a $= k$ b, where a and b are vectors and '$k$' is a scalar quantity.

-REPLY [7 votes]: $\newcommand{\Reals}{\mathbf{R}}$In some settings, a vector in $\Reals^{n}$ comprises both a "tail" or "location" $p$ in $\Reals^{n}$, and a "displacement" $v$ in $\Reals^{n}$. The ordered pair $(p, v)$ is usually depicted as an arrow from $p$ to $p + v$.
-If this is the setting of your question, the vectors $(p_{1}, v_{1})$ and $(p_{2}, v_{2})$ are:

-Parallel if $v_{1}$ and $v_{2}$ are proportional, i.e., if there exist scalars $k_{1}$ and $k_{2}$, not both zero, such that $k_{1} v_{1} + k_{2} v_{2} = \mathbf{0}$.
-Collinear if they are parallel and in addition each displacement is proportional to the displacement $p_{2} - p_{1}$ between the vectors' locations, i.e., the arrows representing the two vectors lie on a line in $\Reals^{n}$.

-In the diagram (not reproduced here), all the vectors are (mutually) parallel, but not all are collinear. The blue vectors, for example, are mutually collinear, all lying along the dashed line.<|endoftext|>
-TITLE: Prove the identity $\cosh(2x)=\cosh^2(x)+\sinh^2(x)$ using the Cauchy product.
-QUESTION [5 upvotes]: Prove the identity
-$$\cosh(2x)=\cosh^2(x)+\sinh^2(x)$$
-using the Cauchy product and the Taylor series expansions of $\cosh(x)$ and $\sinh(x)$. The relations involving the exponential function are not to be used.

- -REPLY [10 votes]: Given two power series
-$$f(x)=\sum_{n=0}^\infty{a_nx^n}, $$
-$$g(x)=\sum_{n=0}^\infty{b_nx^n}$$
-the Cauchy product is just their product
-$$f(x)g(x)=\sum_{n=0}^\infty{a_nx^n}\sum_{n=0}^\infty{b_nx^n}=\sum_{n=0}^\infty\left(\sum_{k=0}^n{a_kb_{n-k}}\right)x^n$$
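-As a quick sanity check of this formula (an added aside, not part of the original answer), take $a_n=b_n=1$ for all $n$:
-$$\frac{1}{1-x}\cdot\frac{1}{1-x}=\sum_{n=0}^\infty\left(\sum_{k=0}^n 1\right)x^n=\sum_{n=0}^\infty (n+1)\,x^n,$$
-which is indeed the series of $(1-x)^{-2}$ for $|x|<1$.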
So consider $f(x)=\cosh{x}=\sum_{n=0}^\infty{\frac{x^{2n}}{(2n)!}}, g(x)=\sinh{x}=\sum_{n=0}^\infty{\frac{x^{2n+1}}{(2n+1)!}}$. Now $f(x)^2=\cosh^2{x}$ is
-$$\sum_{n=0}^\infty{\frac{x^{2n}}{(2n)!}}\sum_{n=0}^\infty{\frac{x^{2n}}{(2n)!}}=\sum_{n=0}^\infty\sum_{k=0}^n\frac{1}{(2k)!}\frac{1}{(2n-2k)!}x^{2n}=\sum_{n=0}^\infty\sum_{k=0}^n\binom{2n}{2k}\frac{x^{2n}}{(2n)!}$$
-For $g(x)^2=\sinh^2{x},$ rewrite the sum as
-$$\sum_{n=0}^\infty{\frac{x^{2n+1}}{(2n+1)!}}=x\sum_{n=0}^\infty{\frac{x^{2n}}{(2n+1)!}}$$
-It is easier to take the $x$ out and put it back, as it will stay in line with the Cauchy product definition (it won't change the power of $x$ in the expansion). Thus
-$$x\sum_{n=0}^\infty{\frac{x^{2n}}{(2n+1)!}}x\sum_{n=0}^\infty{\frac{x^{2n}}{(2n+1)!}}=x^2\sum_{n=0}^\infty\sum_{k=0}^n\frac{1}{(2k+1)!}\frac{1}{(2n-2k+1)!}x^{2n}$$
-$$=\sum_{n=0}^\infty\sum_{k=0}^n\frac{1}{(2k+1)!}\frac{1}{(2n-2k+1)!}x^{2n+2}=\sum_{n=1}^\infty\sum_{k=0}^{n-1}\frac{1}{(2k+1)!}\frac{1}{(2n-2-2k+1)!}x^{2n}$$
-$$=\sum_{n=1}^\infty\sum_{k=0}^{n-1}\frac{1}{(2k+1)!}\frac{1}{(2n-2k-1)!}x^{2n}=\sum_{n=1}^\infty\sum_{k=0}^{n-1}\binom{2n}{2k+1}\frac{x^{2n}}{(2n)!}$$
-Now you can add the two sums together, and notice that you are getting a sum of binomial coefficients up to $2n$! So
-$$\sum_{k=0}^n\binom{2n}{2k}+\sum_{k=0}^{n-1}\binom{2n}{2k+1}=\sum_{k=0}^{2n}\binom{2n}{k}=2^{2n}$$
-So $$\cosh^2{x}+\sinh^2{x}=\sum_{n=0}^\infty{2^{2n}\frac{x^{2n}}{(2n)!}}=\sum_{n=0}^\infty{\frac{(2x)^{2n}}{(2n)!}}=\cosh{2x}$$<|endoftext|>
-TITLE: Maximum of exponentials divided by sum
-QUESTION [6 upvotes]: Let $X_1,\dots, X_n$ be i.i.d. exponential random variables. For large $n$, what is the probability distribution for the following?
-$$\frac{\max X_n}{\sum X_n}$$
-I believe that the cdf of the maximum of $n$ i.i.d. exponential random variables is the $n$th power of the cdf of one when $n$ is fixed.

-REPLY [3 votes]: Write the statement as $\dfrac{\frac{\max X_n}{n}}{\frac{\sum X_n}{n}}$. Say the $X_n$'s are Exp($\lambda$).
-The denominator $\frac{\sum X_n}{n}$ converges in probability (and distribution) to the mean of the $X_n$'s, $\frac{1}{\lambda}$, by the law of large numbers.
-Now, you need to find the CDF of $\frac{\max X_n}{n}$, which is $P(\frac{\max X_n}{n}\leq c) = P(\max X_n \leq c n) = (1-e^{-\lambda c n})^n$ for $c>0$, and $0$ otherwise (using the fact you noted). As $n \to \infty$, $P(\frac{\max X_n}{n}\leq c) \to 0$ if $c<0$ and $P(\frac{\max X_n}{n}\leq c) \to 1$ if $c>0$. Thus, $\frac{\max X_n}{n} \to 0$ in distribution (and thus in probability as well, since it converges to a constant).
-Thus, by Slutsky's theorem, the limit is the ratio of the limits, which is $0 / (1/\lambda) =0$ (i.e. a point mass at $0$).<|endoftext|>
-TITLE: How to see that the eigenfunctions form a basis for the function space?
-QUESTION [7 upvotes]: We have a Sturm-Liouville operator
-$$
-L=\frac{1}{w(x)}\left[\frac{d}{dx}\left(p(x)\frac{d}{dx}\right)+q(x)\right]
-$$
-and consider
-$$
-\frac{\partial c}{\partial t}=Lc,
-$$
-with homogeneous boundary conditions.
-If we are now searching for solutions, the technique is to start by considering the homogeneous equation, i.e. $q(x)=0$, and to solve the eigenvalue problem
-$$
-L\Phi=\lambda\Phi.
-$$ -There are functions $\Phi$ - called eigenfunctions - that solve this eigenvalue problem, they exist since $L$ is self-adjoint under homogeneous boundary conditions. The eigenfunctions are orthogonal. - -Moreover, the eigenfunctions $\Phi$ form a basis for the function space consisting of functions that satisfy the boundary conditions, meaning that any such function can be expressed as a linear combination of the eigenfunctions. So we can find solutions of the inhomogeneous equation by making the approach $u(x,t)=\sum_n A_n\Phi_n$, put this into the equation and determine the constants. - -My question is how one can show/ see that the eigenfunctions form a basis of the function space consisting of functions that satify the boundary conditions. -More precisely, I think, the function space for which the eigenfunctions form a basis is supposed to be the function space containing all functions that -(i) are quadrat-integrable with respect to the weight function $w$ and -(ii) satisfy the boundary conditions. -Do not know exactly if we really need (i). -Wikipedia says that the proper setting is the Hilbert space $L^2([a,b], w(x)dx)$ and that in this space, $L$ is defined on sufficiently smooth functions that satisfy the boundary conditions. -Anyhow: How to show/ see that the eigenfunctions form a basis? - -REPLY [3 votes]: There are other conditions you need in order to guarantee a discrete basis $\{ \Phi_1,\Phi_2,\Phi_3,\cdots \}$. A typical case leading to discrete eigenvalues and Fourier expansions would be where (a) $p$ is continuously differentiable and strictly positive on $[a,b]$, (b) $w$ is continuous and strictly positive on $[a,b]$, and (c) $q$ is absolutely integrable on $[a,b]$. When you impose homogeneous endpoint conditions of the form -$$ - \cos \alpha f(a)+\sin\beta f'(a) = 0, \\ - \cos \beta f(b) + \sin\beta f'(b) = 0, -$$ -for some real $\alpha,\beta$, then there is an infinite sequence of eigenvalues -$$ - \lambda_0 < \lambda_1 < \lambda_2 < \cdots < \lambda_n <\cdots, -$$ -that tends to $\infty$ for which non-trivial solutions of $Lf_j = -\lambda_j f_j$ exist which satisfy the homogeneous conditions; and the solution space is one-dimensional for each $j$. These eigenfunctions $\{ f_j \}$ are mutually orthogonally and, when properly normalized, the set $\{ f_j \}$ is a complete orthonormal basis of $L^2_w[a,b]$. -The convergence of the generalized Fourier series for $f \in L^2_w[a,b]$ converges in the norm of $L^2$ to $f$. You don't necessarily get such convergence if $f \in L^1_w[a,b]$ instead; the failure of converge in $L^1_w[a,b]$ occurs for the simplest classical Fourier case $q=0$, $w=1$, $p=1$. -A proof of these facts is not short. It can't be trivial, because the simplest case includes the classical Fourier series. -Reference: M. A. Al-Gwaiz, Sturm-Liouville Theory and its Applications, 2008. (Springer-Verlag Undergraduate Series.) Pay particular attention to Chapter 2.<|endoftext|> -TITLE: Support of a distribution, what does it mean? -QUESTION [12 upvotes]: In my course notes the support of a distribution (continous lineair functional) is defined as follows: -Definitions -First it defines something like open annihilation sets: - -An open annihilation set $\omega$ of a distribution $T$ is an open set where $\langle T, \phi\rangle = 0$ if the compact support of $\phi$ is a subset of $\omega$. - -Then - -The support of a distribution $T$ is the complement of the open union of all open annihilation sets of $T$. 
- -There are some examples provided: ($\mathcal{D}$ is the function space of $\mathscr{C}^\infty$ functions with compact support)

-Choose a $\phi \in \mathcal{D}$ such that $0\not \in [\phi]$. Then $\langle \delta , \phi \rangle = \phi(0) = 0$, which implies $[\delta]= \{0\}$.
-Let $Y$ be the Heaviside distribution. Choose $\phi\in \mathcal{D}$ such that $[\phi]\subseteq ]-\infty, 0[$; then
-$$\langle Y, \phi\rangle = \int_{-\infty}^{+\infty}Y(x)\phi(x)\operatorname d x = 0,$$
-which implies $[Y] = [0,+\infty[$

-What does it all mean?
-I find it hard to understand what the support of a distribution really means. For example, what does it mean for a distribution to have compact support?
-If an ordinary function has compact support I can visualize this as some sort of bump function. But how should I look at the support of a distribution?

-REPLY [18 votes]: I will try to start from the notion of support of a function and obtain the definitions above in a natural way.
-If $f : \mathbb{R}^n \to \mathbb{R}$ then its support is defined as $S = \overline{\{x \in \mathbb{R}^n : f(x) \neq 0\}}$. For the purpose of discussion it's easier to talk about $S^c$ instead of $S$, namely $S^c$ is the largest open set where $f = 0$.
-So far, so good, but distributions are not functions, so it doesn't make sense to say that the value of a distribution at a point is $0, -1, \pi$, etc. However, distributions are linear functionals, so it's not unreasonable to define that a distribution $T$ is zero on an open set $\omega$ if it "doesn't do anything there". In other words, for an arbitrary $\phi$ smooth and compactly supported in $\omega$, we have $\langle T, \phi \rangle = 0$. Thus, we have arrived at the definition of open annihilation set that you mentioned.
-Now, to define the support of $T$ we take the complement of the largest open set where $T$ vanishes: just like in the case of the support of a function $f$ (look at the discussion about $S$ and $S^c$ above).
-I hope this helps.
-Note: it's worth checking that if $T$ is induced by a (locally) integrable function $f$ in the standard way, then the support of $T$ will be the support of $f$; in other words the two definitions are consistent.<|endoftext|>
-TITLE: Expected waiting time for next train
-QUESTION [5 upvotes]: Let's say a train arrives at a stop in intervals of 15 or 45 minutes, each with equal probability 1/2 (so every time a train arrives, it will randomly be either 15 or 45 minutes until the next arrival). What is the expected waiting time of a passenger for the next train if this passenger arrives at the stop at any random time? This means that the passenger has no sense of time, does not know when the last train left, and could enter the station at any point within the interval between 2 consecutive trains.
-I was told 15 minutes was the wrong answer and my machine-simulated answer is 18.75 minutes. I just don't know the mathematical approach for this problem and of course the exact true answer. Sincerely hope you guys can help me. Thanks!

-REPLY [5 votes]: Your simulator is correct. Since 15 minute and 45 minute intervals are equally likely, you end up in a 15 minute interval 25% of the time and in a 45 minute interval 75% of the time.
-In a 15 minute interval, you have to wait $15 \cdot \frac12 = 7.5$ minutes on average.
-In a 45 minute interval, you have to wait $45 \cdot \frac12 = 22.5$ minutes on average.
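-To spell out where the $25\%/75\%$ split comes from (an added note): over a long stretch of time, half of the inter-arrival gaps are $15$-minute gaps and half are $45$-minute gaps, so the fraction of time covered by each type of gap is length-biased:
-$$P(\text{arrive during a 15-min gap})=\frac{15}{15+45}=\frac14,\qquad P(\text{arrive during a 45-min gap})=\frac{45}{15+45}=\frac34.$$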
-TITLE: Expected waiting time for next train
-QUESTION [5 upvotes]: Let's say a train arrives at a stop in intervals of 15 or 45 minutes, each with equal probability 1/2 (so every time a train arrives, it will randomly be either 15 or 45 minutes until the next arrival). What is the expected waiting time of a passenger for the next train if this passenger arrives at the stop at a random time? This means that the passenger has no sense of time, does not know when the last train left, and could enter the station at any point within the interval between 2 consecutive trains.
-I was told 15 minutes was the wrong answer and my machine-simulated answer is 18.75 minutes. I just don't know the mathematical approach for this problem and of course the exact true answer. Sincerely hope you guys can help me. Thanks!
-
-REPLY [5 votes]: Your simulator is correct. Although 15-minute and 45-minute intervals are equally likely, you end up in a 15-minute interval only 25% of the time and in a 45-minute interval 75% of the time, because the longer intervals cover three times as much of the timeline.
-In a 15 minute interval, you have to wait $15 \cdot \frac12 = 7.5$ minutes on average.
-In a 45 minute interval, you have to wait $45 \cdot \frac12 = 22.5$ minutes on average.
-This gives an expected waiting time of $$\frac14 \cdot 7.5 + \frac34 \cdot 22.5 = 18.75.$$
-
-REPLY [2 votes]: The exact definition of what it means for a train to arrive every $15$ or $45$ minutes with equal probability is a little unclear to me. However, here is an intuitive argument that I'm sure could be made exact, as long as this random arrival of the trains (and the passenger) is defined exactly.
-Think about it this way. Mark all the times where a train arrived on the real line. The marks are either $15$ or $45$ minutes apart. So the real line is divided into intervals of length $15$ and $45$. Because of the 50% chance of each gap length, intervals of the two lengths occur in roughly equal numbers. Now you arrive at some random point on the line. However, your chance of landing in an interval of length $15$ is not $\frac{1}{2}$; instead it is only $\frac{1}{4}$, because these intervals are smaller.
-So when computing the average wait we need to take this factor into account. The average wait for an interval of length $15$ is of course $7\frac{1}{2}$ and for an interval of length $45$ it is $22\frac{1}{2}$. And we can compute that
-$$\frac{1}{4}\cdot 7\frac{1}{2} + \frac{3}{4}\cdot 22\frac{1}{2} = 18\frac{3}{4}.$$
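-For completeness, here is a simulation sketch of the kind the asker mentions (my own code, not from the posts; the trial counts are arbitrary). It builds a long random schedule, drops passengers uniformly onto the timeline, and averages their waits:
-```python
-import bisect
-import random
-
-random.seed(1)
-
-# a long schedule of train times: successive gaps of 15 or 45 minutes,
-# each with probability 1/2
-arrivals = [0.0]
-for _ in range(200_000):
-    arrivals.append(arrivals[-1] + random.choice((15.0, 45.0)))
-
-# passengers arrive uniformly over the timeline and wait for the next train
-waits = []
-for _ in range(200_000):
-    t = random.random() * arrivals[-1]
-    waits.append(arrivals[bisect.bisect_right(arrivals, t)] - t)
-
-print(sum(waits) / len(waits))   # about 18.75
-```
-Sampling a uniform time on the line is exactly the length-biased sampling described in the answers: a given 45-minute gap is three times as likely to catch the passenger as a given 15-minute gap.<|endoftext|>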
-TITLE: Number of ways of forming 4 letter words using the letters of the word RAMANA
-QUESTION [9 upvotes]: Question: Find the number of ways of forming 4 letter words using the letters of the word "RAMANA"
-
-This can be solved easily by taking different cases.
-
-All 3 'A's taken: the remaining one letter can be chosen in ${}^3C_1$ ways. Total possibilities $={}^3C_1\cdot\frac{4!}{3!}=12$
-Only 2 'A's taken: the remaining two letters out of {R,M,N} can be chosen in ${}^3C_2$ ways. Total possibilities $= {}^3C_2\cdot\frac{4!}{2!}=36$
-Only one A: take all of R, M, N together with one A. Number of ways: $4!=24$
-
-Total $=72$.
-But my teacher solved it like this. He found the coefficient of $x^4$ in $4!\cdot(1+\frac{x}{1!})^3(1+\frac{x}{1!}+\frac{x^2}{2!}+\frac{x^3}{3!})$, which also came out to be $72$.
-Why does this work? Also, if I drop the factorials, I get the number of combinations. That is, number of combinations $=$ coefficient of $x^4$ in $(1+x)^3(1+x+x^2+x^3)$.
-
-REPLY [6 votes]: Suppose you have a $4$ letter string composed of, say, $1$ distinct and $3$ identical letters.
-There would be $\frac{4!}{1!\,3!}$ permutations, also expressible as a multinomial coefficient, $\binom{4}{1,3}$.
-Similarly, for $2$ distinct, $2$ identical, and $3$ distinct, $1$ identical,
-it would be $\binom{4}{2,2}\;$ and $\binom{4}{3,1}$ respectively.
-In the polynomial expression $4!(1+x/1!)^3(1+x+x^2/2!+x^3/3!)$,
-the $4!$ corresponds to the numerator, whatever the combination; the first term in $(\,)$ corresponds to choosing one or more from $R,M,N$; and the other term corresponds to choosing $1,2,$ or $3$ A's.
-It will become evident why this approach works if we expand the first term in $(\,)$ and compare term by term with your case approach, using the appropriate coefficient combinations to get terms in $x^4$:
-$4!(1 + 3x + 3x^2 + x^3)(1 + x + x^2/2! + x^3/3!)$
-To find the coefficient of $x^4$, consider the three cases that produce $x^4$:
-One from $R,M,N$, $3$ A's: $4!\cdot3\cdot\frac{1}{3!} = 12$
-Two from $R,M,N$, $2$ A's: $4!\cdot3\cdot\frac1{2!} = 36$
-Three from $R,M,N$, $1$ A: $4!\cdot1\cdot 1 = 24$
-Coefficient of $x^4 = 12+36+24 = 72$
-We can now clearly see why the coefficient of $x^4$ in the expression automatically gives all possible permutations of $4$ letters.
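-Both the case count and the generating-function count are easy to confirm mechanically. Here is a short check (my addition, not from the post; it assumes sympy is available):
-```python
-from itertools import permutations
-from sympy import symbols, factorial, expand
-
-# brute force: distinct 4-letter words from the multiset {R, A, A, A, M, N}
-words = {w for w in permutations("RAMANA", 4)}
-print(len(words))                                    # 72
-
-# the teacher's method: coefficient of x^4 in
-# 4! (1 + x)^3 (1 + x + x^2/2! + x^3/3!)
-x = symbols('x')
-gf = (1 + x)**3 * (1 + x + x**2/factorial(2) + x**3/factorial(3))
-print(factorial(4) * expand(gf).coeff(x, 4))         # 72
-```
-The set comprehension deduplicates words that differ only in which copy of 'A' was used, which is exactly the role of the $2!$ and $3!$ denominators in the exponential generating function.<|endoftext|>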
-TITLE: The proofs of the fundamental Theorem of Algebra
-QUESTION [17 upvotes]: There are many proofs of the fundamental theorem of algebra.
-Which are the most beautiful proofs?
-
-REPLY [3 votes]: Via Galois Theory:
-WLOG, let $p(x)\in\mathbb{R}[x]$ and let $F$ be the splitting field of $p$, embedded in some algebraic closure of $\mathbb{R}$. Since $F(i)$ is the splitting field of the square-free part of $p(x)(x^2+1)$, it is Galois over $\mathbb{R}$.
-Let $G$ be the Galois group of $F(i)$ over $\mathbb{R}$. If an odd prime divided $|G|$, take a Sylow $2$-subgroup $H$ of $G$; by the Galois correspondence its fixed field would be a subextension of odd degree $[G:H]>1$ over $\mathbb{R}$. This contradicts the fact that the intermediate value theorem gives every odd degree polynomial over $\mathbb{R}$ a root in $\mathbb{R}$ (so the minimal polynomial of a primitive element of that subextension would have to have degree $1$). Hence $G$ has order $2^n$ for some $n$.
-Let $G'$ be the Galois group of $F(i)$ over $\mathbb{C}$. Since $F(i)$ is also the splitting field over $\mathbb{C}$ of the square-free part of $p(x)$, it is Galois over $\mathbb{C}$. By the above, $|G'|=2^k$, where we wish to show $k=0$. Since a $2$-group has subgroups of every order dividing its order (a strengthening of Cauchy's theorem for $p$-groups), if $k>0$ then there exists a subgroup of $G'$ of index $2$, and therefore a field extension of $\mathbb{C}$ of degree $2$. But that contradicts the quadratic formula, which tells us that there are no degree $2$ irreducible polynomials over $\mathbb{C}$. Therefore $|G'|=1$.
-However, $|G| = [F(i):\mathbb{R}] = [F(i):\mathbb{C}]\,[\mathbb{C}:\mathbb{R}] = 2|G'| = 2$.
-It follows by the Fundamental Theorem of Galois Theory that $[F(i):\mathbb{R}]= 2$. Since $$2=[F(i):\mathbb{R}]=[F(i):F][F:\mathbb{R}]$$ we have that $[F:\mathbb{R}]=2$ precisely when $F=\mathbb{C}$ and that $[F(i):F]=2$ precisely when $F=\mathbb{R}$.
-However, $p$ was an arbitrary real polynomial, so every such splitting field is $\mathbb{R}$ or $\mathbb{C}$; equivalently, a finite-degree field extension of $\mathbb{R}$ is $\mathbb{C}$ precisely when it is nontrivial, and we are done.<|endoftext|>
-TITLE: Inscribing square in circle in just seven compass-and-straightedge steps
-QUESTION [79 upvotes]: Problem Here is one of the challenges posed on Euclidea, a mobile app for Euclidean constructions: Given a $\circ O$ centered on point $O$ with a point $A$ on it, inscribe $\square{ABCD}$ within the circle — in just seven elementary steps. Euclidea hints that the first two steps use the compass, the third uses the straightedge, and the last four use the straightedge to draw the sides themselves of $\square{ABCD}$.
-Definitions The problem is not considered solved until each line containing one of the four sides of the desired square is drawn; merely finding the vertices of the square does not suffice. Naming and creating points are of course allowed and fortunately do not count toward the seven elementary steps allowed in this problem. Other than these 'zero-step' steps, Euclidea permits only two elementary steps, each costing one step:
-
-Create an infinite line connecting two points using an unmarked, one-dimensional, infinitely long straightedge. (Even merely extending a given line segment costs one step.)
-Create a circle using a compass that collapses immediately thereafter.
-
-Research of previous Mathematics Stackexchange questions I am aware that there is a seven-step process previously described at How can I construct a square using a compass and straight edge in only 8 moves?. Notwithstanding the post's title, it actually has just seven steps, since its first corresponds to constructing the given $\circ O$. This solution fails here, however, since the resulting square is neither inscribed in the given $\circ O$ nor inclusive of the given point $A$ as a vertex.
-Attempt 1: 8-step solution using perpendicular bisectors
-
-Take one step to extend $\overline{AO}$ to the other side of $\circ O$.
-
-Take as point $C$ the new intersection between said line and circle.
-
-Take three steps to define $L$, the perpendicular bisector of diameter $\overline{AC}$.
-
-Take as points $B$ and $D$ the intersections of $L$ with $\circ O$.
-
-Take four steps to draw the sides themselves of $\square{ABCD}$.
-
-Attempt 2: 8-step solution using a 15-75-90 triangle It turns out that @Blue's successful 7-step solution uses much the same circles and 15-75-90 triangle as the one proposed here.
-
-Take one step to create $\circ A$ with radius $AO$.
-
-Take as point $E_1$ the 'left' resulting point of intersection.
-Take as point $E_2$ the 'right' resulting point of intersection.
-
-Take one step to create $\circ P$ with radius $E_1O$.
-Take one step to create $\circ Q$ with radius $E_1E_2$.
-
-Take as point $C$ the intersection point between circle $Q$ and circle $O$.
-Take as point $F$ the intersection point between circle $Q$ and circle $A$.
-
-Take one step to create $\overleftrightarrow{E_1F}$.
-
-Take as point $G_1$ the resulting 'top' intersection point with $\circ P$.
-Take as point $G_2$ the resulting 'bottom' intersection point with $\circ P$.
-
-Take one step to create $\overleftrightarrow{AG_1}$ to effectively draw $\overline{AB}$.
-
-Take as point $B$ the resulting intersection between said line and $\circ O$.
-
-Take one step to create $\overline{BC}$.
-Take one step to create $\overleftrightarrow{AG_2}$ to effectively draw $\overline{AD}$.
-
-Take as point $D$ the resulting intersection between said line and $\circ O$.
-
-Take one step to create $\overline{CD}$.
-
-This completes the desired $\square ABCD$, albeit in one too many steps.
-
-REPLY [21 votes]: For what it's worth, the following solution also constructs an inscribed square in seven elementary steps. Unfortunately, the original point is not on the square, so Euclidea does not recognize it, but it has a pleasing symmetry the other seven-step solution lacks.<|endoftext|>
-TITLE: A way to combine the numbers from 1 to 9 and get a number in which any two consecutive digits form a number divisible by 7 or 13
-QUESTION [5 upvotes]: Find a way to write the digits from 1 to 9 in sequence, in such a way that the number determined by any two consecutive digits is divisible by 7 or 13.
-That is, letting $a_{1}=1,\dots,a_{9}=9$, find a permutation $(i_1,\dots,i_9)$ of $(1,\dots,9)$ such that in
-$$a_{i_{1}}a_{i_{2}}\cdots a_{i_{9}}$$ the two-digit number $a_{i_j}a_{i_{j+1}}$ is divisible by 7 or 13 for $j=1,\dots,8$.
-(Here we mean $a_{i_{1}}a_{i_{2}}\cdots a_{i_{9}}=10^{8}a_{i_1}+10^{7}a_{i_2}+\cdots+10a_{i_8}+a_{i_9}$, and likewise $a_{i_j}a_{i_{j+1}}=10a_{i_j}+a_{i_{j+1}}$.)
-This problem is from the OBM (Brazilian Mathematics Olympiad).
-
-REPLY [3 votes]: Multiples of $13$ with two digits are $13, 26, 39, 52, 65, 78, 91$.
-Multiples of $7$ with two digits, without $0$ or repeated digits, are $14, 21, 28, 35, 42, 49, 56, 63, 84, 91$. The sequence has to start with $7$, because no number in either list ends with $7$. Hence
-$$
-784913526.
-$$
-Edit: beaten to it ..
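-To confirm the answer (and to see that it is in fact the unique one), a brute-force search over all $9!$ orderings takes a fraction of a second. This check is my addition, not part of the original answer:
-```python
-from itertools import permutations
-
-# search all orderings of the digits 1..9 for chains in which every
-# two-digit window is divisible by 7 or by 13
-for p in permutations("123456789"):
-    if all(int(p[i] + p[i + 1]) % 7 == 0 or int(p[i] + p[i + 1]) % 13 == 0
-           for i in range(8)):
-        print("".join(p))   # prints exactly one line: 784913526
-```
-Tracing the graph of allowed two-digit steps by hand, as the answer does, reaches the same conclusion: only $7$ has no incoming edge, so it must start the chain.<|endoftext|>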
-TITLE: Sum of factors of a huge number.
-QUESTION [10 upvotes]: I recently appeared in a math olympiad and it had this one question which had me stumped. This was a few weeks back and I have been looking for a way to find its answer ever since, but with no success. I searched the internet for the solution, but couldn't find anything on it either! Anyway, here's how the question goes:
-The value of $2^{96} - 3^{16}$ has two factors between 60 and 70. What is the sum of these two factors?
-BTW, I should add that I did use wolframalpha to actually find the answer, so I am more interested in knowing how to work it out manually than just knowing the answer. Any feedback would be appreciated.
-Thanks!
-
-REPLY [3 votes]: Modulo $61$:
-$$2^6=64=3,\quad 2^{12}=9,\quad 2^{24}=81=20,\quad 2^{48}=400=34,\quad \color{green}{2^{96}}=1156=\color{green}{58},$$
-$$3^2=9=2^{12},\quad \color{green}{3^{16}}=2^{96}=\color{green}{58}.$$
-Similarly, working modulo $67$ yields $25$ twice. Hence both $61$ and $67$ divide $2^{96}-3^{16}$, and the requested sum is $61+67=128$.
-
-Actually there is no need to perform the whole computation. Just observe
-$$2^{12}\equiv3^2 \pmod{61},\qquad 2^{12}\equiv3^2 \pmod{67},$$
-so that $2^{96}-3^{16}=(2^{12})^{8}-(3^{2})^{8}\equiv 0$ modulo both. Indeed $2^{12}-3^{2}=4096-9=4087=61\cdot67$, and $a-b$ always divides $a^{8}-b^{8}$.
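-A few lines of Python confirm both the factors and the modular shortcuts (my addition, not part of the answer):
-```python
-n = 2**96 - 3**16
-
-# the two factors strictly between 60 and 70, and their sum
-factors = [d for d in range(61, 70) if n % d == 0]
-print(factors, sum(factors))             # [61, 67] 128
-
-# the modular computations from the answer
-print(pow(2, 96, 61), pow(3, 16, 61))    # 58 58
-print(pow(2, 96, 67), pow(3, 16, 67))    # 25 25
-
-# the shortcut: 2^12 - 3^2 = 4087 = 61 * 67, and a - b divides a^8 - b^8
-print(2**12 - 3**2, 61 * 67)             # 4087 4087
-```
\ No newline at end of file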