diff --git "a/stack-exchange/math_stack_exchange/shard_108.txt" "b/stack-exchange/math_stack_exchange/shard_108.txt" deleted file mode 100644--- "a/stack-exchange/math_stack_exchange/shard_108.txt" +++ /dev/null @@ -1,6491 +0,0 @@ -TITLE: Show $p(x)$ is a primitive polynomial -QUESTION [5 upvotes]: First the definition: - -Polynomial $q(x) \in \mathbb{Z}_p[x]$ of degree $n$ is called primitive, iff: - -$q(x) \mid x^{p^n-1}-1$ -$\forall k : 1 \leq k \leq p^{n}-1$ : $q(x) \nmid x^k - 1$ - - -Now the polynomial from my exam, where I should show that it is primitive: - -$p(x)=x^6+x^5+x^2+x+1, \quad p(x) \in \mathbb{Z}_2$ - -So in this case $n=6$ and $p=2$, hence I need to check $1+(2^6-1)=64$ cases. How is this feasible to do on the exam, where I have just few minutes for each task without access to computer? Am I missing some "trick" to show that $p(x)$ is primitive? - -REPLY [3 votes]: Assuming $p$ is irreducible it suffices to show that the order of a root $\alpha$ is at least $22$ (since the order divides $63$). This is easy to check with the linear recursion (in $\mathbb{F}_2$) $$a_n=a_{n-1}+a_{n-4}+a_{n-5}+a_{n-6}$$ which has period equal to the order of $\alpha$. Starting from $00000\,1$ this sequence continues as $$00000\,11110\,01001\,01010\,01\ldots$$ and so its period is more than $21$.<|endoftext|> -TITLE: Can the real part of an entire function be bounded above by a polynomial? -QUESTION [8 upvotes]: Let $f:\mathbb{C}\to \mathbb{C}$ be an entire function such that $Re(f)\le |p(z)|$ for some polynomial, can we derive that $f(z)$ is a polynomial. -If $p(z)$ is constant, then this can be shown by considering $e^f$. If we instead consider $|u(z)|\le |p(z)|$, then it can also be shown. But if we do not establish the lowerbound, then I cannot figure out how to generlize the proof. - -REPLY [6 votes]: The condition is equivalent to $\operatorname{Re} f\le K|z|^m$ for some $m$ and $K$. -Under this condition we can conclude that $f(z)$ is a polynomial of order less than or equal to $m$. -Let $f(z)=u+iv=\sum_{k=0}^\infty a_kz^k$ and $A(r)=\max _{|z|=r} u(z)$. -It is well-known that for $k\ge 1$ -$$ -a_kr^k=\frac{1}{\pi}\int_0^{2\pi} u(re^{i\theta })e^{-ik\theta }d\theta . -$$ -This leads $$ -|a_k|r^k+2u(0)\le \frac{1}{\pi}\int_0^{2\pi} \left(|u(re^{i\theta })|+u(re^{i\theta })\right)d\theta \tag{1} -$$ -since $u(0)=\frac{1}{2\pi}\int_0^{2\pi} u(re^{i\theta })d\theta .$ -If $A(r)\le 0$, then $|u|+u=0$ and we have $|a_k|r^k+2u(0)\le 0$ from $(1)$. -If $A(r)>0$, then we have -$$ -|a_k|r^k+2u(0)\le 4A(r),$$ -since $|u|+u\le 2A(r)$. -In both cases we have $|a_k|r^k\le \max\{4A(r),\, 0\}-2u(0).$ -Now suppose that $\operatorname{Re} f \le K|z|^m$. Then we have -$$ -|a_n|\le 4Kr^{m-n}-\frac{2u(0)}{r^n}\to 0 \quad (r\to \infty) -$$ - for $n>m$.<|endoftext|> -TITLE: $f'(x) = g(f(x)) $ where $g: \mathbb{R} \rightarrow \mathbb{R}$ is smooth. Show $f$ is smooth. -QUESTION [7 upvotes]: Suppose $f: \mathbb{R} \rightarrow \mathbb{R} $ is differentiable and $g: \mathbb{R} \rightarrow \mathbb{R} $ is infinitely differentiable, i.e. $ g \in C^{\infty}(\mathbb{R})$, where we know $f'(x) = g(f(x)) $ on $\mathbb{R}$. Show that $ f \in C^{\infty}(\mathbb{R})$. - -Thus I have to show that $f$ is infinitely differentiable, that is, derivatives of all orders exist. I can assume by induction that all derivatives of order less than, say $n$, exist, and have to show that the $nth$ derivative exists for $f$. -I came up with this: -$f^{n}(x) = (g \circ f)^{n-1}(x)$. 
<|endoftext|>
-TITLE: Can the real part of an entire function be bounded above by a polynomial?
-QUESTION [8 upvotes]: Let $f:\mathbb{C}\to \mathbb{C}$ be an entire function such that $Re(f)\le |p(z)|$ for some polynomial $p$. Can we deduce that $f(z)$ is a polynomial?
-If $p(z)$ is constant, then this can be shown by considering $e^f$. If we instead consider $|u(z)|\le |p(z)|$, then it can also be shown. But if we do not establish the lower bound, then I cannot figure out how to generalize the proof.
-
-REPLY [6 votes]: The condition is equivalent to $\operatorname{Re} f\le K|z|^m$ for some $m$ and $K$.
-Under this condition we can conclude that $f(z)$ is a polynomial of degree less than or equal to $m$.
-Let $f(z)=u+iv=\sum_{k=0}^\infty a_kz^k$ and $A(r)=\max _{|z|=r} u(z)$.
-It is well-known that for $k\ge 1$
-$$a_kr^k=\frac{1}{\pi}\int_0^{2\pi} u(re^{i\theta })e^{-ik\theta }d\theta .$$
-This leads to
-$$|a_k|r^k+2u(0)\le \frac{1}{\pi}\int_0^{2\pi} \left(|u(re^{i\theta })|+u(re^{i\theta })\right)d\theta \tag{1}$$
-since $u(0)=\frac{1}{2\pi}\int_0^{2\pi} u(re^{i\theta })d\theta .$
-If $A(r)\le 0$, then $|u|+u=0$ and we have $|a_k|r^k+2u(0)\le 0$ from $(1)$.
-If $A(r)>0$, then we have
-$$|a_k|r^k+2u(0)\le 4A(r),$$
-since $|u|+u\le 2A(r)$.
-In both cases we have $|a_k|r^k\le \max\{4A(r),\, 0\}-2u(0).$
-Now suppose that $\operatorname{Re} f \le K|z|^m$. Then we have
-$$|a_n|\le 4Kr^{m-n}-\frac{2u(0)}{r^n}\to 0 \quad (r\to \infty)$$
-for $n>m$.
<|endoftext|>
-TITLE: $f'(x) = g(f(x)) $ where $g: \mathbb{R} \rightarrow \mathbb{R}$ is smooth. Show $f$ is smooth.
-QUESTION [7 upvotes]: Suppose $f: \mathbb{R} \rightarrow \mathbb{R} $ is differentiable and $g: \mathbb{R} \rightarrow \mathbb{R} $ is infinitely differentiable, i.e. $ g \in C^{\infty}(\mathbb{R})$, where we know $f'(x) = g(f(x)) $ on $\mathbb{R}$. Show that $ f \in C^{\infty}(\mathbb{R})$.
-
-Thus I have to show that $f$ is infinitely differentiable, that is, derivatives of all orders exist. I can assume by induction that all derivatives of order less than, say $n$, exist, and have to show that the $n$th derivative exists for $f$.
-I came up with this:
-$f^{(n)}(x) = (g \circ f)^{(n-1)}(x)$.
-I somehow have to show that the $(n-1)$th derivative for this composite function exists. I tried using the chain rule, but it just seems to become more ugly as I continue taking more derivatives. Obviously, I have to use the fact that $g$ is infinitely differentiable as well as the inductive assumption, although I'm not sure how to complete this line of reasoning. Maybe induction isn't even the right way to proceed.
-Ideas?
-
-REPLY [2 votes]: Note that this is a differential equation. The equivalent Picard integral equation is
-$$f(x)=f(0)+\int_0^x g(f(t))\,dt$$
-From here it is trivial to observe that if $f$ is $C^n$ or better, then the composition $g\circ f$ is also at least $C^n$, and thus the anti-derivative is $C^{n+1}$. This gives that $f$ is also $C^{n+1}$, and so on, starting from the observation that $f$, being differentiable, is at least $C^0$.
<|endoftext|>
-TITLE: How to understand conjugate points on a Riemannian manifold?
-QUESTION [7 upvotes]: I'm having trouble grasping what it means for two points to be conjugate on a Riemannian manifold. Could someone provide a geometric or intuitive explanation for this?
-For clarification: given a geodesic $\gamma: [0,a] \to M$, the point $\gamma(t)$ is conjugate to $p=\gamma(0)$ if there exists a Jacobi field $J$, not identically zero, along $\gamma$ such that $J(0)=J(t)=0$.
-
-REPLY [3 votes]: Let $p,q$ be two points, and $c$ a path between them. The energy functional is a nice function on the space $\Omega_p^q$ of paths from $p$ to $q$, and a geodesic $c$ is just a critical point of this functional. If you think of this $\Omega$ as a manifold, a Jacobi field exists iff the second derivative $E''$ of $E$ is degenerate, and this Jacobi field is an element of the kernel of $E''$. A good way to produce this is to consider a one-parameter family of geodesics $c_t$ such that at $t=0$ the geodesic is non-degenerate with Morse index $i$, and at $t=t_0$ it has Morse index $i+1$. In between, some geodesic must be degenerate. For instance, take a simple closed geodesic of positive index. Let $p$ be some point on $c$, and $c_t$ be the arc of $c$ between $p$ and a point at the distance $t$. If $t$ is small, the index is still $0$, and if $t=t_0$ is the length of $c$ the index is $1$, so somewhere in between there is a conjugate point to $p$.
<|endoftext|>
-TITLE: Scaling a matrix to make its eigenvalues fall within a certain interval
-QUESTION [7 upvotes]: Suppose I have a diagonalizable matrix $M$ which has all its eigenvalues between $a$ and $b$. Is it possible to scale $M$ to $M_S$ such that all the eigenvalues of $M_s$ lie in the interval $[-1,1]$?
-One method I came across:
-Scale such that
-$$M_s=\frac{M-\frac{b+a}{2}I}{(b-a)/2}.$$
-But this is not working. Does anyone know anything better?
-
-REPLY [2 votes]: So the added constraint to your problem is that eigenvalues $a$ and $b$ must be mapped to $-1$ and $1$.
-One possible solution is what follows.
-If $T$ is the diagonalizing matrix of $M$, then
-$M=TDT^{-1}$, where $D=\begin{bmatrix} a&0&0&\dots\\ 0&b&0&\dots\\ 0&0&c&\dots\\ \vdots&\vdots&\vdots&\ddots \end{bmatrix}$
-The rescaling transformation for a diagonal matrix $D$ that maps eigenvalues $a$ and $b$ to $-1$ and $1$ would be $R_D$ such that the following holds (let's call $D_1$ the output of such a rescaling):
-$D_1 = R_DD$, where $R_D = \begin{bmatrix} -1/a&0&0&\dots\\ 0&1/b&0&\dots\\ 0&0&1&\dots\\ \vdots&\vdots&\vdots&\ddots \end{bmatrix}$
-Now let's call $M_1$ the rescaling of $M$ such that eigenvalues $a$ and $b$ are mapped to $-1$ and $1$. It must be:
-$M_1 = TD_1T^{-1} = TR_DDT^{-1} = TR_DT^{-1}TDT^{-1} = R_MM$.
-So the rescaling transformation you are looking for is given by the multiplication on the left by the matrix:
-$R_M = TR_DT^{-1}$
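-A quick numerical sanity check of this construction (my sketch, using numpy; the eigenvalues $a=-3$, $b=5$, $c=2$ are arbitrary test values):

import numpy as np

rng = np.random.default_rng(0)
a, b, c = -3.0, 5.0, 2.0
D = np.diag([a, b, c])
T = rng.standard_normal((3, 3))          # generic, hence invertible
M = T @ D @ np.linalg.inv(T)

R_D = np.diag([-1.0 / a, 1.0 / b, 1.0])  # sends a -> -1, b -> 1, fixes c
R_M = T @ R_D @ np.linalg.inv(T)

print(np.sort(np.linalg.eigvals(R_M @ M).real))  # approx [-1, 1, 2]

-Note that $R_M M$ only pins the eigenvalues $a$ and $b$ to $\mp 1$; any remaining eigenvalue such as $c$ is left untouched, so this alone does not force the whole spectrum into $[-1,1]$.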
<|endoftext|>
-TITLE: Quaternion - Angle computation using accelerometer and gyroscope
-QUESTION [5 upvotes]: I have been using a 6dof LSM6DS0 IMU unit (with accelerometer and gyroscope) and am trying to calculate the angle of rotation around all three axes. I have tried many methods but am not getting the results as expected.
-Methods tried:
-(i) Complementary filter approach - I am able to get the angles using the formula provided in the link Angle computation method.
-But the problem is that the angles are not at all consistent and drift a lot. Moreover, when the IMU is rotated around one axis, the angles calculated over the other axes wobble too much.
-(ii) Quaternion based angle calculation: There were plenty of resources claiming the angles are calculated very well using the quaternion approach but none had a clear explanation. I have used this method in order to update the quaternion for every value taken from the IMU unit. But the link didn't explain how to calculate the angles from a quaternion.
-I have used the glm math library in order to convert the quaternion to Euler angles and also have tried the formula specified in the wiki link. With this method, since the pitch calculation uses asin, which returns only $-90$ to $90$ degrees, I am not able to rotate the object in 3D as shown in the link.
-Has anyone tried the quaternion to angle conversion before? I need to calculate the angles around all three axes in the range $0$ to $360$ degrees or $-180$ to $180$ degrees.
-Any help would be really appreciated. Thanks in advance.
-
-REPLY [2 votes]: The simplest way to obtain the relative orientation is to integrate the kinematic equations: Quaternion kinematics for the error-state KF (formula 107). All the explanations about quaternions are in the book.
-The gyroscope measures the angular velocity $\omega$, so you can evaluate the relative orientation by integrating (in real time) the kinematic equation $\dot{q}=\frac{1}{2}q\circ\omega$, where the normalized quaternion $q$ defines the orientation of the body frame relative to the initial frame. The disadvantage of this method is that the result diverges with time (because of integration errors and the limited precision of the gyroscope).
-There is a better approach that uses the other sensors.
-If you want to represent the relative orientation as a sequence of rotations around 3 axes you should learn a bit about the Euler angles. Actually the second angle $\beta$ should always be in the range $-\frac{\pi}{2}..\frac{\pi}{2}$ or in $0..\pi$.
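-As a concrete illustration of the integration step, here is a minimal Python sketch (mine, not taken from the cited book): it dead-reckons the orientation from gyro samples by integrating $\dot{q}=\frac{1}{2}q\circ\omega$, with $\omega$ treated as the pure quaternion $(0,\omega_x,\omega_y,\omega_z)$. Here dt and gyro_samples are placeholders for real sensor data, and the Euler-angle extraction (with its $\pm 90$ degree pitch limitation) would be a separate step.

import math

def qmul(p, q):
    # Hamilton product of quaternions given as (w, x, y, z) tuples
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def integrate(gyro_samples, dt):
    q = (1.0, 0.0, 0.0, 0.0)               # initial orientation
    for wx, wy, wz in gyro_samples:        # angular rates in rad/s, body frame
        dq = qmul(q, (0.0, wx, wy, wz))    # q * omega
        q = tuple(qi + 0.5 * dqi * dt for qi, dqi in zip(q, dq))
        n = math.sqrt(sum(qi * qi for qi in q))
        q = tuple(qi / n for qi in q)      # renormalize each step
    return q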
<|endoftext|>
-TITLE: How do you find $∠XPC$ + $∠XPB$ such that $PB+PC$ is maximum where $P$ is a point on $f(x) = (x-1)(x-3)(x-5)$?
-QUESTION [5 upvotes]: Problem:
-$f(x) = (x-1)(x-3)(x-5)$ intersects the x axis at $A(1,0)$, $B(3,0)$ and $C(5,0)$. A point $P(t,f(t))$ is selected on the curve such that $PB+PC$ is maximum and $t \in (3,5).$ Let $PX$ be tangent to $f(x)$ at $P$, then find $∠XPC$ + $∠XPB$.
-My attempt:
-Other than brute force (which I'm not sure will even work out in this case), all I can notice is that $PB + PC$ can be taken to be a constant $K$. Then we can say that $B$ and $C$ are the foci of an ellipse having $a=K/2$ and $P$ is a point on that ellipse. However, I don't know how this helps me either.
-Any help with my approach/an alternative approach to the question is appreciated.
-
-REPLY [5 votes]: Sketch of the solution:
-Consider the ellipse $\cal E$ with foci $B$ and $C$ which is tangent to the graph of $f$. Observe that the point of tangency is the point $P$. Indeed, each point $Q(s, f(s))$ lies inside the ellipse $\cal E$, which means that $QB+QC\le PB+PC$.
-The tangent $\ell$ to the graph of $f$ at point $P$ is tangent to the ellipse $\cal E$. But this means that $\ell$ is the bisector of the exterior angle $BPC$ (this is a well-known fact about ellipses). Using this fact, an easy angle calculation gives $\angle XPC + \angle XPB = 180^\circ$.
<|endoftext|>
-TITLE: $m_p=\{f\in \mathcal{O}_{V,p}| f(p)=0\}$, ideal of $p$ in the local ring. What is $m_p/m_p^2$?
-QUESTION [6 upvotes]: In Section 6.8 of Undergraduate Algebraic Geometry by Reid, the author proved the following Theorem:
-
-There is a natural isomorphism of vector spaces $(T_pV)^*\cong m_p/m_p^2$ where $^*$ denotes the dual of a vector space.
-
-Here $T_pV$ is the tangent space to $V$ at $p$, where $V$ is a variety in $\mathbb{A}^n$. $m_p$ is the ideal of $p$ in $k[V]$. For simplicity, $p$ is assumed to be $(0,\dots,0)$.
-$M_p$ is the ideal $(x_1,\dots,x_n)$ in $k[x_1,\dots,x_n]$. I understand the part showing
-$$M_p/M_p^2\cong (k^n)^*$$
-He then introduced a restriction map $(k^n)^*\rightarrow (T_pV)^*$ and states that
-$$m_p/m_p^2=M_p/(M_p^2+I(V))\cong (T_pV)^*$$
-this is where I got lost. I understand why the quotient $M_p/M_p^2$ is the vector space of linear functions since $M_p^2$ is the ideal generated by the second order functions. But I am not sure what $m_p/m_p^2$ is and why it is equal to $M_p/(M_p^2+I(V))$.
-Thank you for your help!
-
-REPLY [2 votes]: $m_p$ is the maximal (irrelevant) ideal of $K[X_1,\dots,X_n]/I(V)$, that is, the ideal generated by the images of $X_1,\dots,X_n$, hence $m_p=M_p/I(V)$. Then $m_p^2=(M_p^2+I(V))/I(V)$. Now it's clear why $$m_p/m_p^2=M_p/(M_p^2+I(V)).$$
<|endoftext|>
-TITLE: Is $\arctan2$ irrational?
-QUESTION [5 upvotes]: Is $\tan^{-1}2$ an irrational number or a rational number? How can one show that?
-Or, more generally, how can one show whether $\tan^{-1}3, \tan^{-1}4, \tan^{-1}5,\dots$ are irrational or rational?
-
-REPLY [6 votes]: Transform this into $\cos x = \frac{1}{\sqrt{5}}$. Now, you have an equation
-$$e^{ix}+e^{-ix}=\frac{2}{\sqrt{5}}$$
-or, with $z=e^{ix}$,
-$$z^2-\frac{2}{\sqrt{5}}z+1=0$$
-Now, the above is an algebraic equation, so $z$, the solution of this equation, must be algebraic. By the Lindemann–Weierstrass theorem, if $e^{ix}$ is algebraic, then $ix$ must be transcendental, except if $x=0$.
-Of course, $\tan^{-1}1 = \frac{\pi}{4}$ is also transcendental (and therefore irrational). If you want to prove that it's an irrational multiple of $\pi$, you have to proceed a bit differently.
-
-Consider an equation $z+z^{-1}=2a$, $|a|<1$, which has solutions
-$$z=a\pm i\sqrt{1-a^2}$$
-Now we require $z=e^{i\pi p/q}$. Raise this to the power of $q$:
-$$e^{i\pi p}=\pm 1=(a\pm i\sqrt{1-a^2})^q$$
-There must be such a $q$ so that the right hand side is an integer ($\pm 1$). If you are given $a$, then you can just check that particular case. In general, you are basically looking for roots of unity in terms of their cartesian components. For example, you can set $a=\sqrt{n}/2$ and check for which $n$ this has a solution for $q$.
<|endoftext|>
-TITLE: Set is Convex regardless of b
-QUESTION [5 upvotes]: Let the function $f$ be convex, $f :\Bbb R^n \rightarrow \Bbb R$ and let
-$$S = \{x : f(x) \le b\}$$
-The proposition states that the set $S$ is convex regardless of $b$. Can someone explain to me how this proposition holds for all $b$?
-
-REPLY [4 votes]: Fix any real number $b$, then study $S$ (with $b$ fixed).
-
-If it is empty, it is convex.
-Otherwise, if $x,y \in S$, and $\mu \in [0,1]$, then what about $\mu x + (1-\mu) y$ ? Just compute $f(\mu x + (1-\mu) y) \le \mu f(x) + (1-\mu)f(y)$ by convexity of $f$, and then because $x,y\in S$, $f(x),f(y) \leq b$ (by definition). So $\mu f(x) + (1-\mu)f(y) \leq \mu b + (1-\mu) b = b$.
-
-So regardless of the actual value of $b\in\mathbb{R}$, $S$ is convex (but $b$ is fixed).
<|endoftext|>
-TITLE: Roots of $x^p + x + [\alpha]_p \in \mathbb{F}_p[x]$
-QUESTION [6 upvotes]: Let $$g(x) = x^p + x + [\alpha]_p \in \mathbb{F}_p[x],$$ where $p$ is prime.
-
-For which $\alpha, p \in \mathbb{Z}$ does $g(x)$ have at least one root? And for which $\alpha, p \in \mathbb{Z}$ does $g(x)$ have exactly one root?
-
-Supposing that $\beta \in \mathbb{Z}$ is a root of $g(x)$, then, by Euler's Theorem (or Fermat's Little Theorem), I get that it is a root of $$h(x) = 2x + [\alpha]_p.$$ So we can say that for $p=2$ we have two roots (that is, $0,1$) if $\alpha \equiv 0 \mod 2$ and no root if $\alpha \equiv 1 \mod 2$. What can we conclude for the other cases?
-
-REPLY [2 votes]: Consider
-$$\gcd(x^p+x+\alpha,x^p-x)=\gcd(2x+\alpha,x^p-x),$$
-where $\gcd$ denotes the greatest common divisor in $\mathbb Z_p[x]$. Observe that, if the degree of the gcd is positive, then it is equal to the number of distinct roots of $g(x)$ in $\mathbb Z_p$.
-We have two cases:
-
-If $p\neq 2$ then $$\gcd(2x+\alpha,x^p-x)=x+2^{-1}\alpha,$$ and $-2^{-1}\alpha$ is the unique root in $\mathbb Z_p$, for every $\alpha$.
-If $p=2$, then you already have the solution.
<|endoftext|>
-TITLE: Does every positive integer appear in the digits of $2\cdot 0.1234567891011… $?
-QUESTION [6 upvotes]: Let $C = 0.1234567891011121314…$ be the Champernowne constant. My question is:
-
-Does the real number $2 \cdot C \simeq 0.24691357820222426283032343638404244464850525456586062646668707274...$ contain every positive integer in its digits?
-
-For instance, $2022$ appears here: $0.246913578\underbrace{2022}...$
-Obviously this is true for $C$, and also for $0.246810121416...$. These numbers are known to be normal.
-My question is to determine whether $2C$ is at least a disjunctive number (in base $10$).
-More generally, I would like to know if a non-zero multiple $n \cdot x \; (n \in \Bbb Z)$ of a disjunctive number is also disjunctive (true if $n=10^k,k\in \Bbb N$).
-I looked at some theorems about disjunctive/normal numbers (for instance, if $f$ is a non-constant polynomial with real coefficients which is always positive, then the "concatenation" of the integer parts $x=0.[f(1)][f(2)][f(3)]...$ is normal), but I wasn't able to conclude.
-Any comment would be helpful.
-
-REPLY [6 votes]: In general, if you want to show that an $m$ occurs in $nC$, then write $m/n$ in decimal to a large number of digits, then round up at the end. Then when that sequence of digits occurs in $C$ surrounded by "enough" zeros, $m$ occurs in the same location of $nC$.
-(When writing $m/n$, you'll have to keep the zeros that start the quotient in the case $n>m$.)
-For example, given $m=73$, $n=6$ then $m/n=12.16666\dots$ so we pick the digits $12167$. Then where $10121670$ occurs in $C$, $6073002$ occurs in $6C$.
-This is particularly easy for $n=2^k$ or $n=5^k$ because the decimals for $m/n$ terminate in these cases, so there is no question of "how many digits." I think, more generally, you are safe if you take more digits of $m/n$ after the decimal than there are digits of $n$, but I'm not 100% sure of that.
-You have to start with $1$, then add as many $0$ digits as $n$ has digits, then add your sequence of digits from $m/n$, then again as many $0$s at the end as $n$ has digits.
-So if $m=84,n=13$ then $m/n \approx 6.461538462$ so we'll take the digits $10064700$ and find them in $C$. Where these digits occur, we'll get $308411xx$ in the same location of $13C$.
-The only property of $C$ we are using is that every finite sequence of digits occurs in it.
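-The recipe is easy to test mechanically. Here is a tiny Python sketch (mine) on a smaller instance, so that a short prefix of $C$ suffices: take $m=3$, $n=2$, so $m/n=1.5$ and the marker block is $1\,|\,0\,|\,15\,|\,0 = 10150$; wherever it occurs in $C$, the digits of $2\cdot 10150 = 20300$ appear at the same place in $2C$ (up to a possible carry into the last digit).

digits = "".join(str(k) for k in range(1, 20000))  # prefix of C's digits
i = digits.find("10150")
doubled = str(2 * int(digits)).zfill(len(digits))  # digit string of 2*C
print(i, doubled[i:i+4])                           # prints ... '2030'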
<|endoftext|>
-TITLE: Non-Borel a.e limit of Borel functions
-QUESTION [5 upvotes]: As a homework assignment I'm supposed to prove or disprove that Borel measurability is closed under a.e. convergence. I think this is not true because the Borel $\sigma$-field is not complete. However, I'm not sure how to construct or describe a counterexample.
-The proof in the case of a complete domain $X$ goes by observing that measurability is preserved under a.e. equality and limits, so also by a.e. convergence. I just don't see how to get a counterexample to the claim out of this.
-
-REPLY [6 votes]: As the Borel $\sigma$-algebra is not complete, there is a non-measurable set $A\subseteq \mathbf R$ of (Lebesgue) outer measure zero. Let $f = \chi_A$ (the characteristic function of $A$) and $f_n = 0$. Then $f_n \to f$ almost everywhere (namely outside of $A$), the $f_n$ are Borel-measurable, but $f$ is not.
<|endoftext|>
-TITLE: Percentage greater than 2 standard deviations from the mean
-QUESTION [5 upvotes]: A question reads: "The weights of $910$ young deer tagged and weighed in a research study are normally distributed with a mean of $86$ pounds and a standard deviation of $2.5$ pounds."
-Approximately how many deer weigh more than $91$ pounds?
-Since $34.1\%$ fall between the mean and one standard deviation and $13.6\%$ fall between one and two standard deviations, $2.3\%$ will be greater than two standard deviations above the mean. $2.3\%$ of $910$ equals $20.93$.
-Possible multiple choice answers are $21$ or $23$. I picked $21$. The test guide says $23$. I assume this is a typo. Anyone think I went awry?
-
-REPLY [10 votes]: The answer key may be using the rougher guide ('empirical rule') that about $95\%$ of the area under a normal curve is within $2$ standard deviations of the mean. So about $2.5\%$ of the data is more than $2$ standard deviations above the mean. And $2.5\%$ of $910$ is $22.75$, close to their answer of $23$.
-However, your answer is more accurate.
<|endoftext|>
-TITLE: Which inequalities are there with stochastic integration?
-QUESTION [7 upvotes]: Which inequalities can I use with stochastic integration?
-For example, with the standard Lebesgue integral we have $$\left|\int_\Omega f(x) dx\right| \le M |\Omega|$$
-(where $M$ is the maximum of $|f|$ on $\Omega$, if it exists)
-Also, $$\left|\int_\Omega f(x) dx\right| \le \int_\Omega |f(x)| dx$$
-Plus the ever-useful Holder and Jensen inequalities.
-Which of these inequalities are still valid for $$\int_0^\infty H_s dM_s$$ where $M$ is a local martingale and $H \in L^2_{\text{loc}}(M)$? Or maybe there are other types of inequalities in this case?
-Can you provide a good reference to this topic, which I imagine is pretty well known? Feel free to impose as many additional restrictions as you want on $H$ and $M$ (also the upper limit of integration can very well be taken finite). A special case of interest is $M_t = W_t$, the Brownian motion.
-
-REPLY [14 votes]: Neither of them holds, in general, for stochastic integrals.
-The trouble already starts if you consider measures which need not be non-negative, i.e. signed measures. For a signed measure $\mu: (\Omega,\mathcal{A}) \to \mathbb{R}$ we cannot expect that the triangle inequality
-$$\left| \int f(x) \, \mu(dx) \right| \leq \int |f(x)| \, \mu(dx) \tag{1}$$ holds. To see this, just consider the case that $f$ is an elementary function, i.e. $f$ is of the form
-$$f(x) = \sum_{j=1}^n c_j 1_{A_j}(x).$$
-Then $(1)$ reads
-$$\left| \sum_{j=1}^n c_j \mu(A_j) \right| \leq \sum_{j=1}^n |c_j| \mu(A_j).$$
-Note the right-hand side does not even need to be non-negative (but the left-hand side is), so this doesn't make any sense. With the same reasoning, we find that
-$$\left| \int f(x) \, \mu(dx) \right| \leq \|f\|_{L^{\infty}} \mu(\Omega) \tag{2}$$
-does, in general, not hold true for signed measures. However, one can show that
-$$|\mu|(A) := \sup \left\{ \sum_{n \in \mathbb{N}} |\mu(A_n)|; A_n \in \mathcal{A} \, \text{disjoint}, \bigcup_{n \in \mathbb{N}} A_n \subseteq A \right\}$$
-defines a non-negative measure, the so-called total variation measure, and that
-$$\left| \int f \, d\mu \right| \leq \|f\|_{\infty} |\mu|(\Omega)$$
-and
-$$\left| \int f \, d\mu \right| \leq \int |f| \, d|\mu|.$$
-These two are the natural generalizations of $(2)$ and $(1)$, respectively, for signed measures.
-Since stochastic integrals are "randomized" signed measures, the situation becomes even more complicated. For example if $(M_t)_{t \geq 0}$ is a Brownian motion, then the stochastic integral
-$$\int_0^t H_s \, dM_s$$
-is not a pointwise integral and this means that we cannot simply use the above considerations for fixed $\omega$. Additionally, there is the trouble that the Brownian motion has infinite total variation, so, as far as I can see, there is no chance to get such inequalities for stochastic integrals with respect to Brownian motion.
-Very important inequalities for stochastic integrals (with respect to martingales) are e.g.
-
-Doob's inequality
-the Burkholder-Davis-Gundy inequality
-
-but they don't provide any pointwise estimates.
-The only exception I can think of are processes with bounded variation. In this case, we can define the stochastic integrals as Riemann-Stieltjes integrals and obtain similar estimates as for signed measures. This works in particular for processes with non-decreasing sample paths, e.g. subordinators.
<|endoftext|>
-TITLE: What are some modern books on Markov Chains with plenty of good exercises?
-QUESTION [15 upvotes]: I would like to know what books people currently like in Markov Chains (with a syllabus comprising discrete MCs, stationary distributions, etc.) that contain many good exercises. Any such book on Stochastic Processes will also suffice.
-I have recently come to notice that there are some new books (read: "non-classics") that are well written and have a large collection of really good exercises in Probability, for example, Gut's "Probability: A Graduate Course". I have found it to be absolutely remarkable for Analytic Probability. I have found that a large number of professors seem to like it very much, and why wouldn't they?
-As a student, I should be exposed to good books that have a very good selection of problems. I know the classics like Hoel-Port-Stone, Ross, Norris. What are some new books that have blown you away?
-Thing is, if we do not come in contact with newer authors and their books, we would be missing out a lot on how modern a subject has become, and how it can be presented. Potentially, some good authors are also good researchers, opening doors for further research under their guidance, probably.
-You may also refer to a very good collection of online notes/exercises by some professor at some university, if needed.
-
-REPLY [16 votes]: I believe the answer should depend on your background, your aspirations, and whether you want a theoretical or an applied reference.
-In my opinion, a very good book which covers basic measure theory and discusses various types of stochastic processes such as Markov, Levy and Brownian motion is: E. Cinlar, Probability and Stochastics, Springer, 2011. It also has exercises in almost every (no pun intended) section. I have found this book particularly helpful and comprehensive and this would be my #1 recommendation.
-My second recommendation is a more advanced text which would be suitable for either advanced university students or graduate students. This is: D.A. Levin, Y. Peres and E.L. Wilmer, Markov Chains and Mixing Times, 2009. Although the material this book presents is quite advanced, the presentation is rather comprehensible, accompanied by many examples. At the end of every section you can find exercises.
-I also very much like the lecture notes of Prof. Oliver Knill, Probability and stochastic processes with applications, Harvard Math. Dept., 2008. These notes are replete with nice examples and exercises. Chapter 3 is devoted to discrete time stochastic processes and only a small part of it focuses on Markovian processes, which are treated in a more general context and not as a standalone topic.
-A good resource for exercises is the book: D. Gusak, A. Kukush, A. Kulik, Y. Mishura and A. Pilipenko, Theory of stochastic processes with applications to financial mathematics and risk theory, Springer, 2010. In Chapter 10, "Markov chains: discrete and continuous time", they give 90 exercises and for lots of them they offer hints. In the whole book, they offer a very concise overview of the pertinent theory followed by a torrent of exercises. Markov chains aside, this book also presents some nice applications of stochastic processes in financial mathematics and features a nice introduction to risk processes.
-In case you are more interested in stochastic control, there is an old book from 1971 by H. Kushner which is considered a standard reference (I've seen it being cited in many papers). The citation is: Kushner, Introduction to stochastic control, Holt, Rinehart and Winston, 1971. It has many exercises and examples and the author focuses mainly on Markov models.
-Although you have explicitly asked for a book with lots of exercises, I cannot help but mention: O.L.V. Costa, M.D. Fragoso and R.P. Marques, Discrete-time Markov Jump Linear Systems, Springer, 2005. The book offers a rigorous treatment of discrete-time MJLS with lots of interesting and practically relevant results.
-Finally, if you are interested in algorithms for simulating or analysing Markov chains, I recommend: Haggstrom, O., Finite Markov Chains and Algorithmic Applications, London Mathematical Society, 2002. There you can find many applications of Markov chains and lots of exercises.
<|endoftext|>
-TITLE: Proving that one has solved chess by exhibiting the zeroes of polynomials over finite fields?
-QUESTION [10 upvotes]: My question is based on one of Scott Aaronson's blog posts, which states that a God-like being could convince the villagers, to any degree of confidence, that she has solved chess by answering a few questions about the zeroes of some polynomials over finite fields. I know it has to do with PSPACE = IP, but I am wondering how one would encode such games (such as chess) into polynomials over finite fields?
-
-REPLY [9 votes]: The answer to this question is nothing more or less than the proof of $IP=PSPACE$ --- see for example here or here. The proof works by taking an arbitrary TQBF formula $\phi$ (i.e., a propositional formula with universal and existential quantifiers, such as the formula that encodes "White has the win in chess"), and then constructing a multivariate polynomial $p:\mathbb{F}^n\rightarrow\mathbb{F}$ over a large finite field $\mathbb{F}$, such that
-$$\sum_{x_1,\ldots,x_n\in \{0,1\}} p(x_1,\ldots,x_n) \ne 0$$
-if and only if $\phi$ is true. (With some additional complication caused by the need for "degree reduction" to handle the universal and existential quantifiers---but let's not go into that.)
-You can then have a prover and verifier engage in an interaction, where in the $k$th round, the prover claims that, if you restrict the first $k-1$ variables $x_1,\ldots,x_{k-1}$ of $p$ to previously-agreed values $r_1,...,r_{k-1}$, and sum over the last $n-k$ variables $x_{k+1},\ldots,x_n$, then $p$ simplifies to some specific univariate polynomial $q_k(x_k)$. The verifier then challenges this claim by picking a uniformly random value $r_k$ for $x_k$, and the game continues. At each round, the verifier can check whether the $q_k$ that was claimed is consistent with what's claimed in the next round. Then, at the very end, the verifier can just evaluate $p(r_1,\ldots,r_n)$ for itself, and check whether it equals $q_n(r_n)$. By using the Fundamental Theorem of Algebra, you can prove that, if the original claim wasn't true (i.e., $\phi$ is false and $p$ sums to zero), then no matter what the prover does, with high probability at least one of the verifier's checks will uncover this. So we get a sound protocol---and since the TQBF problem is $PSPACE$-complete, it implies $PSPACE\subseteq IP$, and hence $IP=PSPACE$.
-If you want, you can also see the entire interaction between prover and verifier as a two-player, perfect-information game: in this case, a transformed version of the original game of chess. But the new game has the amazing property that, in order to play optimally, the only thing the verifier ever needs to do is pick random $\mathbb{F}$-elements $r_1,\ldots,r_n$! The prover, on the other hand, needs to solve a $PSPACE$-complete problem in order to calculate the univariate polynomials $q_1,\ldots,q_n$ that will cause the prover to "win" the interaction (i.e., cause the verifier to admit that, yes, $\phi$ is true after all).
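-To make the round structure concrete, here is a toy Python sketch (my illustration, not Aaronson's code) of the sumcheck interaction with an honest prover, for a small hand-picked polynomial over $\mathbb{F}_{101}$. A real protocol would transmit each $q_k$ as a low-degree polynomial and rely on the degree bound for soundness against cheating provers; here $q_k$ is simply evaluated on demand.

import itertools, random

P = 101                                        # toy field modulus
def p(x, y, z):                                # example polynomial
    return (x*y + 2*y*z + 3*x*z*z + z) % P

claim = sum(p(*v) for v in itertools.product((0, 1), repeat=3)) % P

def partial_sum(prefix, t, k):
    # Prover's q_k(t): fix x_1..x_{k-1} = prefix, x_k = t, sum out the rest.
    rest = 3 - k
    return sum(p(*(list(prefix) + [t] + list(v)))
               for v in itertools.product((0, 1), repeat=rest)) % P

rs, current = [], claim
for k in range(1, 4):
    q = lambda t, pre=tuple(rs), k=k: partial_sum(pre, t, k)
    assert (q(0) + q(1)) % P == current        # verifier's consistency check
    r = random.randrange(P)                    # verifier's random challenge
    current, rs = q(r), rs + [r]
assert p(*rs) % P == current                   # final spot check
print("sumcheck accepted, claim =", claim)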
-Again, for more details, read any of the proofs of $IP=PSPACE$ available on the web!
<|endoftext|>
-TITLE: Evaluation of $\lim_{n\rightarrow \infty}\sum_{k=1}^n\sin \left(\frac{n}{n^2+k^2}\right)$
-QUESTION [8 upvotes]: Evaluation of $\displaystyle \lim_{n\rightarrow \infty}\sin \left(\frac{n}{n^2+1}\right)+\sin \left(\frac{n}{n^2+2^2}\right)+\cdots+\sin \left(\frac{n}{n^2+n^2}\right)$
-
-$\bf{My\ Try::}$ We can write the sum as $$\lim_{n\rightarrow \infty}\sum^{n}_{r=1}\sin\left(\frac{n}{n^2+r^2}\right)$$
-Now how can I convert this into a Riemann sum? Help me.
-Thanks
-
-REPLY [3 votes]: For $x\gt0$, repeatedly integrating from $0$ to $x$ gives
-$$\cos(x)\le1\implies\sin(x)\le x\implies1-\cos(x)\le\frac{x^2}2\implies x-\sin(x)\le\frac{x^3}6$$
-Noting that both sides are odd, we get
-$$\left|x-\sin(x)\right|\le\frac{\left|x^3\right|}6$$
-Since $\frac{n}{n^2+k^2}\le\frac1n$,
-$$\begin{align}
\sum_{k=1}^n\sin\left(\frac{n}{n^2+k^2}\!\right)
&=\sum_{k=1}^n\frac{n}{n^2+k^2}-\sum_{k=1}^n\left[\frac{n}{n^2+k^2}-\sin\left(\frac{n}{n^2+k^2}\!\right)\right]\\
&=\sum_{k=1}^n\frac1{1+\left(\frac kn\right)^2}\frac1n-\sum_{k=1}^nO\!\left(\frac1{n^3}\right)\\
&\to\int_0^1\frac1{1+x^2}\,\mathrm{d}x-0\\[6pt]
&=\frac\pi4
\end{align}$$
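-A quick numeric check (mine) that the partial sums indeed approach $\frac\pi4\approx 0.785398$:

import math
for n in (10, 100, 1000, 10000):
    print(n, sum(math.sin(n / (n*n + k*k)) for k in range(1, n + 1)))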
<|endoftext|>
-TITLE: Given $k$, are there infinitely many $n$ so that $w(n) = w(n+k)$?
-QUESTION [6 upvotes]: $w(n)$ denotes the number of distinct prime factors of $n$. I am wondering if any such result is known.
-
-REPLY [2 votes]: Yes, this was proven in a stronger form by Goldston, Graham, Pintz and Yıldırım (2011). Thanks to Gerry Myerson's answer here for the reference:
-Daniel A. Goldston, Sidney W. Graham, Janos Pintz and Cem Y. Yıldırım, Small Gaps Between Almost Primes, the Parity Problem, and Some Conjectures of Erdős on Consecutive Integers, Int. Math. Res. Not., Volume 2011, Issue 7, pp. 1439-1450, possibly available at http://imrn.oxfordjournals.org/content/2011/7/1439.short.
-The preprint appears to be here: http://arxiv.org/abs/0803.2636.
-In particular, if one combines Theorems 9 and 12, then we have that for any $k \in \mathbb N$ and any prescribed $A\ge 6$, there exist infinitely many $n$ such that
-$$\omega(n) = \omega(n+k) = A.$$
-They also have similar results for most other "divisor-counting" arithmetic functions such as $\Omega(n)$ and $d(n)$.
<|endoftext|>
-TITLE: Decomposition of an algebraic number into a sum or product of algebraic numbers with smaller degree
-QUESTION [17 upvotes]: An algebraic number can be identified by its minimal polynomial together with isolating intervals with rational bounds for its real and imaginary parts. The degree of an algebraic number is the degree of its minimal polynomial. There are known algorithms that allow one to easily compute sums and products of algebraic numbers in this representation, raise them to a rational power, extract real and imaginary parts, compare them, or evaluate them numerically to arbitrary precision.
-Is there an efficient algorithm that, given an algebraic number $\alpha$ in this representation, can decide if $\alpha$ can be represented as a sum or product of algebraic numbers of smaller degree?
-REPLY [6 votes]: As in https://mathoverflow.net/a/26859/10423, if your number $x$ ($p(x)=0$) is a sum of two algebraic numbers $y, z$, and $[\mathbb{Q}(y): \mathbb{Q}]$ and $[\mathbb{Q}(z): \mathbb{Q}]$ are relatively prime, we would have
-$$\mathbb{Q}(x) = \mathbb{Q}(y, z).$$
-To use this, we might look for subfields $\mathbb{Q}(y) \subset \mathbb{Q}(x)$ generated by particularly simple elements $y$ ($q(y)=0$). Then $x$ would satisfy a polynomial equation $r(x)=0$ of degree $\deg(x)/\deg(y)$ whose coefficients are themselves polynomials in $y$.
-This can be done in sage. Define the number field generated by your number in the comment:
t = var('t')
K.<x> = NumberField(t^6-12*t^5+54*t^4-116*t^3+132*t^2-120*t+92)

-then loop through all subfields $\mathbb{Q}(y)$ generated by sage's K.optimized_subfields:
xValue = K.polynomial().real_roots()[0]
abstol, reltol = 1e-8, 1e-8
for K0 in K.optimized_subfields(0, name="y"):
    print "K0.<%s>:" % K0[0].gen(), K0[0].polynomial()
    L.<y, z> = K.relativize(K0[1])   # y: relative generator, z: subfield generator
    Lp.<w> = K0[0]["w"]
    # print "L: ", L
    Lrp = L.relative_polynomial()
    print "Lrp(x):", Lrp
    print "Lrp(w+z):", Lrp(w+z)
    if all([c.is_integer() for c in Lrp(w+z).coefficients()]):
        print "+++FOUND IT+++"
        print "Weight:", sum(c.global_height() for c in L.relative_polynomial().coefficients())
        # print "Heights:", (x.global_height(), y.global_height(), z.global_height())
        for emb in L.embeddings(ComplexField(200)):
            if abs(emb(y) - xValue) > abstol + reltol * abs(xValue): continue
            print "Embedding: %s: %s" % (z, emb(z))

-This code considers, in turn, each subfield generated by $y$, and prints out the polynomials $q(y)\in\mathbb{Q}[y]$ and the polynomial $r(x)\in\mathbb{Q}(y)[x]$, checking whether $r(x)$ can be written as a polynomial in $x-y=z$ with integer coefficients.
-There are 12 subfields; snipping part of the output, some of the $y$'s are quite simple:
K0.<y3>: t^2 - 2
Lrp(x): x^3 + (-3*y3 - 6)*x^2 + (12*y3 + 18)*x - 14*y3 - 22
Lrp(w+z): w^3 - 6*w^2 + 12*w - 10
+++FOUND IT+++
Weight: 5.49783963670066
Embedding: y3: -1.4142135623730950488016887242096980785696718753769480731767

-$$ x = -\sqrt{2} + \mathrm{Root}_w(w^3-6w^2+12w-10) $$
K0.<y12>: t^6 - 2
Lrp(x): x - y12^3 - y12^2 - 2
Lrp(w+z): w - y12^3 - y12^2 + y12 - 2
Weight: 0.753631429508173
Embedding: y12: -1.1224620483093729814335330496791795162324111106139867534404

-$$ x = -2-y^2-y^3, \qquad y = -2^{1/6} $$
-One might also check for all simple linear expressions $x = z + \lambda y$ by expanding $r(z+\lambda y)$ and checking whether all coefficients of $x^{\geq0}y^{>0}$ can be set to $0$ by a particular choice of $\lambda\in\mathbb{Z}$.
-In my testing, I used the root near $-72.5006$ of
333030430968457063019646779392 - 23571307899875281888190922752 * x + 11325756868205077014516072448 * x^2 + 2880637967760945947804168192 * x^3 + 782990884159596744735457280 * x^4 + 40070228472035777844367360 * x^5 + 10282486223601703758015488 * x^6 + 192715657601424647782400 * x^7 + 21281445409747778775040 * x^8 - 2726796545369319832704 * x^9 - 83259682551880061952 * x^10 + 4445241731011609472 * x^11 + 1435507363311897496 * x^12 + 355547636535862912 * x^13 + 37061676308129376 * x^14 + 2332115507866947 * x^15 + 238642514161488 * x^16 + 13466590646101 * x^17 + 1032053823392 * x^18 + 55985523438 * x^19 + 2712768664 * x^20 + 109992986 * x^21 + 2837824 * x^22 + 41695 * x^23 + 320 * x^24 + x^25

-which is
-$$ 7 \mathrm{Root}_{x\approx-1.214}(8 + 3 x^3 + x^5) + 8 \mathrm{Root}_{x\approx-8.000}(2 + 8 x^4 + x^5). $$
-So it works for $\deg(x)=25$, but in general it probably takes exponential time or worse, not polynomial. On the plus (?) side it generalizes the problem from looking for sums (not products, though) to looking for polynomial relations with algebraic coefficients.
<|endoftext|>
-TITLE: The magic of the morphisms
-QUESTION [6 upvotes]: Given a set $X$. Let $S\subseteq X$ and consider $(X,S)$ as a very simple mathematical structure; let's call it a spotted set in analogy with pointed sets. Given two spotted sets, then a morphism $\alpha :(X,S)\longrightarrow(X^\prime,S^\prime)$ reasonably is a function
-$\alpha :X\longrightarrow X^\prime$ such that $x\in S\Rightarrow \alpha(x)\in S^\prime$.
-In topology there is a spotted set $\tau\subseteq \mathcal{P}(X)$. Then morphisms are functions $\mathcal{P}(X)\overset{\alpha}{\longrightarrow}\mathcal{P}(X^\prime)$ such that
-$\mathcal{O}\in\tau \Rightarrow \alpha(\mathcal{O})\in \tau^\prime$. If there is a function
-$f:X^\prime\longrightarrow X$ such that $\alpha = \mathcal{Q}(f)$, where $\mathcal{Q}$ is the contra-variant power set functor, this corresponds to Top and $f$ is continuous with respect to the topologies.
-
-There are corresponding coincidences for several other structures, where the formulas of the morphisms can be derived, and my question is whether there is an explanation for this correspondence.
-
-Examples:
-Group-like structures such as magmas and categories are characterized by relations
-$R\subseteq (X\times X)\times X$ and can obviously be expressed as spotted sets. Morphisms are functions
-$\alpha:(X\times X)\times X\longrightarrow(X^\prime\times X^\prime)\times X^\prime$ such that
-$((x,y),z)\in R \Rightarrow \alpha((x,y),z)\in R^\prime$.
-Functions
-$\alpha_1,\alpha_2,\alpha_3:X\longrightarrow X^\prime$ exist such that
-$\alpha((x,y),z)=((\alpha_1(x),\alpha_2(y)),\alpha_3(z))$, and if $\alpha$ is such that $\alpha_1=\alpha_2=\alpha_3$, then $\alpha_1$ corresponds to group homomorphisms etc.
-Action-like structures $R\subseteq (A\times X)\times X$. Here morphisms are functions
-$(A\times X)\times X\overset{\alpha}{\longrightarrow}(A\times X^\prime)\times X^\prime$ such that $((a,x),y)\in R \Rightarrow \alpha((a,x),y)\in R^\prime$. There exist functions
-$\alpha_0,\alpha_1,\alpha_2$ such that
-$\alpha((a,x),y)=((\alpha_0(a),\alpha_1(x)),\alpha_2(y))$. If $\alpha_0=1_A$ and $\alpha_1=\alpha_2$ this corresponds to morphisms of actions.
-Uniform spaces with a set of entourages $\phi\subseteq\mathcal{P}(X\times X)$. Morphisms are functions
-$\mathcal{P}(X\times X)\overset{\alpha}{\longrightarrow}\mathcal{P}(X^\prime\times X^\prime)$ such that
-$\mathcal{U}\in\phi \Rightarrow \alpha(\mathcal{U})\in \phi^\prime$. The condition on the morphisms of spotted sets to correspond to a uniformly continuous function is similar to the above.
-Multigraphs. A function $\varepsilon \subseteq E\times V^2$.
-Undirected graphs. $E\subseteq\mathcal{P}(X)$, $e\in E\Rightarrow \alpha(e)\in E^\prime$, where $\alpha$ is a function $\mathcal{P}(X)\rightarrow\mathcal{P}(X^\prime)$.
-
-It might be a good idea to point out that the formula for a morphism only depends on some outer structures. For example in magmas all formulas are the same and don't depend on "inner" conditions such as associativity or inverses, with the exception of certain selected elements such as a unit element.
-If $\tau\subseteq \mathcal{P}(X)$ isn't a topology but some other structure, the same formula would still be valid:
-$\alpha$ would be a morphism if there was a function $f:X^\prime\to X$ such that
-$\alpha = \mathcal{Q}(f)$ and $\mathcal{O}\in\tau\implies f^{-1}(\mathcal{O})\in \tau^\prime$.
-
-There is an obvious analogy with the Hom functor where magmas and universal algebras correspond to the covariant case, morphisms $\mathbb N\to X$, and the topological case corresponds to the contravariant case with morphisms $X\to\mathbb N $.
-
-REPLY [4 votes]: If I understand correctly you call it a coincidence that many concrete categories of structures can be encoded as (sub-)categories of spotted sets (which by the way I would have called pairs of sets, because they are very similar to pairs of spaces as studied in the framework of algebraic topology).
-This does not come as a surprise to me and it is a consequence of the Bourbakian belief that any mathematical structure can be encoded as a (family of) set(s) with operations and relations defined on them.
-Since every operation/function is traditionally encoded in set theories (such as ZFC) as a relation satisfying certain conditions, one could refine the above-mentioned belief by saying that every structure is a (family of) set(s) with relations defined on them.
-If you regard structures as sets with relations (that is, subsets of cartesian products, that is, spotted sets) it becomes clear why all the structures can be embedded in the category of spotted sets.
-Hope this answers your question.
-
-Addendum: the following is just some additional material which could be skipped but I think it may be interesting because it is somehow related to the subject considered (at least in my humble opinion).
-You could also encode relations as operations (that is, functions on sets): you could see any relation $R \subseteq A_1 \times A_2$ as a pair of functions
-$$(\pi_i \colon A \to A_i)_{i=1,2}$$
-satisfying a condition, namely that the pair $(\pi_1,\pi_2)$ is jointly monic.
-In this way you can encode any relational structure as some sort of algebraic structure (over the family of sets $A_1,A_2$) and homomorphisms between such algebras correspond exactly to structure-preserving morphisms.
-This kind of algebraic construction is quite common, for instance it is used to model many kinds of dynamical systems such as finite state automata (and other kinds of transition systems) as coalgebras in opportune categories.
-Basically the first kind of encoding (the one that uses relations) can be seen as the process of internalization of mathematical structures in the language of set theory while the second one (the one that uses operations) can be seen as the process of internalization of mathematical structures in the language of category theory.
<|endoftext|>
-TITLE: Does a map between topologies determine a map between sets?
-QUESTION [10 upvotes]: Let $(X,\mathcal{A})$ and $(Y,\mathcal{B})$ be Hausdorff spaces. Consider a function
-$$\phi:\mathcal{B}\rightarrow \mathcal{A}$$
-which preserves inclusion, arbitrary unions, finite intersections, and satisfies $\phi(\emptyset)=\emptyset$, $\phi(Y)=X$.
-Does there exist $f: X\rightarrow Y$ such that $\phi= f^{-1}$ ?
-I know that if such an $f$ exists it is uniquely determined by $\displaystyle f^{-1}(y)=\bigcap_{O\in \mathcal{B},y\in O} \phi(O)$. I also know this gives an effective definition for $f$ satisfying $f^{-1}=\phi$ if
-$$\bigcup_{y\in O}\left(\bigcap_{O'\in \mathcal{B},y\in O'}\phi(O')\right)=\phi(O)$$
-for all open sets $O\subset Y$. But I don't know if this is necessarily the case.
-
-REPLY [2 votes]: I'm going to answer my own question, and I'm madly delighted to say the answer is yes, there always is such an $f$.
-Note that for all $x\in X$ the set $\displaystyle N(x)=\bigcup_{O\in \mathcal{B}, x\notin \phi(O)}O$ has the form $Y\setminus\{y\}$. Indeed
-
-Suppose that $N(x)$ is all of $Y$. Then we would have $X=\phi(Y)=\phi(N(x))=\bigcup_{O\in \mathcal{B}, x\notin \phi(O)}\phi(O)$ and $x\notin X$, which is absurd.
-Suppose there were distinct $y_{1},y_{2}$ not in $N(x)$. Then there are two disjoint open sets $O_{1},O_{2}$ containing $y_{1}$ and $y_{2}$ respectively. The sets $\phi(O_{1})$ and $\phi(O_{2})$ are disjoint so they cannot both contain $x$. Without loss of generality $x\notin \phi(O_{1})$, so that $O_{1}\subset N(x)$ and $y_{1}\in N(x)$, which is again absurd.
-
-Define $f$ by letting $f(x)$ be the only element of $Y\setminus N(x)$. We have
-\begin{align*}
&x\in f^{-1}(U) \\
\iff &U \not\subset N(x) \\
\iff &x\in\phi(U)
\end{align*}
-hence $f$ has the desired property.
-I don't know what interpretations there might be of this in terms of pointless topology, or topos theory (I suggest this because it seems to me the proof has a propositional logical flavour).
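-For what it's worth, the reconstruction $f(x) = $ "the unique point of $Y\setminus N(x)$" is easy to sanity-check on finite discrete (hence Hausdorff) spaces. A small Python sketch of mine, with $\phi = f^{-1}$ built from a chosen $f$ and then recovered:

from itertools import chain, combinations

X, Y = {0, 1, 2}, {"a", "b"}
f = {0: "a", 1: "b", 2: "a"}

def opens(S):                       # discrete topology: all subsets
    s = list(S)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

phi = {O: frozenset(x for x in X if f[x] in O) for O in opens(Y)}

for x in X:
    N = set().union(*(O for O in opens(Y) if x not in phi[O]))
    (y,) = Y - N                    # N(x) misses exactly one point of Y
    assert y == f[x]
print("reconstructed f agrees with the original f")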
<|endoftext|>
-TITLE: How to do calculus with split-complex (hyperbolic) numbers?
-QUESTION [5 upvotes]: TL;DR: how do I define a "split-holomorphic" function?
-As far as I've heard, there is a notion of split-complex numbers: $z = x+uy$, with $u\not\in \Bbb R $ and $u^2 = 1$. One defines the conjugate as $\overline{z} = x - uy$ and the modulus as $|z| = \sqrt{|z\overline{z}|}$, where the absolute value inside the root is the one for real numbers, as usual.
-Apparently this can be used to study the pseudo-Riemannian geometry of $\Bbb L^2 = \Bbb R^2_1$ in the same way that $\Bbb C$ is identified with $\Bbb R^2$.
-But these split-complex numbers form a ring only, and not a field: elements on the diagonals don't have multiplicative inverses. Incidentally, these diagonals correspond to lightlike directions in $\Bbb L^2$ (and such directions cause all sorts of problems).
-I don't know if there is a standard notation for this set of split-complex numbers, but given a function from said set to itself, how do I define things like "being holomorphic"? I can't make sense of the limit $$\lim_{h\to 0} \frac{f(z_0+h)-f(z_0)}{h}$$ since $h$ could approach zero from lightlike directions. Do I just ignore these directions? Most likely I'm overthinking this.
-We could define continuity of $f$ by continuity of its components, and work formally with the above limit to obtain a revised version of the CR equations, and say that $f$ is split-holomorphic if the revised CR equations hold and if the partial derivatives of the components are continuous (copying the Looman-Menchoff theorem), but I am speculating too much and I am unsure about these things.
-
-What is the correct way to do calculus with split-complex numbers, if this is possible? More references are also welcome.
-
-P.s.: there is a nice book called "Geometry of Minkowski Space-Time", by Zampetti, et al, but they do a more algebraic approach and do not talk about the things I'm asking.
-
-REPLY [3 votes]: You've probably found an answer in the past 4 years, but just in case anyone else is curious: split-complex analysis is typically referred to using the term "motor variable", and a notion of holomorphic functions can be found here:
-https://en.wikipedia.org/wiki/Motor_variable#D-holomorphic_functions
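-A tiny illustration (mine) of the ring operations from the question, numbers $x+uy$ with $u^2=1$ stored as pairs $(x,y)$; it also shows the lightlike degeneracy, since $z\overline{z}=0$ on the diagonals:

import math

def mul(p, q):                 # (a + u b)(c + u d) = (ac + bd) + u(ad + bc)
    a, b = p; c, d = q
    return (a*c + b*d, a*d + b*c)

def conj(p):
    a, b = p
    return (a, -b)

def modulus(p):                # |z| = sqrt(|z zbar|) = sqrt(|x^2 - y^2|)
    a, b = p
    return math.sqrt(abs(a*a - b*b))

z = (1.0, 1.0)                 # lightlike: z * conj(z) = 0, so no inverse
print(mul(z, conj(z)), modulus(z))   # (0.0, 0.0) 0.0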
<|endoftext|>
-TITLE: Spaces homotopy equivalent to finite CW complexes
-QUESTION [7 upvotes]: I'm doing a project about Topological Complexity (it doesn't matter what it is for the questions I will ask) and I have proofs for a few results about the bounds of the topological complexity of spaces which are homotopy equivalent to finite CW complexes.
-Now I would like to emphasize the importance of the spaces which are homotopy equivalent to finite CW complexes, i.e., I try to show that there are a lot of spaces we care about that satisfy this property. Therefore I shoot my questions:
-1) Are there any sufficient conditions for Topological Manifolds to be homotopy equivalent to finite CW complexes?
-2) Are there any sufficient conditions for Smooth Manifolds to be homotopy equivalent to finite CW complexes?
-3) Do you know sufficient conditions for other spaces to be homotopy equivalent to finite CW complexes?
-Note that I only care about finite CW complexes.
-Please, I need references to papers or books since I will cite them. I won't prove any of the results since I will use this information only as an informal motivation to show that the results presented in that section of the essay are amazing and broadly useful.
-Background: I'm an undergraduate, hence don't be very concise in your explanations and don't skip too many details please.
-I have found some questions such as:
-https://mathoverflow.net/questions/44021/which-manifolds-are-homeomorphic-to-simplicial-complexes?rq=1
-https://mathoverflow.net/questions/201944/topological-n-manifolds-have-the-homotopy-type-of-n-dimensional-cw-complexes
-But they don't satisfy my curiosity.
-I have also seen Corollary A.12 in Hatcher:
-
-A compact manifold is homotopy equivalent to a CW complex.
-
-But it doesn't say that the CW complex is finite, so that doesn't work for me.
-Maybe I should ask this at mathoverflow?
-Thanks in advance and any help would be appreciated.
-
-REPLY [11 votes]: 1) Every compact topological manifold is homotopy equivalent to a finite CW complex; see here. As a (very sketchy) sketch: You know via Hatcher that they're dominated by a finite CW complex, hence you can apply Wall's obstruction theory to being homotopy equivalent to a finite CW complex: see here. This immediately implies that every simply connected compact manifold is homotopy equivalent to a finite CW complex, and with more difficulty, that this is true for manifolds with e.g. fundamental group $\Bbb Z^n$. The general case is incredibly hard; the reference given in the MathOverflow post is as far as I know essentially the only proof of this fact. I would only suggest reading it if you've got sufficient spunk. (For reference: I don't.)
-2) Every compact smooth manifold is homeomorphic to a finite CW complex. This follows from Morse theory, which on a smooth manifold actually gives you a triangulation. This is much more elementary than (1).
-3) That'd be Wall's finiteness obstruction, mentioned in the first paragraph. The linked notes of Lurie are accessible given a first course in algebraic topology, some experience with homological algebra, and some patience. To use it, you need your spaces to be finitely dominated; this is equivalent to satisfying Lurie's Lemma 6, which is not particularly helpful in practice. Usually you'll start with a space you actually know is finitely dominated and start using the finiteness obstruction then.
-If you're willing to possibly care about countable CW complexes, this brief article of Milnor's implies every mapping space of finite CW complexes has the homotopy type of a countable CW complex.
<|endoftext|>
-TITLE: Proving a norm is Lipschitz
-QUESTION [6 upvotes]: Let $M\in\mathbb{R}^{n\times n}$. Define the function $f\colon\mathbb{R}^n\to\mathbb{R}$ by $f(x)=\Vert Mx\Vert$. Show that $f$ is Lipschitz.
-
-Let $x,y\in\mathbb{R}^n$, then we want to find an $L>0$ such that
-$$\Vert f(x)-f(y)\Vert \le L\Vert x-y\Vert$$
-
-We have
-\begin{align}
\Vert f(x)-f(y)\Vert &= \big\Vert \Vert Mx\Vert-\Vert My\Vert\big\Vert\\
&\le \Vert Mx- My\Vert&\text{reverse triangle ineq.}\\
&\le \Vert M\Vert\Vert x-y\Vert\\
\end{align}
-Taking $L=\Vert M\Vert$ we find $f$ is Lipschitz.
-
-I have a few questions here, some related to Lipschitz continuity and others related to norms.
-
-Firstly, is this working correct?
-I notice that we take $L=\Vert M\Vert$, but could we take anything greater than our chosen $L$? In other words, is the Lipschitz constant unique?
-On a side note, is $\Vert \Vert Mx\Vert \Vert=\Vert Mx\Vert$? In general can we say that $\Vert\Vert a\Vert\Vert = \Vert a\Vert$?
-
-REPLY [5 votes]: 1) A conditional yes. Your proof of the Lipschitz continuity of $f$ is correct, provided you play it completely safe by stating specifically that the matrix norm is induced by the vector norm, i.e.
-\begin{equation}
\|M\| = \sup \{ \|Mx\| \: : \: \|x\| \leq 1 \}
\end{equation}
-and that $\|z\|$ is merely the absolute value of $z$ when $z \in \mathbb{R}$.
-2) Yes, any value larger than $L$ would also serve as a Lipschitz constant for $f$. But $L=\|M\|$ is the smallest value which will work. Simply pick $x \in \mathbb{R}^n$ such that $\|Mx\| = \|M\|$, $\|x\| = 1$ and $y = 0$.
-3) It depends! $Mx \in \mathbb{R}^n$ whereas $\|Mx\| \in \mathbb{R}$, so you are overloading the notation for $\|\cdot\|$. Refer to the remark I made for point 1).
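-A quick numerical spot check (mine) of the Lipschitz bound with the Euclidean norm, where $\|M\|$ is the induced (spectral) norm:

import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
L = np.linalg.norm(M, 2)                  # induced 2-norm of M
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    lhs = abs(np.linalg.norm(M @ x) - np.linalg.norm(M @ y))
    assert lhs <= L * np.linalg.norm(x - y) + 1e-12
print("bound held on 1000 random pairs, L =", L)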
<|endoftext|>
-TITLE: If $f: \mathbb{C} \rightarrow \mathbb{C}$ is analytic and $\lim_{z \to \infty} f(z) = \infty$ show that $f$ is a polynomial
-QUESTION [7 upvotes]: I'm learning about complex analysis and need some help with this problem:
-
-If $f: \mathbb{C} \rightarrow \mathbb{C}$ is analytic and $\lim_{z \to \infty} f(z) = \infty$ show that $f$ is a polynomial (hint: consider the function $g(z) = f(1/z)$).
-
-Recall that poles are points where evaluating the function would entail dividing by zero. Therefore, since $\lim_{z \to \infty} f(z) = \infty$ this means that $\infty$ is a pole of $f$. How do I continue from here and make use of the hint?
-
-I should mention that this problem has already been asked by other members but I could not find any solution using the given hint.
-
-REPLY [5 votes]: Without Laurent series, and assuming $\;f(z)\;$ isn't identically zero (because then the claim is trivially true).
-By the given information there exists $\;M\in\Bbb R^+\;$ such that $\;|f(z)|>1\;\;\;\forall\,z\in\Bbb C\;\;\text{with}\;\;|z|>M\;$.
-It must be that $\;f(z)\;$ has a finite number of zeros $\;z_1,...,z_n\;$, otherwise its set of zeros, which is contained in $\;C_M:=\{\,z\in\Bbb C\;;\;|z|\le M\}\;$, has an accumulation point by Bolzano-Weierstrass, and thus from the identity theorem this would mean $\;f(z)=0\;$.
-From here, $\;g(z):=\frac{f(z)}{\prod\limits_{k=1}^n(z-z_k)}\;$ is analytic and non-zero, and thus so is $\;h(z)=\frac1{g(z)}\;$, and we have for $\;z\in\Bbb C\setminus C_M\;$:
-$$|h(z)|=\frac{|z^n+A|}{|f(z)|}\le|z^n|+A\implies h(z)\;\;\text{is a polynomial without roots}\;(**)\;\implies h(z)=K$$
-a constant, by the Fundamental Theorem of Algebra, and thus so is $\;g(z)\;$:
-$$\frac{f(z)}{\prod\limits_{k=1}^n(z-z_k)}=g(z)=\frac1{h(z)}=\frac1K\implies f(z)=K(z-z_1)\cdot\ldots\cdot(z-z_n)$$
-and we've finished.
-If you need a proof of $\;(**)\;$, ask back.
<|endoftext|>
-TITLE: Prove that $\sigma(F)=\Omega$
-QUESTION [5 upvotes]: Let $F=\{A_1,...,A_n\}\subset P(X)$; $F_a=A_1^{a_1}\cap A_2^{a_2}\cap\cdots \cap A_n^{a_n}$ for $a=(a_1,...,a_n)\in \{0,1\}^n$, where
-$$A_i^{a_i} =
\begin{cases}
A_i, & \text{if } a_i=0 \\
A_i^c, & \text{if } a_i=1
\end{cases}$$
-Define $\Omega=\{\bigcup_{a\in D} F_a : D\subset \{0,1\}^n\}$ (by convention $\bigcup_{a\in \varnothing} F_a =\varnothing$).
-I need to prove that the sigma-algebra generated by $F$ satisfies $\sigma(F)=\Omega$.
-What I have done so far:
-1) $\Omega \subset \sigma(F)$: We know that $F\subset \sigma(F)$, so for all $a=(a_1,...,a_n)\in\{0,1\}^n$ we have $F_a\in \sigma(F)$, hence $\bigcup_{a\in D} F_a \in \sigma(F)$.
-2) $\sigma(F)\subset \Omega$: for this part I wanted to prove that $\Omega$ is a sigma-algebra that contains $F$:
-$a)$ $\varnothing \in \Omega$
-$b)$ Since $\Omega$ is finite ($|\Omega|\le 2^{2^n}$), we consider just finite unions: Let $A,B\in \Omega$: $$A\cup B= (\bigcup_{a\in D_1}F_a)\cup (\bigcup_{a\in D_2}F_a)=\bigcup_{a\in D_1\cup D_2}F_a\in \Omega$$
-$c)$ I'm having trouble checking the complements: Let $A=\bigcup_{a\in D}F_a\in \Omega$; $A^c=(\bigcup_{a\in D}F_a)^c=\bigcap_{a\in D}F_a^c$, but from here how can I check that $A^c\in \Omega$?
-I would really appreciate it if you could help me with this problem (I hope this question won't be marked as a duplicate).
-
-REPLY [3 votes]: If $A=\bigcup_{a\in D}F_a$, then $A^c=\bigcup_{a\in \{0,1\}^n\setminus D}F_a$, since the sets $F_a$, $a\in\{0,1\}^n$, are pairwise disjoint and their union is $X$.
<|endoftext|>
-TITLE: Proving Holder's inequality for Schatten norms
-QUESTION [10 upvotes]: Sticking to the finite dimensional case, Holder's inequality for Schatten norms is given by
-
-$$\left\|AB\right\|_{S^1}\leq\left\|A\right\|_{S^p}\left\|B\right\|_{S^q}$$
-
-for $A,B$ $n\times n$ matrices, $p,q\in[1,\infty]$, and $\frac{1}{p}+\frac{1}{q}=1$.
-So using Young's inequality, the expression I have in mind is the following
-\begin{align}
\frac{\left\|AB\right\|_{S^1}}{\left\|A\right\|_{S^p}\left\|B\right\|_{S^q}}=\frac{1}{\left\|A\right\|_{S^p}\left\|B\right\|_{S^q}}\sum_{i=1}^n|\sigma_i(AB)|&\overset{?}{\leq}\frac{1}{\left\|A\right\|_{S^p}\left\|B\right\|_{S^q}}\sum_{i=1}^n|\sigma_i(A)||\sigma_{\pi(i)}(B)|\\
&\overset{\text{YI}}{\leq}\frac{1}{p\left\|A\right\|_{S^p}^p}\sum_{i=1}^n|\sigma_i(A)|^p + \frac{1}{q\left\|B\right\|_{S^q}^q}\sum_{i=1}^n|\sigma_{\pi(i)}(B)|^q\\
&=\frac{1}{p} + \frac{1}{q}\\
&=1
\end{align}
-Focusing in on the inequality under question,
-$$\sum_{i=1}^n|\sigma_i(AB)|\overset{?}{\leq}\sum_{i=1}^n|\sigma_i(A)||\sigma_{\pi(i)}(B)|$$
-we see that proving the Schatten version of Holder's inequality boils down to proving that there exists a permutation $\pi$ of the indices $\{1,...,n\}$ such that the above inequality holds. Of course maybe this isn't true, but it's the hurdle I ran into when trying to adapt the standard proof of Holder's inequality to the Schatten case.
-Also I don't strictly need all the absolute values since singular values are always non-negative, but originally I was considering only Hermitian matrices, so I decided to include them.
-Edit:
-After doing some numerical tests it looks like permuting the indices isn't necessary. Thus to win the bounty I'm either looking for a proof that
-$$\sum_{i=1}^n\sigma_i(AB)\leq\sum_{i=1}^n\sigma_i(A)\sigma_i(B)$$
-or if you have some other proof of the Schatten Holder's inequality different than the one I've tried to adapt above, then that's fine too.
-
-REPLY [3 votes]: An alternative proof which is based on more standard technology is as follows. First observe that for any $A$ and $B$, there exists a unitary matrix $U$ such that
-$$||AB||_1=|\mathrm{Tr}((AU)^\dagger B)|.$$
-Indeed consider the singular value decomposition
-$$AB=\sum_i \sigma_i(AB)a_i(AB)b_i(AB)^\dagger,$$
-where the singular values $\sigma_i(AB)$ are positive and listed in decreasing order and $a_i$ and $b_i$ are orthonormal sets of vectors. We may then take $U$ to be any unitary such that $U^\dagger a_i(AB)=b_i(AB)$ for all $i$. Therefore, since $||AU||_p=||A||_p$ for any unitary $U$, the result will be proved if we can show that
-$$|\mathrm{Tr}(A^\dagger B)|\leq||A||_p||B||_q$$
-for all $A$ and $B$ (this inequality is also referred to as Holder's inequality for Schatten norms). The proof of this inequality follows first by applying von Neumann's trace inequality, which says that
-$$|\mathrm{Tr}(A^\dagger B)|\leq\sum_i\sigma_i(A)\sigma_i(B),$$
-with the singular values again listed in decreasing order, and then the classical Holder's inequality for $L_p$ spaces, which says that for two complex vectors $n_i$ and $m_i$ we have
-$$\sum_i|n_i||m_i|\leq (\sum_i |n_i|^p)^{1/p}(\sum_i|m_i|^q)^{1/q}.$$
-The classical Holder inequality is proven using Young's inequality
-$$ab\leq a^p/p+b^q/q,$$
-which holds for all $a,b\geq 0$ and $p\geq 1$, $1/p+1/q=1$. The proof of von Neumann's inequality is more involved; one relatively accessible proof by Mirsky based on doubly stochastic matrices (which has some relation to the proof given in the other answer here) is given in: https://link.springer.com/article/10.1007/BF01647331.
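-In the spirit of the asker's "numerical tests", here is a small numpy check (mine) of the inequality $\sum_i\sigma_i(AB)\leq\sum_i\sigma_i(A)\sigma_i(B)$, with all singular values in decreasing order:

import numpy as np

rng = np.random.default_rng(2)
for _ in range(1000):
    A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
    B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
    lhs = np.linalg.svd(A @ B, compute_uv=False).sum()
    rhs = (np.linalg.svd(A, compute_uv=False)
           * np.linalg.svd(B, compute_uv=False)).sum()
    assert lhs <= rhs + 1e-9
print("inequality held on 1000 random complex pairs")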
-Also I don't strictly need all the absolute values since singular values are always non-negative, but originally I was considering only Hermitian matrices, so I decided to include them.
-Edit:
-After doing some numerical tests it looks like permuting the indices isn't necessary. Thus to win the bounty I'm either looking for a proof that
-$$\sum_{i=1}^n\sigma_i(AB)\leq\sum_{i=1}^n\sigma_i(A)\sigma_i(B)$$
-or if you have some other proof of the Schatten Holder's inequality different from the one I've tried to adapt above, then that's fine too.
-
-REPLY [3 votes]: An alternative proof which is based on more standard technology is as follows. First observe that for any $A$ and $B$, there exists a unitary matrix $U$ such that
-$$
-||AB||_1=|\mathrm{Tr}((A^\dagger U)^\dagger B)|.
-$$
-Indeed consider the singular value decomposition
-$$AB=\sum_i \sigma_i(AB)a_i(AB)b_i(AB)^\dagger,$$
-where the singular values $\sigma_i(AB)$ are positive and listed in decreasing order and $a_i$ and $b_i$ are orthonormal sets of vectors. We may then take $U$ to be any unitary such that $U^\dagger a_i(AB)=b_i(AB)$ for all $i$, since then $\mathrm{Tr}(U^\dagger AB)=\sum_i\sigma_i(AB)$. Therefore, since $||A^\dagger U||_p=||A||_p$ for any unitary $U$, the result will be proved if we can show that
-$$
-|\mathrm{Tr}(A^\dagger B)|\leq||A||_p||B||_q
-$$
-for all $A$ and $B$ (this inequality is also referred to as Holder's inequality for Schatten norms). The proof of this inequality follows first by applying von Neumann's trace inequality, which says that
-$$|\mathrm{Tr}(A^\dagger B)|\leq\sum_i\sigma_i(A)\sigma_i(B),$$
-with the singular values again listed in decreasing order, and then the classical Holder's inequality for $L_p$ spaces, which says that for two complex vectors $n_i$ and $m_i$ we have
-$$\sum_i|n_i||m_i|\leq (\sum_i |n_i|^p)^{1/p}(\sum_i|m_i|^q)^{1/q}.$$
-The classical Holder inequality is proven using Young's inequality
-$$ab\leq a^p/p+b^q/q,$$
-which holds for all $a,b\geq 0$ and $p\geq 1$, $1/p+1/q=1$. The proof of von Neumann's inequality is more involved; one relatively accessible proof by Mirsky based on doubly stochastic matrices (which has some relation to the proof given in the other answer here) is given in: https://link.springer.com/article/10.1007/BF01647331.<|endoftext|>
-TITLE: Reduction to a max flow problem from a sudoku like puzzle
-QUESTION [6 upvotes]: Given an $n$ by $n$ grid of which some of the squares are black and some are white. I'm allowed to mark some of these squares and the question is to decide whether a given grid with given black squares can meet these conditions:
-1) Each column has only one marked square.
-2) Each row has only one marked square.
-3) Only white squares can be marked.
-This is similar to how sudoku can only have the same number on only one column and one row. In fact it's an easier problem. However...
-I am struggling to figure out an algorithm that reduces this problem to a max flow network problem.
-I'm thinking something along the lines of making each square in the grid a node in a graph. Then connect your source point to all the white nodes.
-I also believe that in the end the way you prove whether such conditions can be met is by whether or not the max flow through this graph is exactly $n$. Because a solution to the above problem requires that there are exactly $n$ marked squares. Anything less means that there was a row or column that doesn't have any white square or that there is no way of adding points that end up in distinct rows/columns.
-
-REPLY [3 votes]: Suppose you have $n$ rows and $m$ columns.
Create $n$ nodes named $R_i$ to represent the rows.
-Each row can have at most 1 marked square, so add 1 edge entering $R_i$ with a capacity of $1$.
-If Table entry $T_{i,j}$ is white, add it to the graph. Connect an edge from $R_i$ to $T_{i,j}$ with a capacity $1$. This represents "The row has chosen this cell".
-Now we need to represent "and no column is chosen more than once". So add $m$ nodes $C_j$ to the graph to represent the columns. Connect any existing $T_{i, j}$ to $C_j$ with a capacity of $1$, so that the flow into $C_j$ equals how many of its cells are marked.
-Each column can only have 1 marked cell, so add 1 edge to the output of $C_j$ and give it a capacity of $1$ so that it cannot absorb more than 1 cell.
-Add a start node $S$ to feed the rows, and an end node $E$ to absorb the columns. In summary:
-$$\text{Vertices} = \{S, R_1, R_2, \dots R_n, T_{1, 1}, \dots \text{for those cells that are white} \dots T_{n, m}, C_1, C_2, \dots C_m, E\}$$
-And the directed edges $(\text{From}, \text{To}, \text{Capacity})$ are
-$$\begin{array} {c}
-(S, R_1, 1), (S, R_2, 1), \dots, (S, R_n, 1)\\
-\\
-(R_1, T_{1,1}, 1), (R_1, T_{1, 2}, 1), \dots, (R_1, T_{1, m}, 1) \\
-(R_2, T_{2,1}, 1), (R_2, T_{2, 2}, 1), \dots, (R_2, T_{2, m}, 1) \\
-\vdots \\
-(R_n, T_{n,1}, 1), (R_n, T_{n, 2}, 1), \dots, (R_n, T_{n, m}, 1) \\
-\\
-(T_{1,1}, C_1, 1), (T_{2,1}, C_1, 1), \dots, (T_{n, 1}, C_1, 1) \\
-(T_{1,2}, C_2, 1), (T_{2,2}, C_2, 1), \dots, (T_{n, 2}, C_2, 1) \\
-\vdots \\
-(T_{1,m}, C_m, 1), (T_{2,m}, C_m, 1), \dots, (T_{n, m}, C_m, 1) \\
-\\
-(C_1, E, 1), (C_2, E, 1), \dots, (C_m, E, 1)
-\end{array}$$<|endoftext|>
-TITLE: solve for variable in combination
-QUESTION [5 upvotes]: I have the combination ${n\choose 11}=12376$ and am looking to solve for $n$. It turns out to be $17$. Of course I can use a brute force approach where I just plug numbers in for $n$, but am looking for a cleaner method.
-So ${{n!}\over {11!(n-11)!}}=12376$
-$n(n-1)(n-2)...(n-10)=11!(12376)$
-By the pigeonhole principle I can eliminate $10!$ and be left with
-$n(\text{something})=11(12376)=11(2^3)(7)(13)(17)$
-Thanks.
-
-REPLY [2 votes]: As an approximation, I would note that the AM-GM gives
-$$
-\begin{align}
-12376
-&=\frac{n(n-1)\cdots(n-9)(n-10)}{11!}\\
-&\lesssim\frac{(n-5)^{11}}{11!}
-\end{align}
-$$
-Therefore,
-$$
-n\gtrsim5+(11!\cdot12376)^{1/11}=16.5629
-$$
-Trying $n=17$ gives $\binom{17}{11}=12376$.<|endoftext|>
-TITLE: Show that a connected regular space having more than one point is uncountable
-QUESTION [6 upvotes]: Two questions on which I am stuck:
-1. Show that a connected normal space having more than one point is uncountable.
-2. Show that a connected regular space having more than one point is uncountable.
-Let $X$ be a connected normal space. Then we can always find two disjoint closed sets in $X$, say $\{p\},\{q\}$. Then by Urysohn's Lemma there exists a continuous function $f:X\to [a,b]$ such that $f(p)=a,f(q)=b$.
-As $X$ is connected so $f(X)$ is connected and connected subsets of $[a,b]$ are only $[m,n]$ and $(m,n);a\le m\le n\le b$ which are uncountable and hence $X$ is so.
-Is the proof correct?
-However I am stuck on the second question. How should I use the fact that the space is regular? Any help.
-
-REPLY [8 votes]: The first proof is quite correct: it's easier even to say that $f[X] = [0,1]$ (just use $a = 0, b= 1$, as in the standard formulation of Urysohn), as the only connected subset of $[0,1]$ that contains $0$ and $1$ equals $[0,1]$ (if it missed $a \in (0,1)$ we'd have an immediate disconnection). And $|X| \ge |f[X]|$ etc.
-Suppose $X$ is countable.
Then $X$ is Lindelöf (trivially) and a regular Lindelöf space is normal (see this answer, e.g., or search for other proofs if your text does not cover this). Now by the previous part, $X$ is uncountable, contradiction.<|endoftext|>
-TITLE: Product of two transcendental numbers is transcendental
-QUESTION [5 upvotes]: Let $\alpha,\beta$ be transcendental numbers. Which of the following are true?
-1)$\alpha\beta\ \text{ is transcendental}$.
-2)$\mathbb{Q}(\alpha)\ \text{is isomorphic to }\mathbb{Q}(\beta)$
-3)$\alpha^\beta\ \text{is transcendental }$
-4)$\alpha^2\ \text{is transcendental}$
-I know option 4 is true. And I feel option 1 is also true, but I don't know the exact reason. While I have no idea for the remaining two.
-
-REPLY [4 votes]: Here is a proof that point 3 is false that doesn't use any difficult theorems.
-As $\gamma$ ranges through all transcendental numbers, the number $2^\gamma$ takes on uncountably many values. Since only countably many of these values can be algebraic, there must exist a transcendental $\gamma$ for which $2^{\gamma}$ is also transcendental.
-Now let $\alpha = 2^{\gamma}$ and $\beta = 1/\gamma$.<|endoftext|>
-TITLE: Sectional Curvature, Gauss curvature
-QUESTION [8 upvotes]: I have a problem with a computation which shows that the sectional curvature coincides with the Gauss curvature in dimension 2. This is the definition of sectional curvature I am using:
-$K_{XY}(p)=-\frac{R(X,Y,X,Y)}{\|X\|^2 \|Y\|^2 - \langle X,Y\rangle^2}$
-Conceptually I understood that the previous one is a generalization of the Gauss curvature, but I don't understand how to recover the Gauss curvature from the sectional curvature; in particular I do not see how $-R(E_1,E_2,E_1,E_2)=eg-f^2$ where $eg-f^2$ is the determinant of the second fundamental form, and $E_1,E_2$ are the coordinate vector fields.
-
-REPLY [5 votes]: This is an interesting question. Jack gives a hint to the solution, but let me fill in the details.
-What we want to show is how $R(X,Y,X,Y)$ is related to the second fundamental form. The idea is to regard the surface $M$ as a submanifold of $\mathbb R^3$; note that here the metric $g$ of $M$ is induced by the canonical metric of $\mathbb R^3$.
-Now, the Levi-Civita connection $\nabla$ on $(M,g)$ satisfies $$(\nabla_XY)(p)=(\nabla_X^EY)(p)^{\top}$$
-where $(\nabla^E)$ is the Levi-Civita connection on $\mathbb R^3$, and $v^{\top}$ means projection onto the tangent space of $M$. Let's introduce Gauss's equation in the following theorem:
-Gauss Theorem: The curvature tensor $R$ of a submanifold $M\subset \mathbb R^n$ is given by the Gauss equation $$\langle R(X,Y)W,Z\rangle=\alpha(X,Z)\cdot \alpha(Y,W)-\alpha(X,W)\cdot \alpha(Y,Z)$$
-Where $\alpha(X,Y)=(\nabla_X^EY)(p)-(\nabla_X^EY)(p)^{\top}=(\nabla_X^EY)(p)^{\bot}$, i.e., the projection of the vector onto the normal space, and $(\nabla^E)$ is the canonical Levi-Civita connection on $\mathbb R^n$. One can see this reference for details, in sections 1.8–1.10.
-Specifically, in our case, $M$ is a two dimensional surface in $\mathbb R^3,$ where we have the parameterization $r=r(u,v)$, with $\{r_u, r_v\}$ spanning the tangent bundle. Also in $\mathbb R^3$, the normal space of $M$ at a point is spanned by a unit normal vector $n$, and we can write $\alpha(X,Y)=\nabla_X^EY\cdot n$.
And by Gauss theorem, we have $$-\langle R(r_u,r_v)r_u,r_v\rangle=\alpha(r_u,r_u)\cdot \alpha(r_v,r_v)-\alpha(r_u,r_v)\cdot \alpha(r_v,r_u)$$
-Now the second fundamental form is given by $L=r_{uu}\cdot n, M= r_{uv}\cdot n$, and $N=r_{vv}\cdot n$ (note that here I use different notation for the second fundamental form from yours).
-$$\alpha(r_u,r_u)=\nabla_{r_u}^Er_u\cdot n=\frac{\partial}{\partial u}r_u\cdot n=r_{uu}\cdot n=L,$$
-and the other expressions are similar, thus we have
-$$-\langle R(r_u,r_v)r_u,r_v\rangle=LN-M^2$$
-which gives us the determinant of the second fundamental form, as desired.<|endoftext|>
-TITLE: No simplifying identities for any single nonzero number under addition.
-QUESTION [5 upvotes]: Consider the structure $(\mathbb{R}, +, r)$, where $r$ is a nonzero real number. Are the commutative and associative identities already sufficient to derive all universally valid equations in that structure? Basically, is 0 the only number that behaves in a special manner under addition?
-
-REPLY [2 votes]: Yes, this is true. Suppose $s(x_0,x_1,\dots,x_n)$ and $t(x_0,x_1,\dots,x_n)$ are terms in the language of addition such that $s(r,x_1,\dots,x_n)=t(r,x_1,\dots,x_n)$ for all $x_1,\dots,x_n\in\mathbb{R}$. We can choose $a_1,\dots,a_n\in\mathbb{R}$ such that $r,a_1,\dots,a_n$ are linearly independent over $\mathbb{Q}$. Let $F\subset\mathbb{R}$ be the subsemigroup generated by $r,a_1,\dots,a_n$. The linear independence of $r,a_1,\dots,a_n$ implies that $F$ is freely generated by $r,a_1,\dots,a_n$ as a commutative semigroup (here we use the fact that the free commutative semigroup on a set $\{x_0,x_1,\dots,x_n\}$ is the set of formal expressions $\sum m_i x_i$ where $m_i\in\mathbb{N}$ and at least one $m_i$ is nonzero). Thus the identity $s(r,a_1,\dots,a_n)=t(r,a_1,\dots,a_n)$ implies that actually $s(x_0,x_1,\dots,x_n)=t(x_0,x_1,\dots,x_n)$ whenever $x_0,\dots,x_n$ are elements of any commutative semigroup.<|endoftext|>
-TITLE: Explain "homotopy" to me
-QUESTION [96 upvotes]: I have been struggling with general topology and now, algebraic topology is simply murder. Some people seem to get on alright, but I am not one of them unfortunately.
-Please, the answer I need is ideally something very elaborate and extensive, possibly with easy-to-understand examples. Rewriting the definitions with concrete mathematical language and symbols won't help (those are readily accessible in my lecture notes). I want an explanation of what is happening and which part of the solid definitions is telling me so.
-One thing I would like to make clear is, I know the definitions, but I don't understand them; I can reiterate them upon request, consult my lecture notes. The issue is that I am writing down something without knowing what it means. It's like writing ancient Greek. I can remember the shapes of each character, their order and write some sentence down. But that's it. Can I explain it to someone with my own words? Break it down? Absolutely not. That is why I am here to ask someone who fully understands these ideas to do exactly that for me: break it down. Go in slow motion. Show me the moves and codes behind it.
-Here's the definition I have for homotopy:
-
-A homotopy between maps $f,g:X \rightarrow Y$ is a map $h: X \times I \rightarrow Y$ such that
-$$h(x,0)=f(x), h(x,1)=g(x) \in Y$$ where $x \in X$ and $I=[0,1]$. We say maps $f,g$ are homotopic.
-
-It adds that,
-
-A homotopy deforms the map $f$ continuously to $g$.
-
-So, I am given two sets $X,Y$ whatever they are and two maps $f,g$ that take an element of $X$ to an element of $Y$. I don't know if $f,g$ are bijective, only injective or surjective or whatever. No information on that. Just maps.
-And this "homotopy" is a ... "map between maps"?
-And even if so, what exactly is it doing? All the explanation seems to be done with this single line
-
-$$h(x,0)=f(x), h(x,1)=g(x) \in Y$$
-
-but no, I don't get what's happening. So $X$ has a bunch of elements $x$, and the product space $X \times I$ gives me elements of the form $(x,t) \in X \times I$. Okay. But this map $h$ qualifies as a homotopy as long as the above holds? Then why not just always define $h$ as $h(x,t)=f(x)$ for any $t \neq 1$ and $h(x,t)=g(x)$ for $t=1$, just as how we might define a piecewise function? Then I can define this "homotopy" on any maps.
-Sure, it adds "deforms $f$ CONTINUOUSLY to $g$" but how is that stated in the definition itself? I see it nowhere.
-Here is an example in my notes which didn't help me understand the definition,
-
-Take $X=\{x\}$ the space with a single element $x$. Then a map $f:X \rightarrow Y$ is the same as an element $f(x) \in Y$. A homotopy $h: f \simeq g: X \rightarrow Y $ is the same as a path $h: I \rightarrow Y$ with initial point $h(0)=f(x)$ and terminal point $h(1)=g(x) \in Y$. A homotopy $h: f \simeq f: X \rightarrow Y$ is the same as a closed path $h: I \rightarrow Y$.
-
-Well, first off what is a "path"? Intuition also doesn't make sense because when mapping one element to another, how can there be different "paths"? $x$ goes to $y$. Done. It's not like going from England to Singapore via either Amsterdam or Frankfurt (thus different paths) is it? Unless it's a map $X \rightarrow Z \rightarrow Y$ and telling me that $x \in X$ goes to $z_1 \in Z$ then to $y \in Y$ or $z_2 \in Z$ and then to $y \in Y$, that might be different paths from $x$ to $y$. But here, it is talking about $X$ and $Y$ only.
-And why is this ignoring $x$? It says $h: I \rightarrow Y$? The problem I also have is, this is labelled "example" but it's not specific at all. "$h(0)=f(x)$ and $h(1)=g(x)$", over. So? What is this $h$ and how has it been defined? As a map, as a function of some sort? Maybe there are multiple homotopies thinkable, but then what are one or two of them?
-It's like saying $f(1)=1$ and $f(2)=4$. Done. Well, to a newbie, maybe secondary school kids, it might be nice to give them an example, be it linear $f(x)=ax+b$ or $f(x)=x^2$. To the eyes of the experienced, it might make perfect sense; a specific homotopy might pop out in their minds like popcorn, but not in mine.
-It's an utter nightmare. I know this is "abstract" math but can not some more specificness be put into it? Pictures and diagrams perhaps?
-This is only the tip of the massive, massive confusion and dumbfoundedness I am experiencing in this area of study. Maybe once it "clicks" it all goes down like an avalanche but so far it's nothing but counter-intuitive.
-Can someone please make this possible for me to digest? Suggestions to good websites with examples and diagrams and extensive explanations are also welcome. Thank you
-
-REPLY [6 votes]: So the first thing that is going on is that somewhere in your lecture notes or in your text book, they defined "map" to mean "continuous function".
-Now, I only know this because your quotes won't make sense if that isn't true. So I work backwards from the definitions you are providing me to work out the definitions of the terms used in your definitions.
This is sub-optimal, but mathematics is often full of jargon: being able to guess at jargon helps.
-A good way to approach this problem is whenever someone uses a term that isn't the totally typical one (i.e. uses map instead of function), put the words "sufficiently nice" in front of it, and suspend judgement on how nice it needs to be until later. You'll sometimes guess wrong, but it helps offload the work required to understand the definition until later.
-And "nice" basically means "has just barely the properties needed to make the definition/theorem true, in a way that is consistent with the other theorems and definitions nearby".
-This relies on the fact that mathematicians are lazy, and define away problems when they run into them.
-
-Now, let's apply this guess.
-
-A homotopy between maps $f,g:X \rightarrow Y$ is a map $h: X \times I \rightarrow Y$ such that
-$$h(x,0)=f(x), h(x,1)=g(x) \in Y$$ where $x \in X$ and $I=[0,1]$. We say maps $f,g$ are homotopic.
-
-and it adds
-
-A homotopy deforms the map $f$ continuously to $g$.
-
-Replace map with continuous function:
-
-A homotopy between continuous functions $f,g:X \rightarrow Y$ is a continuous function $h: X \times I \rightarrow Y$ such that
-$$h(x,0)=f(x), h(x,1)=g(x) \in Y$$ where $x \in X$ and $I=[0,1]$. We say continuous functions $f,g$ are homotopic.
-
-and it adds
-
-A homotopy deforms the continuous function $f$ continuously to $g$.
-
-which makes things make a bit more sense.
-Next, I'll deconstruct this:
-
-$$h(x,0)=f(x), h(x,1)=g(x) \in Y$$
-
-So $h|_{X \times \{0\}} = f$ and $h|_{X \times \{1\}} = g$. $h(?,0)$ is $f(?)$ and $h(?,1)$ is $g(?)$.
-(For some function $z$, $z|_{Set}$ is $z$ restricted to $Set$, i.e. if we only talk about the part of $z$ that is on $Set$, what function do we get? Here I'm lazy again, and despite the fact that $h|_{X \times \{0\}}$ is a function from $X \times \{0\} \rightarrow Y$ and $f$ is a function from $X \rightarrow Y$, I say they are equal, because $X \times \{0\}$ is basically $X$.)
-And, most importantly, it is continuous -- so its behavior at the end points restricts (to some degree) what it does in the middle!
-The existence of a homotopy means $f$ and $g$ are homotopic. Often we do not care what the homotopy looks like, merely that it exists.
-If it exists, then we can "continuously deform" $f$ into $g$ by looking at $h(?,t)$ where $t$ goes from $0$ to $1$. When $t=0$ we get $f$, when $t=1$ we get $g$, and in the middle we get a continuous deformation between the two.
-You can think of $h:I \rightarrow (X \rightarrow Y)$, but that requires a friendly definition of continuity on continuous functions from $X \rightarrow Y$, and it might lead to some complications. $h$ in a sense maps $I$ (the unit interval) to a continuous deformation of functions that starts at $f$ and ends at $g$.
-
-
-Take $X=\{x\}$ the space with a single element $x$. Then a map $f:X \rightarrow Y$ is the same as an element $f(x) \in Y$. A homotopy $h: f \simeq g: X \rightarrow Y $ is the same as a path $h: I \rightarrow Y$ with initial point $h(0)=f(x)$ and terminal point $h(1)=g(x) \in Y$. A homotopy $h: f \simeq f: X \rightarrow Y$ is the same as a closed path $h: I \rightarrow Y$.
-
-Now here we see a mathematician being lazy. If you have the space $I \times \{x\}$, that is basically the same space as $I$, because the cross product of a space with a set containing a single element is basically the same as the space. What the single element is is mostly unimportant.
-So they silently and without comment drop the $\{x\}$. Lazy mathematician.
-A "path" from a point $a$ to a point $b$ both in $Y$ is just a map $p : I \rightarrow Y$ where $p(0) = a$ and $p(1) = b$. And remember a map is a continuous function.
-Each path from Singapore to England is a different path. Go north 2 km, then straight to England? Yep. Travel in a spiral around the world 5 million times, then stop in England? Yep. Go to the moon? Yep. Travel into a black hole, exit out a white hole and then swing around to England? Yep. Use the Cantor set so you only move on a set of measure zero and end up in England? Yep. All (categories of) paths to England.
-But do remember that we don't have differentiability: just continuity at this point. Paths can be non-differentiable.
-A path from 1 to 2 in R might be $f:I \rightarrow R$ such that $f(x) = x+1$. Another is $f(x) = x^2+1$. Or you could use the Cantor set to define the function.
-In topology, it is the existence of any path you more often care about, not the definition of a given path. We get theorems about paths existing without having to construct them.
-Now, what is worse is that pictures cannot work: they can give trivial examples (like the ones above), but a non-differentiable path cannot be drawn: every drawn line is differentiable.<|endoftext|>
-TITLE: How many solutions for an equation with simple restrictions
-QUESTION [9 upvotes]: I'm working on an assignment in which I have to count the number of solutions for this particular equation: $$x_1+x_2+x_3+x_4=20$$ for non-negative integers with $x_1<8 $ and $x_2<6$
-I'm aware that this kind of a task isn't that complicated, but I don't get combinatorics in general that well.
-So I've tried the two following approaches to get this done.
-Firstly I tried to substitute the variable $x$:
-$x_1+x_2+x_3+x_4=20 \Leftrightarrow y_1+y_2+y_3+y_4=34$
-in which $y_1=x_1+8$ and $y_2=x_2+6$ (because $x_1=y_1-8$ and $x_2=y_2-6$)
-Following this approach the total number of possible solutions would be
-$${34+3 \choose 3} $$
-But I'm not sure if it's the right solution.
-The second approach is to sum all of the possible values that $x_1$ and $x_2$ could possibly take, that is, $x_1=0,1,2,3,4,5,6,7$ and $x_2=0,1,2,3,4,5,6$
-And then count all the possibilities for each of the variables $${20 -x_1-x_2+1\choose 1}$$
-and sum them together like this:
-$${21\choose 1}+{20\choose 1}+{19\choose 1}+{18\choose 1}+... $$
-and so on...
-I'm sure I'll get the correct number with this one, but I'm not feeling like summing all of these possibilities. There's got to be a better, more elegant way to deal with this.
-My professor gave me a hint that I should do it using the complement.
-
-REPLY [9 votes]: To determine the number of solutions of the equation
-$$x_1 + x_2 + x_3 + x_4 = 20 \tag{1}$$
-in the non-negative integers subject to the restrictions $x_1 < 8$ and $x_2 < 6$, we subtract the number of solutions in which $x_1 > 7$ or $x_2 > 5$ from the number of solutions of the equation.
-A particular solution of equation 1 corresponds to the placement of three addition signs in a row of $20$ ones. For instance,
-$$1 1 1 1 + 1 1 1 1 1 + 1 1 1 1 1 1 + 1 1 1 1 1$$
-corresponds to the solution $x_1 = 4$, $x_2 = 5$, $x_3 = 6$, and $x_4 = 5$, while
-$$+ 1 1 1 1 1 1 1 1 1 + 1 1 1 1 1 1 1 + 1 1 1 1$$
-corresponds to the solution $x_1 = 0$, $x_2 = 9$, $x_3 = 7$, and $x_4 = 4$.
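-As an aside, the inclusion-exclusion total derived below can be checked by brute force; here is a minimal sketch in Python (my own illustration, assuming nothing beyond the problem statement):
-from itertools import product
-
-# Count solutions of x1 + x2 + x3 + x4 = 20 in non-negative integers
-# with x1 < 8 and x2 < 6, by direct enumeration; x4 is determined.
-count = sum(1 for x1, x2, x3 in product(range(8), range(6), range(21))
-            if 20 - x1 - x2 - x3 >= 0)
-print(count)  # should agree with C(23,3) - C(15,3) - C(17,3) + C(9,3)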
Thus, the number of solutions of equation 1 is the number of ways three addition signs can be inserted into a row of $20$ ones, which is
-$$\binom{20 + 3}{3}$$
-since we must choose which three of the $23$ symbols ($20$ ones and $3$ addition signs) will be addition signs.
-Suppose $x_1 > 7$. Then $y_1 = x_1 - 8$ is a non-negative integer. Substituting $y_1 + 8$ for $x_1$ in equation 1 yields
-\begin{align*}
-y_1 + 8 + x_2 + x_3 + x_4 & = 20\\
-y_1 + x_2 + x_3 + x_4 & = 12 \tag{2}
-\end{align*}
-Equation 2 is an equation in the non-negative integers with
-
 $$\binom{12 + 3}{3} = \binom{15}{3}$$
-
-solutions.
-Suppose $x_2 > 5$. Then $y_2 = x_2 - 6$ is a non-negative integer. Substituting $y_2 + 6$ for $x_2$ in equation 1 yields
-\begin{align*}
-x_1 + y_2 + 6 + x_3 + x_4 & = 20\\
-x_1 + y_2 + x_3 + x_4 & = 14 \tag{3}
-\end{align*}
-Equation 3 is an equation in the non-negative integers with
-
 $$\binom{14 + 3}{3} = \binom{17}{3}$$
-
-solutions.
-If we subtract the number of solutions of equation 2 and equation 3 from the number of solutions of equation 1, we will have subtracted those solutions in which $x_1 > 7$ and $x_2 > 5$ twice. Thus, we must add back the number of solutions in which $x_1 > 7$ and $x_2 > 5$.
-Suppose $x_1 > 7$ and $x_2 > 5$. Then $y_1 = x_1 - 8$ and $y_2 = x_2 - 6$ are non-negative integers. Substituting $y_1 + 8$ for $x_1$ and $y_2 + 6$ for $x_2$ in equation 1 yields
-\begin{align*}
-y_1 + 8 + y_2 + 6 + x_3 + x_4 & = 20\\
-y_1 + y_2 + x_3 + x_4 & = 6 \tag{4}
-\end{align*}
-Equation 4 is an equation in the non-negative integers with
-
 $$\binom{6 + 3}{3} = \binom{9}{3}$$
-
-solutions.
-By the Inclusion-Exclusion Principle, the number of solutions of equation 1 in the non-negative integers subject to the restrictions that $x_1 < 8$ and $x_2 < 6$ is
-
 $$\binom{23}{3} - \binom{15}{3} - \binom{17}{3} + \binom{9}{3}$$<|endoftext|>
-TITLE: Prime counting function; when is it true that $\pi(n) > \pi(2n) -\pi(n)$?
-QUESTION [6 upvotes]: Let $\pi$ be the prime counting function.
-Under what conditions is it proven true that $\pi(n) > \pi(2n) -\pi(n)$, if at all?
-
-REPLY [5 votes]: I've written the program Akiva Weinberger suggested above. This is just a straightforward interpretation of the sieve of Eratosthenes, in R.
-n = 30092
-top = 2*n
-isPrime = rep(TRUE, top)
-isPrime[1] = FALSE
-
-nextprime = 2
-while (nextprime < sqrt(top)){
-    isPrime[seq(2*nextprime, floor(top/nextprime)*nextprime, nextprime)] = FALSE
-    nextprime = min(which(isPrime[(nextprime+1):top])) + nextprime
-}
-
-#isPrime[n] is now TRUE if n is prime and FALSE otherwise
-
-primePi = cumsum(isPrime) #prime counting function, denoted as pi above
-
-f = primePi[seq(2, 2*n, 2)] - 2*primePi[1:n]
-
-which(f>0)
-
-The output is the list [1]. That is, $\pi(2k) > 2\pi(k)$ for $k = 1$ and no other $k \le 30092$. As Barry Cipra showed above, we can prove the desired inequality for larger values of $k$ from known bounds.
-If we want to consider the possibility that $\pi(2k) = 2\pi(k)$, we can replace the last line with which(f>=0). The output here is [1, 2, 4, 10]. And in fact we have $2 = \pi(4) = 2 \pi(2), 4 = \pi(8) = 2 \pi(4), 8 = \pi(20) = 2\pi(10)$.<|endoftext|>
-TITLE: To evaluate integral using Beta function - Which substitution should I use?
-QUESTION [6 upvotes]: $$\int_{0}^{1} \frac{x^{m-1}(1-x)^{n-1}}{(a+bx)^{m+n}}dx = \frac{B(m,n)}{(a+b)^ma^n}$$
-I have to use some kind of substitution but I do not understand what to use and why?
-Thanks
-
-REPLY [3 votes]: Let's try the substitution
-$$x=\frac{1-y}{1+cy}$$
-so that $1-x=(1+c)\frac{y}{1+cy}$.
-Then, when $x=0$, $y=1$ and when $x=1$, $y=0$. We also have $dx=-(1+c)\frac{1}{(1+cy)^2}\,dy$. Then, we can write
-$$\begin{align}\int_0^1\frac{x^{m-1}(1-x)^{n-1}}{(a+bx)^{m+n}}\,dx&=\int_0^1 \frac{\left(\frac{1-y}{1+cy}\right)^{m-1}\left((1+c)\frac{y}{1+cy}\right)^{n-1}}{\left(a+b\frac{1-y}{1+cy}\right)^{m+n}}\,(1+c)\frac{1}{(1+cy)^2}\,dy\\\\
-&=\frac{(1+c)^n}{(a+b)^{n+m}}\int_0^1\frac{y^{n-1}(1-y)^{m-1}}{\left(1-\frac{b-ac}{a+b}y\right)^{m+n}}\,dy
-\end{align}$$
-Choosing $c=b/a$, we obtain
-$$\begin{align}
-\int_0^1\frac{x^{m-1}(1-x)^{n-1}}{(a+bx)^{m+n}}\,dx&=\frac{1}{a^n(a+b)^m}\int_0^1 y^{n-1}(1-y)^{m-1}\,dy\\\\
-&=\frac{1}{a^n(a+b)^m}B(m,n)
-\end{align}$$
-And we are done!<|endoftext|>
-TITLE: On the definition of double categories?
-QUESTION [8 upvotes]: I'm trying to understand double categories but I'm having a hard time.
-A preliminary definition is:
-Definition. Let $\mathscr{C}$ be a category. We say $\mathscr{I}$ is an internal category to $\mathscr{C}$ if $\mathscr{I}=(\mathscr{I}_0, \mathscr{I}_1, s, t, u, \circ)$ where:
-$(i)$ $\mathscr{I}_0$ is an object of $\mathscr{C}$;
-$(ii)$ $\mathscr{I}_1$ is an object of $\mathscr{C}$;
-$(iii)$ $s, t:\mathscr{I}_1\longrightarrow \mathscr{I}_0$ are morphisms of $\mathscr{C}$;
-$(iv)$ $u:\mathscr{I}_0\longrightarrow \mathscr{I}_1$ is a morphism of $\mathscr{C}$;
-$(v)$ $\circ:\mathscr{I}_1\times_{\mathscr{I}_0}\mathscr{I}_1\longrightarrow \mathscr{I}_1$ is a morphism of $\mathscr{C}$.
-These data are subject to the properties which define a category whose class of objects is $\mathscr{I}_0$, whose set of morphisms is $\mathscr{I}_1$, and where $s$ is the source map, $t$ is the target map, $u$ is the identity assigning map and where $\circ$ is the composition.
-In other words, we're just interpreting the definition of a category $\mathscr{I}$ inside the category $\mathscr{C}$.
-Definition. A double category is a category internal to $\mathbf{Cat}$.
-Above $\mathbf{Cat}$ stands for the category of all categories (maybe small?).
-I'm having some trouble understanding this definition, for I can't think of an example to have in mind.
-Can anyone provide me some down-to-earth examples? Furthermore, is there some modern reference which deals with double categories?
-Thanks.
-
-REPLY [3 votes]: I see that this question is quite old; nevertheless, maybe this answer could be of help to someone, if not the OP.
-Double categories are not as rare as one would expect.
-For instance there is a double category of adjunctions, as shown in Mac Lane's Categories for the Working Mathematician.
-Another double category is the one having sets and functions forming the object-category and binary relations and relative morphisms forming the arrow-category.
-Then there are families of double categories that can be built from other categories. For instance for every category $\mathbf C$ you have a double category whose object-category is $\mathbf C$ itself and whose arrow-category is...well, the arrow category of $\mathbf C$ (that is, the category having arrows of $\mathbf C$ as objects and commutative squares as morphisms; this is spelled out below).
-Of course the examples become even more numerous if you relax the strictness of the composition functor and enter the world of weak double categories: these are basically what bicategories are for strict 2-categories.
-Many of these weak double categories arise from a general construction, the module category construction.
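-To spell out the arrow-category example in the notation of the internal-category definition above (a sketch using only what was already stated): take $\mathscr{I}_0=\mathbf C$ and $\mathscr{I}_1=\mathbf C^{\to}$, so that an object of $\mathscr{I}_1$ is an arrow $f:A\to B$ of $\mathbf C$ and a morphism $f\Rightarrow g$ is a commutative square
-$$\begin{array}{ccc}A&\xrightarrow{\;a\;}&A'\\ f\big\downarrow&&\big\downarrow g\\ B&\xrightarrow{\;b\;}&B'\end{array}\qquad\text{with } g\circ a=b\circ f.$$
-Here $s,t:\mathscr{I}_1\to\mathscr{I}_0$ are the functors sending $f:A\to B$ to $A$ and to $B$ (and a square to $a$ and to $b$), $u:\mathscr{I}_0\to\mathscr{I}_1$ sends an object $A$ to $\mathrm{id}_A$, and $\circ:\mathscr{I}_1\times_{\mathscr{I}_0}\mathscr{I}_1\to\mathscr{I}_1$ pastes squares, composing the vertical arrows.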
-If you need additional details feel free to ask.<|endoftext|>
-TITLE: Function $\Bbb Q\rightarrow\Bbb Q$ with everywhere irrational derivative
-QUESTION [16 upvotes]: As in topic, my question is as follows:
-
-Is there a function $f:\Bbb Q\rightarrow\Bbb Q$ such that $f'(q)$ exists and is irrational for all $q\in\Bbb Q$?
-
-For the sake of completeness, I define $f'(q)$ as the limit $\lim\limits_{h\rightarrow 0}\frac{f(q+h)-f(q)}{h}$ where $h$ ranges over rational numbers. I don't know of any different "reasonable" definition of derivative for a function from $\Bbb Q$ to itself, but if you can find an example of a function like in question, or prove that there is none, for some different notion of derivative, I would love to see it.
-I can't provide much background on this question, it's just something I've been wondering about for the past few days.
-Thanks in advance for all feedback.
-
-REPLY [11 votes]: Yes there are many of them.
-Let $\alpha$ be any irrational number and let's build a function whose derivative is $\alpha$.
-We pick an enumeration of the rationals $\{r_1,r_2,r_3,\ldots\}$ and we will choose each $f(r_n)$ in order. At the same time, in order to make $f'(r_n) = \alpha$, we will decide how to squeeze the graph of $f$ near $r_n$.
-Suppose we have chosen $n$ points and that we have restricted the remaining graph of $f$ to some open set $U_n \subset \Bbb R^2$ where $\pi(U_n) = \Bbb R \setminus \{r_1,\ldots,r_n\}$ ($\pi : \Bbb R^2 \to \Bbb R$ is the projection on the $x$-axis).
-First, we pick a rational value $y_{n+1}$ for $f(r_{n+1})$ such that $(r_{n+1},y_{n+1}) \in U_n$ ($U_n \cap \pi^{-1}(r_{n+1})$ is nonempty by the induction hypothesis, and $U_n$ is open, so we can find a rational value in there).
-Next, we choose two parabolas tangent at $(r_{n+1},y_{n+1})$ with slope $\alpha$ (one of them upside down) and in particular we choose their leading coefficient large enough (in absolute value) so that the upper parabola doesn't meet the lower border of $U_n$ and the lower parabola doesn't meet the upper border of $U_n$ (those borders are a finite number of parabola pieces so this is possible).
-Then we choose $U_{n+1}$ to be the intersection of $U_n$ and the region between the two parabolas. Then $\pi(U_{n+1}) = \pi(U_n) \setminus \{r_{n+1}\}$, and any function whose graph stays in $U_{n+1}$ will have a derivative $\alpha$ at $r_{n+1}$.
-Once we have done this for every $n$, we have a function $\Bbb Q \to \Bbb Q$ "differentiable" everywhere with derivative $\alpha$.
-Though, it might not look good and it may not have a continuous extension to $\Bbb R$. Heck, you can even choose any function $g : \Bbb Q \to \Bbb R$ and force $f' = g$.<|endoftext|>
-TITLE: How to see that the prime gaps function isn't eventually monotonic?
-QUESTION [7 upvotes]: Let $g(n)$ be the distance between the $n$th prime and the next.
-By elementary means we can see that $g(n)$ is not eventually constant and that $g(n)$ is not strictly monotonic.
-Further we know that it isn't eventually monotonic (meaning $g(k) \le g(k+1) \le g(k+2) \le \cdots$) by extremely advanced theorems like Zhang's result that some finite gap occurs infinitely often.
-It was hinted to me that there is an easier proof though, one that doesn't depend on extremely advanced results. Could anyone point me to one please?
-
-REPLY [4 votes]: In brief: You can tweak the linked-to argument. All you need in addition is a reasonable bound on the number of consecutive primes that have the same difference.
It is not hard to show a bound linear in the difference.
-Then proceed as in the linked question, that is, show that this leads to an upper bound on the number of primes below a certain threshold that contradicts well-known results.
-There it is that the number of primes below $N^2$ is at most linear in $N$. Here it will be that the number of primes below $N^2$ is at most quadratic in $N$. Both are absurd.
-In detail:
-Let us first show that the number of consecutive primes with difference $d$ is at most $2d$.
-A difference $d$ is fixed. Let $p$ be an auxiliary prime that does not divide $d$, for example the next prime after $d$ (which by Bertrand's postulate is $<2d$).
-We show that there cannot be $p+2$ consecutive primes with difference $d$.
-Let us consider $p_0, p_0 + d , p_0 + 2d , \dots, p_0 + p d, p_0 + (p+1)d$ modulo $p$. We show at least one of these numbers is not prime.
-Since $p \nmid d$, we get that $p_0, p_0 + d , p_0 + 2d , \dots,p_0 + (p-1)d$ covers all congruence classes modulo $p$. Thus in particular one of these is $0$ modulo $p$. This means it is not a prime unless it is equal to $p$ itself. However this is only possible for $p_0$ or $p_0 + d$, as $p < 2d$. In this case, $p_0 + pd$ or $p_0 + (p+1)d$, respectively, is divisible by $p$ too and thus not prime.
-Thus we have shown that the difference $d$ can occur at most $p+1$ times successively, and thus if the sequence of differences is non-decreasing it can occur at most $p+1$ times in total. Recall that $p+1 \le 2d$. So, the difference $d$ occurs at most $2d$ times.
-This implies that the size of the $1+ \sum_{d=1}^D 2d$-th prime is at least
-$2+ \sum_{d=1}^D (2d)d$.
-Now the former sum is $1 + D(D+1)$ and the latter sum is $2 + \frac{D(D+1)(2D+1)}{3}$. This latter expression is at least of size $D^3/2$ (leaving very small $D$ aside).
-Yet by the prime number theorem we know that there are about $D^3/2/\log (D^3/2)$ primes below $D^3/2$. For sufficiently large $D$ this is much larger than $1 + D(D+1)$, and thus yields a contradiction to the $1 + D(D+1)$-th prime being greater than $D^3/2$.<|endoftext|>
-TITLE: Linear independent set of functionals makes certain map surjective
-QUESTION [5 upvotes]: Let $V$ be an $n$-dimensional vector space over a field $K$ and $\{\lambda_{1},\ldots, \lambda_{n}\}$ be a linearly independent set of functionals.
-Show that the linear map
-$$\Lambda:V\to K^n$$
-$$v\mapsto (\lambda_1(v),\ldots, \lambda_n(v))$$
-is surjective.
-This is a detail in a proof of a proposition related to dual bases which I simply can't solve.
-If we let $\{\lambda_1,\ldots, \lambda_n\}$ be linearly independent then we have that if we remove one functional from that set,
-the span of the rest does not contain the removed element; or equivalently, if
-$a_1\lambda_1 + \cdots + a_n\lambda_n = 0$, where $a_j \in K$ and $0$ is the zero functional taking all elements of $V$ to $0\in K$, then $a_1 = \cdots = a_n = 0$.
-If $\Lambda$ is surjective then the dimension of the image of $\Lambda$ must be equal to $n$, which implies that the kernel of the map is trivial,
-so my failed strategy was to show that the kernel is trivial, from which I would conclude that the map must be surjective.
-If $v\in \ker(\Lambda)$ then $av \in \ker(\Lambda)$ for all $a \in K$, since the kernel is a subspace (closure).
-$$\Lambda(av) = (0,\ldots, 0) \implies a(\lambda_1(v),\ldots, \lambda_n(v)) = (0,\ldots, 0)$$
-So $a\lambda_1(v) + \cdots + a\lambda_n(v) = 0$. But I don't get anywhere from here and realize that I've probably made some mistake or am thinking about the problem in a stupid way.
-Any hints would be appreciated.
-
-REPLY [3 votes]: Suppose $(\lambda_1(v),\ldots, \lambda_n(v)) = 0\in K^n$. Then $\lambda_1(v)=\cdots=\lambda_n(v)=0$. Therefore every linear combination $a_1\lambda_1(v)+\cdots+a_n\lambda_n(v)$ is $0\in K$. If $v$ is in the kernel of every linear combination of $\lambda_1,\ldots,\lambda_n$ then $v$ is in the kernel of every linear functional on $V$, since $\{\lambda_1,\ldots,\lambda_n\}$ spans the whole space of linear functionals on $V$. (Here I'm assuming you've seen it proved somewhere that the dimension of the space of linear functionals on $V$ is the same as the dimension of $V$.) The only vector in $V$ that is in the kernel of every linear functional on $V$ is $0$. (In other words if $0\ne v\in V$ then some linear functional maps $v$ to some nonzero scalar.)
-Therefore, if $(\lambda_1(v),\ldots, \lambda_n(v)) = 0\in K^n$, then $v=0$.
-So the kernel of $v\mapsto(\lambda_1(v),\ldots, \lambda_n(v))$ is trivial.<|endoftext|>
-TITLE: Is it possible to write the Hadamard product of two matrices in tensor notation?
-QUESTION [5 upvotes]: Say I have two $4 \times 4$ matrices $(A^{\alpha \beta})$ and $(B^{\mu\nu})$ and want to compute the Hadamard (entry-wise) product. Is there an elegant way of writing this down in the common component, i.e. tensor, notation? Would it be something like $A^{\alpha \beta} B^{\alpha \beta}$ or is that not sufficient? Would this lead to conflicts with Einstein's summation convention?
-
-REPLY [4 votes]: If you aren't using the summation convention then $C^{\alpha\beta}=A^{\alpha\beta}B^{\alpha\beta}$ is fine.
-If you are using the summation convention then $A^{\alpha\beta}B^{\alpha\beta}$ means
-$$\sum_{\alpha\beta}A^{\alpha\beta}B^{\alpha\beta}$$
-which is a scalar rather than a matrix. In this case the thing to do is to define a new tensor $\delta^\alpha_{\;\beta\gamma}$ such that
-$$\delta^\alpha_{\;\beta\gamma}=\begin{cases}
-1&\text{if}\;\alpha=\beta=\gamma\\
-0&\text{otherwise}\\
-\end{cases}$$
-in your basis. Then define $C^{\alpha\beta}=\delta^\alpha_{\;\gamma\delta}\delta^\beta_{\;\eta\phi}A^{\gamma\eta}B^{\delta\phi}$.<|endoftext|>
-TITLE: Why is "$\pi^2= g $" where $g$ is the gravitational constant?
-QUESTION [9 upvotes]: Some months ago a professor of mine showed us a 'proof' of why $g\approx 9.8 ~\text{m}/\text{s}^2$ (the gravitational acceleration at the surface of the Earth) is 'equal' to $\pi^2\approx9.86\dots$, using a differential equation that I think is used to model the movement of a pendulum or something like that.
-Does anyone know the DE I'm talking about? Or, has anyone heard such a story?
-
-REPLY [18 votes]: Maybe this helps: link
-Looks like some time ago the second was defined by $1/2$ of the oscillation time of a $1$ meter long pendulum.
-The oscillation time of a pendulum is given by $T = 2\pi\sqrt{\frac{L}{g}}$. With $T = 2$ and $L = 1$ this gives $g = \pi^2$.<|endoftext|>
-TITLE: Is one-point compactification of a space metrizable
-QUESTION [8 upvotes]: Let $X$ be a locally compact Hausdorff space. Let $Y$ be the one-point compactification of $X$. Two questions are:
-
-Is it true that if $X$ has a countable basis then $Y$ is metrizable?
-Is it true that if $Y$ is metrizable then $X$ has a countable basis?
-
-My attempt: We know that every compact space which is metrizable has a countable basis. Thus in (2) we have $Y$ is $2^{nd}$ countable, and a subspace of a $2^{nd}$ countable space being $2^{nd}$ countable, so $X$ is $2^{nd}$ countable.
-In (1) I could only figure out that $X$ is regular since it is a locally compact Hausdorff space. Also $X$ has a countable basis, so by the Urysohn Metrization Theorem $X$ is metrizable.
-But how can this help me conclude whether $Y$ is metrizable/not?
-Any help will be appreciated.
-
-REPLY [3 votes]: The argument for 2 is correct.
-For 1, you can show that $Y$ has a countable base as well: as $X$ is locally compact and second countable, it has a countable open base $\mathcal{B}$ such that $\overline{O}$ is compact for all $O \in \mathcal{B}$.
-Then the point at infinity $\infty$ has a local base of the form $\{\{\infty\} \cup (X \setminus C): C = \cup_{i=1}^n \overline{O_i},\ O_i \in \mathcal{B}\}$, which is countable (little argument required). Show it is a local base for $\infty$ (which uses that $X$ is locally compact), and combine it with $\mathcal{B}$ to form a countable base for the whole compact Hausdorff $Y$, which is then metrisable by Urysohn again.<|endoftext|>
-TITLE: ''Linear'' transformations between vector spaces over different fields .
-QUESTION [7 upvotes]: Let $\mathbf{V}(\mathbb{K}_1,V),$ and $\mathbf{W}(\mathbb{K}_2,W)$ be two vector spaces over different fields (as an example: $\mathbb{K}_1=\mathbb{C}$ and $\mathbb{K}_2=\mathbb{R}$).
-Can we generalize the notion of a linear transformation $T:\mathbf{V}\to\mathbf{W}$ to two such spaces?
-My idea is that we have no problems for additivity:
-$$
-T(\mathbf{x}+\mathbf{y})=T(\mathbf{x})+ T (\mathbf{y})
-$$
-but we have some trouble with homogeneity since for $T(\alpha \mathbf{x})$ we cannot define $\alpha T(\mathbf{x})$.
-It seems that we must have some function $\lambda :\mathbb{K}_1 \to \mathbb{K}_2 $ so that we can write something as
-$$
-T(\alpha \mathbf{x})=\lambda(\alpha)T(\mathbf{x})
-$$
-but is there some ''natural'' definition of such a function $\lambda$ that preserves the intuitive meaning of linearity?
-
-REPLY [5 votes]: One thing you can write down is a pair consisting of a morphism $f : K_1 \to K_2$ of fields and a morphism $T : V \to W$ of abelian groups such that
-$$T(ax) = f(a) T(x).$$
-This is sometimes useful in the case that $K_1 = K_2$ is a Galois extension of some field $k$ and $f$ is an element of the Galois group; $T$ is then called a "semilinear" map. More generally, $K_1, K_2$ can be arbitrary rings and $V, W$ can be modules over those rings; this is part of a useful fibered category.<|endoftext|>
-TITLE: The complex version of the chain rule
-QUESTION [6 upvotes]: I want to prove the following equality:
-\begin{eqnarray}
-\frac{\partial}{\partial z} (g \circ f) = (\frac{\partial g}{\partial z} \frac{\partial f}{\partial z}) + (\frac{\partial g}{\partial \bar{z}} \frac{\partial \bar{f}}{\partial z})
-\end{eqnarray}
-So I decided to do the following:
-\begin{eqnarray}
-\frac{\partial}{\partial z} (g \circ f) = \frac{1}{2}[(\frac{\partial g}{\partial x} \circ f)(\frac{\partial f}{\partial x}) + \frac{1}{i}(\frac{\partial g}{\partial y} \circ f)(\frac{\partial f}{\partial y})]
-\end{eqnarray}
-but the thing is that I am doing something wrong here since I don't get any conjugate function or any derivative with respect to $\bar{z}$, so can someone help me see where I am wrong and fix it, please?
-In fact I don't see what to do next, so I appreciate your help.
-Thanks a lot in advance.
-
-Edit:
-
-What I've got so far is the following:
-$$\frac{1}{2}[(\frac{\partial g}{\partial x} \circ f + \frac{\partial g}{\partial y} \circ f)\frac{\partial f}{\partial z} ]$$
-but I'm still stuck.
-
-REPLY [3 votes]: Define $z=x+iy$, then$$\begin{cases}
- x=\frac{z+\overline{z}}{2}\\
- y=\frac{z-\overline{z}}{2i}
- \end{cases}.$$ We consider $x$ and $y$ as functions of $z$ and $\overline{z}$; thus we have
-$$\frac{\partial }{\partial z}=\frac{1}{2}\left(\frac{\partial}{\partial x}+\frac{1}{i}\frac{\partial}{\partial y} \right) ;\quad \frac{\partial }{\partial \overline{z}}=\frac{1}{2}\left(\frac{\partial}{\partial x}-\frac{1}{i}\frac{\partial}{\partial y} \right)$$which are exactly the operators we already know. We consider $h$, $f$, $g$ all as functions of $z$ and $\overline{z}$; then we have$$dh=\frac{\partial h}{\partial z}dz+\frac{\partial h}{\partial \overline{z}}d\overline{z}.$$Also we can express $g\circ f$ as
-$$g\circ f=g(f(z,\overline{z}),\overline{f}(z,\overline{z})).$$Then,$$d(g\circ f)=\left( \frac{\partial g}{\partial z}\frac{\partial f}{\partial z}+\frac{\partial g}{\partial\overline{z}}\frac{\partial \overline{f}}{\partial z}\right)dz+
- \left( \frac{\partial g}{\partial z}\frac{\partial f}{\partial \overline{z}}+\frac{\partial g}{\partial\overline{z}}\frac{\partial \overline{f}}{\partial \overline{z}}\right)d\overline{z}. $$Comparing $dh$ with $d(g\circ f)$, the complex version of the chain rule is proved.<|endoftext|>
-TITLE: Prove that if $\sum a_n$ converges, then $na_n \to 0$.
-QUESTION [5 upvotes]: Let $a_n$ be a decreasing sequence of nonnegative real numbers.
-Prove that if $\sum a_n$ converges, then $na_n \to 0$.
-Hint: use that $n\, a_{2n} \le a_{n+1}+\cdots + a_{2n}$
-
-I couldn't prove this using the given hint, could someone give me a few tips?
-I also have two more questions:
-
-Suppose I have $2na_{2n},(2n+1)a_{2n+1}\to 0$ is that enough to say that $na_n\to 0$?
-
-Is there any easy way to show $n \, a_{2n}\le a_{n+1}+\cdots + a_{2n}$, there's probably a simple inductive proof I couldn't get.
-
-REPLY [8 votes]: Let $(R_N)_N$ be the sequence of remainders of your series, namely
-$$\forall N\in\mathbb{N},\ R_N=\sum_{n=N+1}^{+\infty}a_n.$$
-Since your series converges, the sequence $(R_N)_N$ is well defined and
-$$\lim_{N\to+\infty}R_N=0.$$
-Now, since your $a_n$'s are non-negative and the sequence is non-increasing,
-$$\forall n\in\mathbb{N},\ na_{2n}\leq a_{n+1}+\cdots+a_{2n}\leq R_n.$$
-By the Squeeze Theorem,
-$$\lim_{n\to+\infty}na_{2n}=0.$$
-For the odd subsequence: write, for $n\in\mathbb{N}$,
-$$(2n+1)a_{2n+1}=2na_{2n+1}+a_{2n+1}\leq2na_{2n}+a_{2n+1}$$
-and conclude (by the Squeeze Theorem again) that
-$$\lim_{n\to+\infty}(2n+1)a_{2n+1}=0.$$
-Finally, you have a sequence $(na_n)_{n\in\mathbb{N}}$ such that the odd and even subsequences have a nil limit: you can conclude that the sequence $(na_n)_{n\in\mathbb{N}}$ has a nil limit.
-
-Regarding your questions:
-
-Suppose I have $2na_{2n}\to0$ and $(2n+1)a_{2n+1}\to0$; is that enough to say that $na_n\to0$?
-
-Yes: a sequence has a limit $\ell$ if and only if its odd and even subsequences have the same limit, equal to $\ell$. Apply this result to the sequence $(na_n)_n$.
-
-Is there any easy way to show $na_{2n}\leq a_{n+1}+\cdots+a_{2n}$, there's probably a simple inductive proof I couldn't get.
-
-You don't need induction here.
Since the sequence $(a_n)_n$ is non-increasing,
-$$\forall n\in\mathbb{N}^*,\ a_{n+1}\geq a_{n+2}\geq\cdots\geq a_{2n},$$
-hence
-$$a_{n+1}+\cdots+a_{2n}=\sum_{k=n+1}^{2n}a_k\geq\sum_{k=n+1}^{2n}a_{2n}=na_{2n}.$$<|endoftext|>
-TITLE: Simple random walk on $\mathbb Z^d$ and its generator
-QUESTION [8 upvotes]: I'm still trying to figure out definitions and properties of random walks on $\mathbb Z^d$. My goal is to work up to understanding some large deviation principles for the local times of such random walks, but I'm having quite some trouble with the basics.
-Let $(X_t)_{t\geq0}$ be a simple random walk on $\mathbb Z^d$ in continuous time. So the process starts in some point $x \in \mathbb Z^d$ at time $0$ and after a waiting time (exponentially distributed with parameter $1$) it jumps to each of its $2d$ neighbours with equal probability. $\mathbb P_x$ and $\mathbb E_x$ denote probability and expectation assuming the random walk starts in $X_0=x\in \mathbb Z^d$ at time $t=0$.
-Next, the generator of a random walk is introduced as an operator on the space $\mathbb R^{\mathbb Z^d}$ of functions from $\mathbb Z^d$ to $\mathbb R$:
-$$\Delta f(x) = \sum_{y:\ |x-y|=1} \left[f(y)-f(x)\right]$$
-for $x \in \mathbb Z^d$ and $f \in \mathbb R^{\mathbb Z^d}$.
-Not to mention that I don't understand the meaning and significance of this operator, my main problem right now is that I don't understand why this equality holds:
-$$\Delta f(x) = \left. \frac{\partial}{\partial t} \right|_{t=0} \mathbb E_x \left[f(X_t) \right]$$
-for all $x \in \mathbb Z^d$ and all bounded functions $f: \mathbb Z^d \rightarrow \mathbb R$.
-I'm stuck with the integral
-$$\mathbb E_x \left[f(X_t) \right]
-= \int f(X_t)\ \mathrm d\mathbb P_x
-= \int_{\mathbb Z^d} f\ \mathrm d\mathbb P_x \circ X_t^{-1}.$$
-Basically this is just an integral over a discrete space, i.e. a sum, and I should be able to evaluate this since the distribution $\mathbb P_x \circ X_t^{-1}$ of $X_t$ is known, but I'm having trouble doing the calculation.
-Can someone drop me a hint on how to start?
-
-REPLY [8 votes]: Denote by $(\tau_j)_{j \in \mathbb{N}}$ the sequence of independent exponentially distributed (with parameter $1$) waiting times, and set $S_j := \tau_1 + \cdots + \tau_j$ for the jump times. If we set
-$$N_t := \sum_{j=1}^{\infty} 1_{\{S_j \leq t\}}$$
-then $N_t$ describes the number of jumps of $(X_t)_{t \geq 0}$ up to time $t$. It is well-known that $(N_t)_{t \geq 0}$ is a Poisson process (with intensity $1$); in particular we have
-$$\mathbb{P}^x(N_t=0)=e^{-t} \qquad \mathbb{P}^x(N_t = 1) = t e^{-t} \qquad \mathbb{P}^x(N_t \geq 2)=1 - (1 + t) e^{-t}. \tag{1}$$
-Now fix $x \in \mathbb{Z}^d$ and denote by $Z$ a random variable (independent of $(\tau_j)_{j \in \mathbb{N}}$) such that
-$$\mathbb{P}^x(Z=y) = \begin{cases} \frac{1}{2d} & \text{if } |x-y| = 1, \\ 0 & \text{otherwise.} \end{cases} \tag{2}$$
-Then $X_t$ equals in distribution (with respect to $\mathbb{P}^x$)
-$$x \cdot 1_{\{N_t=0\}} + Z 1_{\{N_t=1\}} + X_t 1_{\{N_t \geq 2\}}$$
-(that's exactly how the simple random walk is defined!). Consequently, we get
-$$\begin{align*} \mathbb{E}^x f(X_t) &= f(x) \mathbb{P}^x(N_t = 0) + \mathbb{E}^x(f(Z) 1_{\{N_t=1\}}) + \mathbb{E}^x(f(X_t) 1_{\{N_t \geq 2\}}) \\ &= f(x) \mathbb{P}^x(N_t = 0) + \mathbb{E}^x(f(Z)) \mathbb{P}^x(N_t=1) + \mathbb{E}^x(f(X_t) 1_{\{N_t \geq 2\}}) \tag{3} \end{align*}$$
-for all bounded measurable functions $f$.
Hence,
-$$\begin{align*} &\quad \frac{d}{dt} \mathbb{E}^x f(X_t) \bigg|_{t=0} \\ &= \lim_{t \to 0} \frac{\mathbb{E}^xf(X_t)-f(x)}{t} \\ &\stackrel{(3)}{=} \lim_{t \to 0} \frac{1}{t} \left[ (\mathbb{P}^x(N_t=0)-1) f(x) + \mathbb{E}^x(f(Z)) \mathbb{P}^x(N_t=1) + \mathbb{E}^x(f(X_t) 1_{\{N_t \geq 2\}}) \right] \tag{4} \end{align*}$$
-We consider the three terms on the right-hand side separately. By $(1)$, we have
-$$\lim_{t \to 0} \frac{1}{t} (\mathbb{P}^x(N_t=0)-1) f(x) = - f(x).$$
-On the other hand, it follows from $(1)$ and $(2)$ that
-$$\mathbb{E}^xf(Z) = \frac{1}{2d} \sum_{|y-x| =1} f(y)$$
-and
-$$\lim_{t \to 0} \frac{1}{t} \mathbb{P}^x(N_t=1) = 1.$$
-Finally, for the last term we note that
-$$\frac{1}{t} \left|\mathbb{E}^x \left(f(X_t) 1_{\{N_t \geq 2\}}\right)\right| \leq \|f\|_{\infty} \frac{1}{t} \mathbb{P}^x(N_t \geq 2) \xrightarrow[t \to 0]{(1)} 0.$$
-Plugging this into $(4)$, we conclude
-$$ \frac{d}{dt} \mathbb{E}^x f(X_t) \bigg|_{t=0} = -f(x) + \frac{1}{2d} \sum_{|y-x|=1} f(y) = \frac{1}{2d} \sum_{|y-x|=1} (f(y)-f(x)).$$
-Regarding relevance and importance of the generator see e.g. this question.<|endoftext|>
-TITLE: Use Gröbner bases to count the $3$-edge colorings of planar cubic graphs...
-QUESTION [5 upvotes]: I found a nice introduction on how to use Gröbner bases to construct the colorings of a finite graph.
-Now my graphs $G=(V,E)$ are the line graphs of planar cubic graphs. The corresponding edge-adjacency matrices can be constructed, as shown here (in a crude way, I admit...).
-The existence of $3$-colorings on the edges of $G$ is guaranteed by planarity (On surfaces with higher genus only bipartite cubic graphs have chromatic index of $3$.)
-Now let there be a field $F = \mathbb{Z}/3\mathbb{Z}$. Let's define two types of polynomials on $F$:
-
-$f(x) = x(x-1)(x-2)= x^3-x=0$, which asks for one of three colors at edge $x_c$ and
-$g(x,y) = y^2+yx+x^2-1=0$, which asks for colors being different for the adjacent edges $y$ and $x$.
-
-Let $I$ be the ideal $I = (x_c^3-x_c \ |\ x_c \in E) + (x_r^2+x_rx_s+x_s^2-1 \ |\ x_{r,s} \in E)$.
-Note $\mathfrak{a}+\mathfrak{b}$ is the smallest left/right ideal containing both $\mathfrak{a}$ and $\mathfrak{b}$ (or the union $\mathfrak{a}\cup\mathfrak{b}$).
-
-Moreover, every solution to this system yields a coloring and can be calculated by using the reduced Gröbner bases for the ideal $I \subseteq F[x_1, \ldots, x_{|E|}]$
-
-Is it possible to calculate the number of solutions of the system of equations using Gröbner bases and if so how to do that?
-
-REPLY [2 votes]: To count the number of solutions to a polynomial system over the algebraic closure of the field:
-1) Compute a Groebner basis with respect to any ordering
-2) For each variable $x_i$ there should exist a polynomial in the basis with leading monomial $x_i^{a_i}$. If not, you have an infinite number of solutions.
-3) You now need to count the number of monomials not divisible by a leading monomial from the Groebner basis.
-For example, let $J := < x^2 + y + z, 2xy - z, xz - 5 >$;
-We'll first compute a Groebner basis in lex order with $x > y > z$. This is $\{z^4 + 10z^3 + 250, 10y - z^2, 50x + z^3 + 10z^2\}$. The leading monomials are $\{z^4, y, x\}$ and the monomials which are not reducible are $\{1, z, z^2, z^3\}$. The system has four solutions. It works the same in any order.
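-As a quick sanity check of steps 1)-3) on this example, one can run the same computation in SymPy (a sketch, assuming SymPy is available; the Maple and Singular snippets below do the same thing):
-from sympy import groebner, symbols
-
-x, y, z = symbols('x y z')
-# Step 1: a Groebner basis in lex order with x > y > z
-G = groebner([x**2 + y + z, 2*x*y - z, x*z - 5], x, y, z, order='lex')
-# Steps 2-3: the leading monomials are x, y and z**4, so the standard
-# monomials are 1, z, z**2, z**3 and the system has four solutions.
-print(G.exprs)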
In Maple:
-with(PolynomialIdeals):
-J := < x^2 + y + z, 2*x*y - z, x*z - 5 >;
-NumberOfSolutions(J);
-
-For your systems, you will want to add the option $(characteristic=3)$ when you construct the ideal.
-Update #1
-You could also use Singular from https://www.singular.uni-kl.de. Here is the example in Singular:
-ring R = 0,(x,y,z),lp;
-ideal J = x^2 + y + z, 2*x*y - z, x*z - 5;
-J = groebner(J);
-kbase(J);
-
-For your problem, you can change the ring definition to use characteristic 3 instead of 0 and replace "lp" (lex) with "dp" (grevlex).<|endoftext|>
-TITLE: Calculate $\frac{1}{\sin(x)} +\frac{1}{\cos(x)}$ if $\sin(x)+\cos(x)=\frac{7}{5}$
-QUESTION [30 upvotes]: If
-\begin{equation}
- \sin(x) + \cos(x) = \frac{7}{5},
-\end{equation}
-then what's the value of
-\begin{equation}
- \frac{1}{\sin(x)} + \frac{1}{\cos(x)}\text{?}
-\end{equation}
-Meaning the value of $\sin(x)$, $\cos(x)$ (the denominator) without using the identities of trigonometry.
-The function $\sin x+\cos x$ could be transformed using some trigonometric identities to a single function. In fact, WolframAlpha says it is equal to $\sqrt2\sin\left(x+\frac\pi4\right)$ and there also are some posts on this site about this equality. So probably in this way we could calculate $x$ from the first equation - and once we know $\sin x$ and $\cos x$, we can calculate $\dfrac{1}{\sin x}+\dfrac{1}{\cos x}$. Is there a simpler solution (perhaps avoiding explicitly finding $x$)?
-
-REPLY [2 votes]: Initially I have
-$\sin x+\cos x={7\over5}$
-Taking squares,
-$\sin^2x+\cos^2x+2\sin x\cos x={49\over 25}$
-$1+2\sin x\cos x={49\over 25}$
-$\sin x\cos x={49\over 50}-{1\over 2}={12\over 25}$
-$$\frac{1}{\sin x}+\frac{1}{\cos x}$$
-$$\frac{\sin x+\cos x}{\sin x\cos x}$$
-$$\frac{{7\over 5}}{{12\over 25}}$$
-$${35\over 12}$$<|endoftext|>
-TITLE: Help to solve $\displaystyle \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \frac{\zeta(2s)\zeta(s-1)}{\zeta(2s-2)} \frac{x^{s}}{s} ds $
-QUESTION [6 upvotes]: I need help in evaluating the following contour integral:
-$$\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \frac{\zeta(2s)\zeta(s-1)}{\zeta(2s-2)} \frac{x^{s}}{s} ds $$
-It looks like a complicated version of Mertens function:
-$$
- \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \frac1{\zeta(s)} \frac{x^{s}}{s} \, ds = M(x)
-$$
-but I don't have the skills to solve it...
-
-REPLY [6 votes]: The Dirichlet series
-$$ \frac{\zeta(2s)\zeta(s-1)}{\zeta(2s-2)}$$
-isn't a complicated version of $\zeta(s)^{-1}$, and has arithmetic meaning. The coefficients of the Dirichlet series include primes whose powers have odd parity.
-Your contour integral is an inverse Mellin Transform, a form of Laplace/Fourier transform. This question is really asking to use Perron's Formula to understand the Dirichlet series above.
-It is relatively simple to get a poor estimate, and extremely hard (read: impossible at the moment) to get a good estimate. But let's see what we can say. We consider
-$$\frac{1}{2\pi i}\int_{(c)} \frac{\zeta(2s)\zeta(s-1)}{\zeta(2s-2)} \frac{X^{s}}{s} ds,$$
-where initially $c > 10$ is in the region of absolute convergence of the Dirichlet series. The first question one needs to ask is: where are the poles of this object?
-The numerator has poles at $s = 2$ and $s = \frac{1}{2}$. The denominator contributes poles at $s = 0$ (which we understand) and at $2s - 2 = \rho$, where $\rho$ is a zero of the zeta function. Rewriting, these are at $s = 1 + \frac{\rho}{2}$.
With current understanding of the zeroes of the zeta function, we only know that these additional poles have real part less than $\frac{3}{2}$. If we assumed the Riemann Hypothesis, then these poles have real part $\frac{5}{4}$. -There is no hope of moving the line of integration past the poles coming from zeroes of the zeta function, as there are simply too many and we don't have very much decay. -Let's shift the line of integration to $c = \frac{3}{2} + \delta$ for a $\delta$ to be specified later. We pick up a pole at $s = 2$ with residue -$$\operatorname{Res}_{s = 2} \frac{\zeta(2s)\zeta(s-1)}{\zeta(2s-2)} \frac{x^{s}}{s} = \frac{\zeta(4)}{\zeta(2)} \frac{X^2}{2}.$$ -As we pass no additional poles, we now ask about the convergence of the integral at $\sigma = \frac{3}{2} + \delta$. - -The $\zeta(2s)$ in the numerator converges absolutely, and doesn't contribute. -The $\zeta(s-1)$ is at real part $\frac{1}{2} + \delta$. Let $s = \sigma + it$. Then at height $t$, we can bound $\zeta(s-1)$ by approximately $\lvert t \rvert^{\frac{1}{4} - \frac{\delta}{2}}$ by using the Phragmen-Lindelof Principle (a sort of maximum modulus principle applied to vertical strips). It is worth noting that if we use the Lindelof Hypothesis, then there is no growth here. -The $\zeta(2s - 2)$ in the denominator converges absolutely, as we are deliberately staying away from its critical strip. It does not contribute. -The $\frac{1}{s}$ contributes growth that looks like $\lvert t \rvert^{-1}$. -The $X^s$ can be bounded by $X^{\frac{1}{2} + \delta}$, although the $X^{it}$ is important and we'll return to it in a moment. - -Altogether, initial estimates indicate that the shifted integral looks like -$$ X^{\frac{1}{2} + \delta} \lim_{T \to \infty} \int_{\sigma - iT}^{\sigma + iT} \lvert t \rvert^{-\frac{3}{4} - \frac{\delta}{2}} dt.$$ -This converges when $\delta > \frac{1}{2}$... which corresponds to us not moving the line of integration past $2$ at all. That's too bad. -Getting anything better requires much more work. But I can tell you what should be possible, if one were to put a lot of work in. -Assuming the Lindelof Hypothesis puts the integral right on the edge of absolute convergence. Heuristically, any amount of oscillation should make the integral converge. There is exactly one source of oscillation, which is the $X^{it}$ part that is lost when we take absolute values. It is extremely likely that this oscillation, if measured and approximated correctly, actually makes the integral converge. [In fact, it's almost certain]. -This would lead to the following estimate. Denote your Dirichlet series by $$ \frac{\zeta(2s)\zeta(s-1)}{\zeta(2s - 2)} = \sum_{n \geq 1} \frac{a(n)}{n^s}.$$ -Then this would show that -$$ \sum_{n \leq X} a(n) = \frac{\zeta(4)}{2\zeta(2)} X^2 + O(X^{3/2}).$$ -I should also mention that one can use Mellin Transforms with stronger convergence to give slightly weaker results, but which are more tenable. For instance, without doing any handwaving, oen can use the related Mellin Transform -$$\frac{1}{2\pi i}\int_{(c)} \frac{\zeta(2s)\zeta(s-1)}{\zeta(2s-2)} \frac{X^{s}}{s(s+1)} ds$$ -and the same back-of-the-envelope calculations as above to show that -$$ \sum_{n \leq X} a(n)(1 - \frac{n}{X}) = \frac{\zeta(4)}{6\zeta(2)} X^2 + O(X^{3/2}).$$ -This transform and resulting weight is sometimes called a Césaro weighted transform. 
Although I couldn't find a source to link you to, I know that these appear in Murty's Problems for Analytic Number Theory, as well as very many classical analytic number theory papers.<|endoftext|> -TITLE: What's the equation for a rectircle? (Perfect rounded-corner rectangle without stretching on only one dim) -QUESTION [11 upvotes]: The equation for a rounded square seems to be: -$x^4 + y^4 = 1$ -You can make the radii smaller by increasing (over the even integers) the exponents in the equation. -Here's a picture: -Wolfram Alpha squircle. -If you try to make a rectangle out of it by simply throwing in constant scalars: -$(x/a)^4 + (x/b)^4 = 1$ -You do get a more typical rectangular shape: -naively rectangularized squircle -However the round corners seemed to be stretched on the $x$-axis and non-stretched on the $y$-axis. In other words it doesn't look like a typical rounded rectangle composed by rounding (with a circle, non-stretched on either dim) a rectangle: -Google search: rounded rectangle. -So I'm wondering how you'd modify the squircle equation to get flat tops, flat sides, except circles on the corners - not circles stretched on one dimension. -What I've tried: -adding terms to the equation and looking at the produced image on Wolfram alpha. -Any idea how to make a perfect rounded rectangle with an equation? - -REPLY [11 votes]: A rounded rectangle of size $2a\times2b$ with rounding radius $r$ is given by -$$f(x;a,r) + f(y;b,r) = 1$$ -where -$$f(x;a,r)=\begin{cases}\left(\frac{|x|-(a-r)}r\right)^2&\text{if $|x|\ge a-r$,}\\0&\text{otherwise.}\end{cases}$$ -You want to approximate this with some function of the form $(|x|/a)^p$. Compare derivatives at $x=a$ and you get $p=2a/r$. So a "rectircle" of size $2a\times2b$ with rounding radius $r$ is given by -$$\left(\frac{|x|}a\right)^{2a/r} + \left(\frac{|y|}b\right)^{2b/r}=1.$$<|endoftext|> -TITLE: Decimals of the square root of $n$. -QUESTION [8 upvotes]: Let $a_1, \ldots, a_k$ be any sequence of digits (i.e., each $a_i$ is between 0 and 9). Prove that there exists an integer $n$ such that $\sqrt{n}$ has its first $k$ decimals after the decimal point precisely the string $a_1\ldots a_k$. A possible solution of this problem (but by no means the only solution!) uses the fact that -$$\sqrt{n+1} - \sqrt{n} < \frac1{2\sqrt{n}}\,\,\,\text{ for all }n\geq 1$$ -Can anybody please help me to approach the solution? -I am not getting what the inequality has to do with decimals of root $n$. - -REPLY [8 votes]: The hint is misleading: it shouldn't use the same symbol $n$. So let's use the hint -$$\sqrt{i+1}-\sqrt i < \frac{1}{2\sqrt i} \text{ for all }i \ge 1$$ -Choose $i$ large enough so that $\dfrac{1}{2\sqrt i} < 10^{-k}$. Then successive square roots $\sqrt i, \sqrt{i+1},\sqrt{i+2},\ldots$ differ by less than $10^{-k}$. Now choose $j$ large enough so that $\sqrt{i+j} > \sqrt i + 1$. Then the square roots from $\sqrt i$ to $\sqrt{i+j}$ will contain all possible sequences of $k$ digits after the decimal point. So one of them must equal $a_1\ldots a_k$. - -REPLY [3 votes]: To use the relation you are given, $\sqrt{n+1}-\sqrt n \lt \frac 1{2\sqrt n}$, note that the positional value of $a_k$ is 10^{-k}. If $\frac 1{2\sqrt n} \lt 10^{-k}$ , which means $n \gt 10^{2k}/4$ the step between square roots will be less than $10^{-k}$ so you will hit every string of $k$ decimals. 
You can then take any $n\gt 10^{2k}/4$ and $m=\lfloor \sqrt{n+0.a_1a_2a_3\dots (a_k+1)}\rfloor$ is the integer you seek.<|endoftext|> -TITLE: Is there a consistent arithmetically definable extension of PA that proves its own consistency? -QUESTION [5 upvotes]: Gödel's second incompleteness applies, for instance, to r.e. extensions of PA. I am wondering if it applies more generally to arithmetically definable extensions of PA. -I see that there is a complete consistent $\Sigma^0_2$ extension of PA. Said theory therefore contains either the sentence "I am consistent" or "I am inconsistent". However, just because it's consistent and complete doesn't mean it contains the sentence "I am consistent" — it could contain the sentence "I am inconsistent", and remain consistent by being such that any model is a nonstandard model in which there exists a strictly nonstandard proof of its inconsistency. - -REPLY [5 votes]: Here is the answer I had made over on MathOverflow: -Surprisingly, the answer is yes! Well, let me say that the answer -is yes for what I find to be a reasonable way to understand what -you've asked. -Specifically, what I claim is that if PA is consistent, then there -is a consistent theory $T$ in the language of arithmetic with the -following properties: - -The axioms of $T$ are definable in the language of arithmetic. -PA proves, of every particular axiom of PA, that it satisfies the defining property of $T$, and so -$T$ extends PA. -$T$ proves that the set of axioms satisfying that definition forms a consistent theory. In other words, $T$ proves that $T$ is consistent. - -In this sense, the theory $T$ is a positive instance of what you -request. -But actually, a bit more is true about the theory $T$ I have in -mind, and it may lead you to think a little about what exactly you -want. - -Actually, PA proves that $T$ is consistent. -Furthermore, the theory $T$ has exactly the same axioms as PA. - -I believe that this was observed first by Feferman, and probably -someone can provide the best reference for this. -The idea of the proof is simple. We shall simply describe the -axioms of PA in a different way, rather than enumerating them in -the usual way. Specifically, let $T$ consist of the usual axioms -of PA, added one at a time, except that we add the next axiom only -so long as the resulting theory remains consistent. -Since we assumed that PA is consistent, it follows that actually -all the axioms of PA will actually satisfy the defining property -of $T$, and so PA will be contained in $T$. Furthermore, since PA -proves of any particular finite number of axioms of PA that they -are consistent, it follows that PA proves that any particular -axiom of PA will be in $T$. -Because of how we defined it, however, it is clear that PA and -hence also $T$ proves that $T$ is consistent, since if it weren't, -there would be a first stage where the inconsistency arises, and -then we wouldn't have added the axiom making it inconsistent. -Almost by definition, $T$ is consistent, and PA can prove that. So -$T$ proves that $T$, as defined by the definition we gave for it, -is consistent. So this theory $T$ actually proves its own consistency! -Meanwhile, let me point out that if one makes slightly stronger -requirements on what is wanted, then the question has a negative -answer, essentially by the usual proof of the second incompleteness theorem: -Theorem. 
Suppose that $T$ is a arithmetically definable -theory extending PA, such that if $\sigma$ is an axiom of $T$, -then $T$ proves that $\sigma$ is an axiom of $T$ and furthermore -PA proves these things about $T$. If $T$ is consistent, then it does not prove its own consistency. -Proof. By the Gödel fixed-point lemma, let $\psi$ be a -sentence for which PA proves $\psi\leftrightarrow\ \not\vdash_T\psi$. Thus, PA -proves that $\psi$ asserts its own non-provability in $T$. -I claim, first, that $T$ does not prove $\psi$, since if it did, -then since $T$ proves that its actual axioms are indeed axioms, it follows that $T$ would prove that that proof is indeed a proof, and so $T$ would prove that $\psi$ is provable in $T$, a statement which PA and hence $T$ proves is equivalent to $\neg\psi$, and so $T$ would also prove $\neg\psi$, contrary to -consistency. So $T$ does not prove $\psi$. And this is precisely -what $\psi$ asserts, so $\psi$ is true. -In the previous paragraph, we argued that if $T$ is consistent, -then $\psi$ is true. By formalizing that argument in -arithmetic, then since we assumed that PA proved our hypotheses on $T$, we see that PA proves that $\text{Con}(T)\to\psi$. So -if $T$ were to prove $\text{Con}(T)$, then it would prove $\psi$, -contradicting our earlier observation. So $T$ does not prove -$\text{Con}(T)$. -QED<|endoftext|> -TITLE: What is the the integral of $\sqrt{x^a + b}$? -QUESTION [5 upvotes]: How do you evaluate $\displaystyle\int\sqrt{x^a + b}\,\,\text{dx}$, where $a \neq 0$ and $a \neq 1$? -For example, how do you evaluate $\displaystyle\int\sqrt{x^2 + 1}\,\text{dx}$? If we let $u=x^2+1$, then $du=2x\,\text{dx}$. We cannot do this because there is no $2x$ in the original function. Of course you cannot let $u=\sqrt{x^2 + 1}$, because $du=\displaystyle\frac{x}{\sqrt{x^2+1}}\,\text{dx}$. There is no $\displaystyle\frac{x}{\sqrt{x^2+1}}$ in the original function. So how do we solve this? -How about $\displaystyle\int\sqrt{x^3 + 1}\,\text{dx}$? - -REPLY [3 votes]: Kim Peek's "funny" hypergeometric solution is really the series solution near $x=0$. We have, for $|x^a/b| < 1$, -$$ \sqrt{x^a + b} = \sqrt{b} \sqrt{1 + x^a/b} = -\sqrt{b} \sum_{k=0}^\infty {1/2 \choose k} (x^a/b)^k$$ -so integrating term-by-term -$$ \int \sqrt{x^a + b}\; dx = \sum_{k=0}^\infty {1/2 \choose k} \dfrac{x^{ak+1}}{(ak+1) \; b^{k-1/2}}$$ -where $${1/2 \choose k} = \dfrac{\Gamma(3/2)}{\Gamma(k+1)\; \Gamma(3/2-k)} -= \dfrac{(2k)!}{(-4)^k (k!)^2 (1-2k)}$$ -If you take the definition of the hypergeometric function as a power series, you get exactly this series. -Alternatively, for $|x^a/b| > 1$ you get a different series involving negative powers of $x$: -$$\sqrt{x^a + b} = x^{a/2} \sqrt{1 + b/x^a} = \sum_{k=0}^\infty {1/2 \choose k} b^k x^{(1/2 - k)a}$$ -so that -$$ \int \sqrt{x^a + b}\; dx = \sum_{k=0}^\infty {1/2 \choose k} \dfrac{b^k x^{(1/2-k)a+1}}{(1/2-k)a + 1}$$ -which also has a hypergeometric representation.<|endoftext|> -TITLE: Solving $e^{\sin(z)}=1$ in the complex plane -QUESTION [7 upvotes]: I am trying to solve $e^{\sin(z)}=1$ in the complex plane. -I know that this means that $\sin(z)=2k\pi i$ for some integer $k$. This is equivalent to saying that -$$\frac{e^{iz}- e^{-iz}}{2i}=2k \pi i,$$ -which means that -$$e^{2iz}+4k\pi e^{iz}-1=0.$$ -If we let $x=e^{iz}$, then it is a quadratic equation, but my discriminant depends on $k$, so I do not now how to simplify it. Is there an easier way to solve this? 
- -REPLY [3 votes]: To continue with your line of reasoning (which so far is correct), you need to solve $e^{2iz} + 4 k \pi e^{iz} - 1 = 0$. With $x = e^{iz}$, this becomes the quadratic equation $x^2 + 4kx - 1 = 0$. The discriminant is $\Delta = (4k\pi)^2 + 4 = 4 (4k^2\pi^2 + 1)$, which is always positive ($k$ is real). The two solutions are therefore $x = -2k\pi + \sqrt{4k^2\pi^2+1}$ and $x = -2k - \sqrt{4k^2\pi^2 + 1}$. - -In the first case we want to solve $e^{iz} = -2k\pi + \sqrt{4k^2\pi^2+1}$. Since $4k^2\pi^2+1 = (-2k\pi)^2 + 1> (-2k\pi)^2$, this is a positive real numbers, and so the first batch of solutions is -$$z = i\log(-2k\pi + \sqrt{4k^2+1}) + 2 i \pi l, \text{ for some integers } k, l.$$ -In the second case, the equation to solve is $e^{iz} = -2k\pi - \sqrt{4k^2\pi^2+1}$. This is a negative real number, and thus the second batch of solutions is: -$$z = i\pi + i\log(2k\pi+\sqrt{4k^2\pi^2+1}) + 2 i \pi l, \text{ for some integers } k, l.$$ - -And this is the complete set of solutions. - -PS: You can express that a bit more concisely by noticing that $\log(2k\pi + \sqrt{4k^2\pi^2+1}) = \operatorname{argsinh}(2k\pi)$, and so the set of solutions becomes -$$e^{\sin z} = 1 \iff z \in \{ i\operatorname{argsinh}(-2k\pi) + 2 i\pi l \mid k,l \in \mathbb{Z} \} \cup \{ i\pi + i\operatorname{argsinh}(2k\pi) + 2 i \pi l \mid k,l \in \mathbb{Z} \}.$$<|endoftext|> -TITLE: Prove that 10101...10101 is NOT a prime. -QUESTION [23 upvotes]: So basically we have a number $10101...10101$ that contains $2016$ zeros and can be written as$ \sum _{ k=0 }^{ 2016 }{ 100^{ k } }$ . I want to prove that this number is not a prime without using anything besides a piece of paper and a pen. I'm stuck on this for quite a few days now. - -REPLY [5 votes]: In fact, you can generalize this to any base $a\in\mathbb Z_{\ge 2}$. -If $a,k\in\mathbb Z_{\ge 2}$, then ($_a$ denotes 'base $a$'): -$$\underbrace{10101\cdots 101_a}_{k\text{ zeros}}=\sum_{i=0}^k a^{2i}=\frac{a^{2(k+1)}-1}{a^2-1}=\frac{\left(a^{k+1}+1\right)\left(a^{k+1}-1\right)}{a^2-1},$$ -$a^{k+1}+1>a^{k+1}-1>a^2-1$, therefore $\underbrace{10101\cdots 101_a}_{k\text{ zeros}}$ is composite. -Therefore, $10101_a, 1010101_a,\ldots$ are all composite (for any base $a\in\mathbb Z_{\ge 2}$).<|endoftext|> -TITLE: Is there a series to show $22\pi^4>2143\,$? -QUESTION [10 upvotes]: This extends this post. - -I. For $\pi^3$: - -$$\pi^6-31^2 =\sum_{k=0}^\infty\left(-\frac{63}{(2k+2)^6}+\frac{31^2}{(2k+3)^6}\right) =\sum_{k=0}^\infty P_1(k)\tag1$$ -As pointed out by J. Lafont, when $P_1(k)$ is expanded out, its coefficients are all positive. Thus so is the $\text{LHS}$, implying $\pi^3>31$. - -II. For $\pi^4$: - -The convergents of $\pi^4$ are, -$$97,\, \frac{195}{2},\, \frac{487}{5},\, \frac{1656}{17},\, \frac{2143}{22},\dots$$ -The last one, being the particularly close approximation $22\pi^4 \approx 2143.0000027$, was mentioned by Ramanujan. (See also this post.) 
Using, -$$\frac{\pi^8}{9450}=\sum_{k=0}^\infty \frac{1}{(k+1)^8}$$ -$$\frac{17\pi^8}{161280}=\sum_{k=0}^\infty \frac{1}{(2k+1)^8}$$ -and the same method to find $(1)$, we get, -$$\pi^8-\Big(\frac{487}{5}\Big)^2 =\sum_{k=0}^\infty\left(\frac{381}{5(2k+2)^8}+\frac{r_1^2}{(2k+3)^8}\right)=\sum_{k=0}^\infty P_2(k)\tag2$$ -$$\pi^8-\Big(\frac{2143}{22}\Big)^2 =\sum_{k=0}^\infty\left(-\frac{181695}{11^2(2k+2)^8}+\frac{r_2^2}{(2k+3)^8}\right)=\sum_{k=0}^\infty Q_1(k)\tag3$$ -$$\pi^8-\Big(\frac{2143}{22}\Big)^2 =\sum_{k=0}^\infty\left(\frac{r_2^2}{(k+2)^8}-\frac{70208}{1815(2k+1)^8} \right)=\sum_{k=0}^\infty Q_2(k)\tag4$$ -where $r_1 =\frac{487}{5},\,$ $r_2 =\frac{2143}{22}$. The coefficients of $P_2(k)$ are all positive, so $5\pi^4>487$. -However, when the $Q_i(k)$ are expanded out, the constant term for both is negative, so we cannot make an analogous conclusion. (In fact, it takes several terms before the sum turns positive.) - -Q: Can one find a similar series for $\pi^8-\Big(\frac{2143}{22}\Big)^2 = \sum_{k=0}^\infty R(k)$ such that all coefficients are positive and immediately implying $22\pi^4>2143$? - -REPLY [3 votes]: From the accelerated series -$$\zeta(4)=\frac{\pi^4}{90}=\frac{36}{17}\sum_{n=1}^{\infty } -\frac{1}{n^{4}\dbinom{2n}{n}}$$ -(Convergence acceleration technique for $\zeta(4)$ (or for $\eta(4)$) via creative telescoping?) -we have the direct sum -$$\pi^4 -\frac{2143}{22}= \frac{5}{52898832} \sum_{n=10}^\infty \left( \frac{1998926767}{n^4\dbinom{2n}{n}}+\frac{17452241}{(n+1)^4\dbinom{2(n+1)}{n+1}}\right)$$ -The denominator in the coefficient fraction factors into small primes: -$$52898832=(2·3·7)^4·17$$ -This series may be related to these.<|endoftext|> -TITLE: Improper integral: $\int_1^\infty\frac{\sin(\sqrt{x})}{\sqrt{x}}dx $. -QUESTION [6 upvotes]: mathematica is reporting that the improper integral $\int_1^\infty\frac{\sin(\sqrt{x})}{\sqrt{x}}dx $ coverges to $2\cos(1)$. However, when I try to confirm this by actually integrating it using u-substitution, I end up with $-2\lim\limits_{n=1}^\infty\left(\cos n - \cos 1\right)$. I am thinking we cannot determine the first limit here.(oscillation). Any help would be appreciated. - -REPLY [7 votes]: OP, you are correct, $$\int_1^n\frac{\sin \sqrt{x}}{\sqrt{x}}dx=-2\cos\sqrt{n}+2\cos 1$$ -Hence the improper integral is $$\int_1^\infty\frac{\sin \sqrt{x}}{\sqrt{x}}dx=\lim_{n\to\infty}\int_1^n\frac{\sin \sqrt{x}}{\sqrt{x}}dx=\lim_{n\to\infty}-2\cos\sqrt{n}+2\cos 1$$ -And the latter limit does not exist. Your computer algebra system (mathematica) is giving an incorrect answer. Alpha gives the same incorrect answer, probably because it's got mathematica under the hood.<|endoftext|> -TITLE: How to read Spectral Theory of Graphs -QUESTION [12 upvotes]: My background is a course is - -Linear Algebra -Hoffman,Kunze -Graph Theory-Frank Harary - -I am doing a coursework in Spectral Graph Theory . -As I am going through it, I am searching for some applications in this topic. - -One application I found was showing two graphs are non-isomorphic . If the Laplacian Matrix of two graphs have different spectrum then the graphs are non-isomorphic. -Are there any other? -What is the probability that if two graphs are cospectral then they are isomorphic? -Is Algebraic Graph Theory different from Spectral Graph Theory or one is a branch of the other? -Why are no books available on Spectral Graph theory barring a few while there are plenty on other topics? -How do people study in this topic? 
- -If anybody can find a suitable answer to these questions then I would be extremely grateful. - -REPLY [10 votes]: About your reference request, presumably you know Chung's book Spectral Graph Theory. To my knowledge this is the only reference dedicated to spectral methods; however, most major books on graph theory have sections on spectral methods. There seem to be scattered notes on the internet, but I don't know about those. -Edit: the recent book 'Graphs and Matrices' by Bapat is more accessible and has exercises, so it is probably better for self study. I have not read it, but browsing through, it seems like a nice textbook. -Regarding your questions: -1) There are many applications of spectral graph theory in equidistribution theory, additive combinatorics and computer science. Many natural families of graphs can be described by spectral properties and the Laplacian (adjacency matrix) of a graph regulates the behavior of natural dynamical systems on it. For starters read on expander families of graphs (https://en.wikipedia.org/wiki/Expander_graph) and the spectral study of random graphs; also see Qiaochu Yuan's answer in the related question "Motivation for spectral graph theory". -2) That is an interesting question; unfortunately, it is completely open. If $P_n$ is the proportion of graphs on $n$ vertices determined by their spectrum, we don't even know if the limit exists as $n\to \infty$. The conjecture is that $P_n \to 1$, so almost all graphs are determined by their spectra. The fact that such a natural first question is completely open hints at the difficulty of developing a very general 'spectral graph theory' beyond the basics. -3) 'Algebraic graph theory' is even less well-defined that 'spectral'. Following the wikipedia breakdown of algebraic graph theory, the 'linear algebra' of a graph is morally its spectral theory, if you interpret energy estimates, eigenvalue distribution and so on as 'normed algebra'. Group theory is largely concerned with highly symmetric graphs and the interplay between spectral properties and symmetries gives some of the applications mentioned in (1) (namely to equidistribution problems). I don't know much about graph invariants, so I will not comment on that. -4) The real reason why so few books are dedicated to spectral graph theory is that its basics are pretty simple to set up, and beyond that one comes very quickly to the forefront of research (just remember (2)). The research on spectral graph theory usually involves an object from a different research area giving rise to a family of graphs whose spectral properties are interesting, tractable, and relevant for the problem at hand. The accompanying research areas then usually determine the specifics of how spectral theory is to be applied, rather than vice versa. For example, if you are looking at Cayley graphs, it is group theory that dominates the techniques. If you are looking at random graphs, it is probability theory, and so on. -5) Research papers and by studying spectral geometry. Really, as Qiaochu mentioned in the other thread, spectral graph theory is the spectral geometry of the finite metric space given by the word metric of the graph; you first understand the basics of spectral geometry of metric spaces and then spectral graph theory is an instance of that. 
-Edit: in an answer to a related question (ELI5: What is spectral graph theory?), EHH gave the following link you may also find useful: https://www.youtube.com/watch?v=8XJes6XFjxM<|endoftext|> -TITLE: Why is the statement "all vector space have a basis" is equivalent to the axiom of choice? -QUESTION [6 upvotes]: I'm reading a section in an abstract algebra book, where it reviews vector spaces and suddenly comments that "all vector space have a basis" is equivalent to the axiom of choice...I haven't studied axioms of choice yet and after searching on the internet, I do not see why these two statements are equivalent...Could someone briefly explain to me? Thanks! - -REPLY [17 votes]: Note: It's worth pointing out that when we say "every vector space has a basis is equivalent to AC", we mean that these statements are equivalent over ZF (= "Zermelo-Fraenkel set theory without choice"). That is, the axiom system ZF can prove "AC iff every vector space has a basis." -The equivalence is not at all obvious! One implication is easy: using the axiom of choice to prove that every vector space has a basis. The other is the hard one, and was proved by Blass; see http://www.math.lsa.umich.edu/~ablass/bases-AC.pdf, which is self-contained. -Blass' construction actually proves that "every vector space has a basis" implies the axiom of multiple choice - that from any family of nonempty sets, we may find a corresponding family of finite subsets (so, not quite a choice function); over ZF this is equivalent to AC (this takes an argument, though, and in particular uses the axiom of foundation). -Very rough summary: start with a family $X_i$ of nonempty sets; without loss of generality, they're disjoint. Now look at the field $k(X)$ of rational functions over a field $k$ in the variables from $\bigcup X_i$. There is a particular subfield $K$ of $k(X)$ which Blass defines, and views $k(X)$ as a vector space over $K$. Blass then shows that a basis for $k(X)$ over $K$ yields a multiple choice function for the family $\{X_i\}$. - -The reason I don't give a better summary is that the full argument is really not reducible to a soundbite - if you want to understand it, you should read the details. There are many statements whose equivalence with AC has a "simple" picture; this is one of my favorite equivalences which is intricate!<|endoftext|> -TITLE: Construct a game with only pure strategy nash equilibrium. -QUESTION [7 upvotes]: I'm trying to construct a normal-form game with $2$ players such that the game has exactly $4$ Nash Equilibria -From the above properties, I know the game has to be a $4 \times 4$ matrix game, and it has $4$ pure strategy Nash Equilibrium with no mixed strategy Nash Equilibrium. This means there's no corresponding probability such that the players are indifferent to choose. Could someone find an example of this kind of matrix? And briefly explain how you construct it? Thanks a lot. - -REPLY [7 votes]: This requires degeneracy, since any non-degenerate game has an odd number of equilibria. -As a warmup let's do an example of a $2 \times 2$ game with exactly two (pure) equilibria: -$$ A=B= -\left(\begin{array}{cc} - 0 & 0 \\ - 0 & 1 -\end{array}\right) -$$ -The game has exactly two pure Nash equilibria: (top, left) and (bottom, right). The reason that no mixture is possible is that as soon as player 1 puts any positive probability on bottom, the unique best response is right. Likewise, by symmetry, as soon as player 2 puts positive probability on right, the unique best response is bottom. 
The game is degenerate because against left, a pure strategy, i.e., a mixed strategy with support size $1$, there are $2$ ($2>1$) best responses. This is a game where bottom and right are weakly dominant strategies for players 1 and 2 respectively. -Now I generalize this idea to a 4x4 game: -$$ -\begin{align} -A & = -\left(\begin{array}{cccc} - 0 & 0 & 0 & 0 \\ - 0 & 1 & 0 & 0 \\ - 0 & 1 & 2 & 0 \\ - 0 & 1 & 2 & 3 -\end{array}\right)\\ -B=A^\top & = -\left(\begin{array}{cccc} - 0 & 0 & 0 & 0 \\ - 0 & 1 & 1 & 1 \\ - 0 & 0 & 2 & 2 \\ - 0 & 0 & 0 & 3 -\end{array}\right) -\end{align} -$$ -The four pure equilibria are the diagonal cells. No mixing is possible for similar reasons to the 2x2 case. To check the answer, you can use my online game solver http://banach.lse.ac.uk. -This idea clearly generalizes to a construction of symmetric $n \times n$ games with exactly $n$ symmetric pure equilibria.<|endoftext|> -TITLE: Proving that $\cos(\frac{\arctan(\frac{11}{2})}{3}) = \frac{2}{\sqrt{5}}$ -QUESTION [9 upvotes]: I am trying to solve the cubic equation $x^3-15x-4=0$ using Cardano's formula. I already know that the solutions are $x=4$, $x= \sqrt{3}-2$ and $x= -\sqrt{3}-2$ and that using the formula in this problem requires finding the cube roots of $2+11i$ and $2-11i$, which are $2+i$ and $2-i$. But when I try to use the formula on my calculator, a TI-89 Titanium, I get $2\sqrt 5 \sin \left( \frac{\arctan(\frac{2}{11})}{3}+\pi/3 \right)$ instead of $4$. For some reason, the fact that $(2+i)^3 = 2 +11i$ and $x = 4$ is a zero of $x^3-15x-4$ feels like a byproduct of something else. So I have tried for more than a month to prove that $\cos(\frac{\arctan(\frac{11}{2})}{3}) = \frac{2}{\sqrt{5}}$ without using either of these results. - -REPLY [3 votes]: Starting with -$\cos(\dfrac{\arctan(\frac{11}{2})}{3}) = \frac{2}{\sqrt{5}}$ -this also means starting with the triangle of trig ratios drawn and Pythagoras theorem: - -$\tan(\dfrac{\arctan(\frac{11}{2})}{3}) = \frac{1}{2} = t ,$ -Now use the $ \tan 3 \theta = \dfrac{3 t - t^3}{1-3 t^2} \rightarrow \dfrac{11}{2} $ triple angle formula and simplify, done!<|endoftext|> -TITLE: Understanding Eigenvalues, Eigenfunctions and Eigenstates -QUESTION [5 upvotes]: Please could somebody explain the meaning and uses of Eigenvalues, eigenfunctions and eigenstates for me. I have taken 3 years of physics and math classes at university and never fully grasped the concept/ never had a satisfactory answer. I used eigenstates a lot in Quantum mechanics yet I did not understand their significance and it still bothers me to this day. -If possible please include some basic examples or analogies. - -REPLY [2 votes]: Just to supplement @PVanchinathan's excellent answer and because the comment became too long, I'm writing this answer. -The movement of the dots represents the linear transformation on a whole. Some vectors are also shown. For instance, the red ones are all vertical/horizontal in the original representation, but when transformed, they suddenly point in another direction. Same deal with the purple ones. But the blue ones doesn't change direction under the transformation, they only change their length. 
If we represent the linear transformation in question by a matrix $\mathbb{A}$, we see that to apply it to a blue vector $v_b$ (the eigenvector) is the same as multiplying it with some number $\lambda$ (the eigenvalue), which can be written succinctly as an eigen-equation $$\mathbb{A}v_b=\lambda v_b$$ -The reason this kind of thing is so useful, for instance in QM, is first and foremost because it is easy to work with them (there are many nice theorems that allow you to do nice things when you're working in a basis of eigenvectors), but also because the (time-independent) Schrödinger-equation itself is an eigen-equation: -$$H \psi = E \psi$$ -Oh, and eigenfunction is just another name for eigenvector. Same with eigenstate and eigenvalue. -Hope that helps!<|endoftext|> -TITLE: subobject classifier for partial orders -QUESTION [7 upvotes]: Does the category of partial orders have a subobject classifier? (Edit: No, see Eric's answer.) -If not, what is a category which is "close" to the category of partial orders (e.g. it should consists of special order-theoretic constructs) and has a subobject classifier? Bonus question: Is there also such an elementary topos? Notice that the category of partial orders has all limits, colimits and it is cartesian closed. - -REPLY [3 votes]: The fact that all internal co-categories in a coherent category are necessarily co-equivalence relations [see Peter Lumsdaine's TAC article A small observation on co-categories] provides a telltale sign that the category of posets fails to be a topos. -For the inclusion functor $\textbf{Poset} \to \textbf{Cat}$ is represented by the internal co-category whose (co-?)nerve is the inclusion of the non-empty finite ordinals $\Delta \to \textbf{Poset}$, which is evidently not a co-equivalence relation (since not all posets are equivalence relations).<|endoftext|> -TITLE: Is there a nontrivial fiber or principal bundle over $S^3$? -QUESTION [9 upvotes]: Is there a nontrivial fiber or principal bundle over $S^3$?I know that, by a paper of Steenrod,see the link below, every sphere bundle on 3- sphere is trivial but what about arbitrary fiber bundle? -Triviality/non-triviality of line/circle bundle over $S^3$ - -REPLY [16 votes]: Isomorphism classes of principal $G$-bundles over $S^3$ are classified by $\pi_3(BG)\cong \pi_2(G)$. So there is a nontrivial principal $G$-bundle over $S^3$ if and only if $G$ has non-vanishing second homotopy group. -As all Lie groups have vanishing second homotopy group $G$, any fiber bundle over $S^3$ with structure group a Lie group is trivial, in particular $S^3$ has no nontrivial vector bundles over it. -To get a nontrivial example, pick your favorite topological group $G$ with $\pi_2(G)\neq 0$ and a nontrivial element in that group. Pulling back the universal $G$-bundle gives you a principal $G$-bundle over $S^3$ which is nontrivial. -A good source of examples are diffeomorphism groups $Diff(M)$ of closed manifolds $M$, which are not finite dimensional manifolds (in particular Lie groups) in general, so may have nonvanishing second homotopy group. By the above argument $\pi_2(Diff(M))$ classifies principal $Diff(M)$-bundles and hence (via the associated bundle construction) also fiber bundles with fiber $M$ and structure group $Diff(M)$, i.e. smooth $M$-bundles over $S^3$. 
-As an example, Hatcher calculated the homotopy type of $Diff(S^1\times S^2)$: It is the one of $O(2)\times O(3)\times \Omega SO(3)$, so we arrive at $$\pi_2(O(2)\times O(3)\times \Omega SO(3))=\pi_2(\Omega SO(3))\cong\pi_3(SO(3))\cong\pi_3(S^3)\cong\mathbb Z.$$ Hence, smooth $S^1\times S^2$-bundles over $S^3$ are classified by the integers.<|endoftext|> -TITLE: Is every topological group the topological fundamental group of an space? -QUESTION [10 upvotes]: The fundamental group $\pi_{1}(X)$ of a path connected topological space $X$ is the image of $Hom(S^{1},X)$. So the fundamental group can be topologized with quotient topology where $Hom(S^{1},X)$, with based point consideration, is equipped to compact open topology. See D.K. Biss Topology and its Applications 124 (2002) 355-371. -Is it true that every topological group is the topological fundamental group of a path connected topological space? - -REPLY [4 votes]: The fundamental group equipped with the natural quotient topology is not always a topological group. In fact, there are many errors in Biss' paper that you reference. Enough so that it has been retracted from Topology and its Applications. This object you describe is still useful but now is usually called the quasitopological fundamental group and denoted $\pi^{qtop}(X,x)$. It is not true that every quasitopological group is isomorphic to some fundamental group $\pi^{qtop}(X,x)$. For more on it, see -J. Brazas, P. Fabel, On fundamental groups with the quotient topology, J. Homotopy and Related Structures 10 (2015) 71-91. arXiv -There is a natural topology you can put on $\pi_1(X,x)$ which makes it a topological group and for which many classical algebraic topology theorems have topological group analogues. This alternative topology is characterized as the finest group topology such that the function $Hom((S^1,b),(X,x))\to \pi_1(X,x)$ identifying homotopy classes is continuous (but may not be a quotient map). The resulting topological group is usually denoted $\pi^{\tau}_{1}(X,x)$. It is true that for every topological group $G$, there is some path-connected space $X$ such that $\pi^{\tau}_{1}(X,x)\cong G$. -Along with generalized covering space theory, $\pi_{1}^{\tau}$ helped to solve some older questions on open subgroups of topological groups. -See: -J. Brazas, The fundamental group as a topological group, Topology Appl. 160 (2013) 170-188 arXiv<|endoftext|> -TITLE: Find the limit of this sequence -QUESTION [5 upvotes]: Suppose $$ -f_n(x)=\sum_{k=1}^n \frac{\cos(kx)}{k}, -$$ -and let $a_n=\min_{x \in [0,\pi/2]} f_n(x)$, find - $\lim_{n \to\infty} a_n$. -I wrote a program and found that the -$\arg\min_{x \in [0,\pi/2]} f_n(x)$ is always close to $\pi/2$, -and the limit of $\{a_n\}$ seems to be $-\ln(2)/2$. -Can anyone give a proof? - -REPLY [2 votes]: Using the Taylor expansion -$$-\log(1-t) = \sum_{k=1}^\infty\frac{t^k}k,$$ -$$ --\log(1-e^{ix}) = \sum_{k=1}^\infty\frac{(e^{ix})^k}k = \sum_{k=1}^\infty\frac{e^{ikx}}k = -\sum_{k=1}^\infty\frac{\cos(kx)}k + i \sum_{k=1}^\infty\frac{\sin(kx)}k -$$ -and your sum is the $n$-th partial sum of the real part. 
-But -$$f(x) = \Re(-\log(1-e^{ix})) = -\frac12\log 2 - \frac12\log(1-\cos x)$$ -Can be proved (Dirichlet test) that $f_n\to f$ uniformly in $[\epsilon,\pi/2], \epsilon>0$, and using max {$f_n(x):x\in[a,b]$}$\to$ max{$f(x):x\in[a,b]$}, -$$\min f_n\to\min f.$$<|endoftext|> -TITLE: Laurent Series expansion without geometric series -QUESTION [5 upvotes]: There are several functions in complex analysis which I have not been able to get the Laurent expansion for, both of which are very different from the examples I see online and in the (4) textbooks I have checked out...: -I need to find the Laurent expansion about each singularity of the following function: -$$f(z) = {1 \over z^6+1}$$ -I had no issue with finding the singular points, but I don't see how to create a Laurent expansion from there---all of the online examples show something like: -$$f(x) = {1 \over z(z-1)}$$ -In which it is much more clear how to use a geometric series to find the Laurent series. -I also have the same issue for the following function: -$$f(z) = {1 \over z^4+2z^2+1}$$ -I can find the singularities, but where do I go from there? The examples found online are tough to map onto these problems. - -REPLY [2 votes]: Let $\alpha=e^{(2n+1)\pi i/6}$ be one of the roots of $z^6+1$ and $\alpha w=z-\alpha$. -$$ -\begin{align} -\frac1{z^6+1} -&=\frac1{1-(1+w)^6}\tag{1}\\ -&=-\frac1w\frac1{6+15w+20w^2+15w^3+6w^4+w^5}\tag{2}\\ -&=\sum_{k=-1}^\infty b_kw^k\tag{3}\\ -&=-\frac1{6w}+\frac5{12}-\frac{35}{72}w+\frac{35}{144}w^2+\frac{119}{864}w^3+\dots\tag{4} -\end{align} -$$ - -Explanation: - $(1)$: $z^6+1=1+\alpha^6(1+w)^6=1-(1+w)^6$ - $(2)$: Binomial Theorem - $(3)$: label the powers of $w$ in the expansion of $(2)$ - $(4)$: multiply both sides of $(2)$ and $(3)$ by $w\!\left(6+15w+20w^2+15w^3+6w^4+w^5\right)$: - $\phantom{(3)\,}$ $\color{#C00000}{-1}=\left(6+15w+20w^2+15w^3+6w^4+w^5\right)\sum\limits_{k=-1}^\infty b_kw^{k+1}$ - $\phantom{(3)\,}$ $\phantom{-1}=\color{#C00000}{6b_{-1}}+\color{#00A000}{(15b_{-1}+6b_0)}w+\color{#00A000}{(20b_{-1}+15b_0+6b_1)}w^2$ - $\phantom{(3)\,}$ $\phantom{-1}+\color{#00A000}{(15b_{-1}+20b_0+15b_1+6b_2)}w^3+\color{#00A000}{(6b_{-1}+15b_0+20b_1+15b_2+6b_3)}w^4$ - $\phantom{(3)\,}$ $\phantom{-1}+\sum\limits_{k=4}^\infty\color{#0000F0}{(b_{k-5}+6b_{k-4}+15b_{k-3}+20b_{k-2}+15b_{k-1}+6b_k)}w^{k+1}$ - $\phantom{(3)\,}$ The red term is $-1$ and gives $b_{-1}=-\frac16$ - $\phantom{(3)\,}$ The green terms are $0$ and give the other coefficients in $(4)$. - $\phantom{(3)\,}$ The blue term in the sum is $0$ and gives the recursion in $(5)$. - -where, for $k\ge4$, -$$ -b_k=-\frac{15b_{k-1}+20b_{k-2}+15b_{k-3}+6b_{k-4}+b_{k-5}}6\tag{5} -$$ -Then substitute $w=\frac{z-\alpha}\alpha$ into $(3)$ to get -$$ -\frac1{z^6+1}=\sum_{k=-1}^\infty\frac{b_k}{\alpha^k}(z-\alpha)^k\tag{6} -$$<|endoftext|> -TITLE: A curious triangle inequality -QUESTION [15 upvotes]: Let $ABC$ be a triangle. Pick a point $P$ inside the triangle. How would you show that -\begin{equation} -|PA|+|PB|+|PC|+\min\{|PA|,|PB|,|PC|\}\leq |AB|+|BC|+|CA|. -\end{equation} - -REPLY [4 votes]: We can prove an even stronger result. -Step 1 -Assume $P$ is closest to $A$, so we want to show -$$ 2|PA| + |PB| + |PC| \le |AB| + |BC| + |CA|. $$ -For any given $P$, let $A$ move on the circle with center $P$ and radius $|PA|$, This way, only the right hand side of the inequality changes, and we can minimize it subject to the constraint that $P$ remains inside $\triangle ABC$. 
We can see that there are two local minima at the extremes of $A$'s range, one when $P$ lies on $\overline{AB}$ and one when $P$ lies on $\overline{AC}$. -We will assume without loss of generality that the global minimum is when $P$ lies on $\overline{AB}$. -In this case (i.e. with $P$ on $\overline{AB}$), we see that $|AP| + |PB| = |AB|$, and we can subtract this from our desired inequality to reduce it to -$$|PA| + |PC| \le |BC| + |CA|.$$ -Step 2 -Now we try the same idea again: We continue by letting $C$ move on a circle centered at $P$, and we see that $|BC| + |CA|$ is minimized when $C$ comes down to the line $\overleftrightarrow{AB}$ on the side where $B$ is (since $B$ is at least as far from $P$ as $A$ is). -Now $|PA| + |PC| = |AC|$, which shows us that our target inequality is true. -QED -Stronger Result -At the end of step 2 above, we see that the gap between the two sides of the inequality is exactly $|BC|$, which does not correspond to the $|BC|$ of the original triangle, but does correspond to $||PC|-|PB||$, so in fact we can strengthen the original inequality to -$$2 \min\{|PA|,|PB|,|PC|\} + 2 \max\{|PA|,|PB|,|PC|\} -\le |AB| + |BC| + |CA|.$$ -Since equality only occurs when both the first step (moving $A$) and the second step (moving $B$) do not change the right hand side, and the second step results in a degenerate triangle (reducing the right hand side if the triangle was not initially degenerate), we see that for non-degenerate triangles, the inequality is strict (even if $P$ is on an edge or vertex of $\triangle ABC$): -$$2 \min\{|PA|,|PB|,|PC|\} + 2 \max\{|PA|,|PB|,|PC|\} -\lt |AB| + |BC| + |CA|.$$<|endoftext|> -TITLE: Is there an identity that says $|\sqrt {a^2+x^2} - \sqrt {a^2+y^2}| \leq |\sqrt {x^2} - \sqrt {y^2}|$? -QUESTION [6 upvotes]: Is there an identity that says $|\sqrt {a^2+x^2} - \sqrt {a^2+y^2}| \leq |\sqrt {x^2} - \sqrt {y^2}|$? -Because of the nature of the square root function, its derivative monotonically decreases. so differences "further up" the function would be less than those lower down. - -REPLY [6 votes]: Yes. -$$ -\left|\sqrt {a^2+x^2} - \sqrt {a^2+y^2}\right| =\frac{\lvert x^2-y^2\rvert}{\left|\sqrt {a^2+x^2} + \sqrt {a^2+y^2}\right|} -= |\sqrt {x^2} - \sqrt {y^2}|\cdot \frac{|\sqrt {x^2} + \sqrt {y^2}|}{\left|\sqrt {a^2+x^2} + \sqrt {a^2+y^2}\right|} -\leq |\sqrt {x^2} - \sqrt {y^2}| -$$ - -REPLY [4 votes]: This is true. You can see this by assuming $x>y$ without losing generality and then differentiating -\begin{equation} -f(a)=\sqrt{a^2+x^2}-\sqrt{a^2+y^2} -\end{equation} -with respect to $a$. Derivative is negative hence $f$ is decreasing function of $a$ and is maximized at $0$.<|endoftext|> -TITLE: Is $\pi^k$ any closer to its nearest integer than expected? -QUESTION [13 upvotes]: Particular questions such as Why is $\pi$ so close to $3$? or Why is $\pi^2$ so close to $10$? may be regarded as the first two cases of the question sequence Why is $\pi^k$ so close to its nearest integer? -For instance, we may stare in awe in front of the almost-unit -$$\frac{\pi}{31^\frac{1}{3}}=1.000067...$$ -or, in binary system, -$$\frac{\pi}{11111_{2}^\frac{1}{11_2}} \approx 1$$ -so proving that $\pi^3>31$ becomes interesting, but it would not be striking that sometimes $\pi^k$ lied close to its nearest integer if that was balanced by other unlucky times when it would be at almost half a unit, by the straightforward effect of the rounding function. 
-Under a uniform distribution assumption, the expected distance between $\pi^k$ and its nearest integer is $\frac{1}{4}$, and the title has at least the following two interpretations, as a random variable (given this made sense) and as particular outcomes of a random variable: - -In average, is $\pi^k$ any closer to its nearest integer than expected? The first three powers have differences less than $\frac{1}{4}$ in absolute value, namely -$$\begin{align} -\pi-3 -&\approx -.1416 \\ -\pi^2-10 -&\approx --0.1304 \\ -\pi^3-31 -&\approx0.0063 -\end{align}$$ -This event has probability $\left(\frac{1}{2}\right)^3=\frac{1}{8}$ assuming independence. What happens as the number of powers considered grows? Another approach, with median instead of average: Does the median of the absolute value of that difference tend to $\frac{1}{4}$? -Let $\lfloor x \rceil$ denote the rounding function. Although the answer to the first question may be false, for what values of $k$ does $\lfloor x^k \rceil ^\frac{1}{k}$ yield more bits of $\pi$ than it uses? For instance, $\lfloor x^{157} \rceil ^\frac{1}{157}$ seems to be an interesting approximation to $\pi$. (See this question) - -In either case: - -Q: Is the difference between $\pi^k$ and its nearest integer uniformly distributed in $(-\frac{1}{2},\frac{1}{2})$? - -REPLY [8 votes]: It's mentioned here that the sequence $x^n$ ($n=1,2 \cdots$) in modulo $1$ is known to be uniformly distributed for almost every $x>1$. At the same time, and perhaps surprisingly, not even a single example has been discovered - only some exceptions (and all algebraic). Furthermore, it has been proved (see same reference) that the "exceptions", in spite of having Lebesgue measure zero, are uncountable (hence they must include trascendental numbers). -I didn't find anything about the particular case $x=\pi$, neither about what happens with the distribution of the fractional parts (do they concentrate around $0$?) in the non-uniform exceptional cases.<|endoftext|> -TITLE: What's an Isomorphism? -QUESTION [10 upvotes]: I'm familiar with the definition (inverses and bijections, preserving operations) in the context of groups and vector spaces, the hoeomorphism of topological spaces, and have some feeling for the definition in category theory. -What I'm looking for is a mathematical justification: - -for statements like "....two isomorphic objects cannot be distinguished by using only the properties used to define morphisms; thus isomorphic objects may be considered the same as long as one considers only these properties and their consequences" (https://en.wikipedia.org/wiki/Isomorphism). -for the reliance on isomorphism in proofs. For example, the internal direct sum of subspaces of a vector space is isomorphic to the external direct sum of these subspaces. One can prove that the internal direct sum is associative and commutative and then call on isomorphism to say the same applies to the external direct sum. - -I would imagine somewhere in category theory there is some result along the lines that if $\phi$ is an isomorphism between two objects $O_1, O_2$ in a category, and $P$ is some logic statement about $O_1$ then $\phi(P) = P$, i.e. the logic statement about the corresponding entities in $O_2$ is true or false in accordance with the statement in $O_1$. -Maybe my imagination is running ahead of the facts, but I would appreciate some feedback on the formalisation of "...B is isomorphic to A and therefore since P is true in A ..." - -Addendum: thanks for comments and answer. 
It seems that an easily accessible answer applicable across all categories may be too much to aim for. What about answers for specific categories ? If one takes for example the category of topological spaces it appears (from what I've read) that "properties which can be defined in terms of open sets are preserved by homeomorphism". Can this statement be proved as such, or must one execute specific proofs for compactness, connectedness, convergence, etc ? - -REPLY [3 votes]: So, as you've noticed, this notion is absolutely ubiquitous. -There's a general notion of "isomorphic objects are equal (for all intents and purposes)". Particularly for categorists, notions that distinguish between isomorphic (or more generally, equivalent) objects are often referred to as "evil". In category theory, the principle you describe is called the principle of equivalence (or principle of isomorphism for a weaker notion). Most definitions of category theory use standard set theory and thus readily allow for evil definitions. As such, there is no principle of equivalence for those definitions; you can readily state properties that do not hold under equivalence of categories (or even isomorphisms of categories). This is because, in standard set theory, there is a global notion of equality which will always be able to distinguish between non-equal isomorphic objects. -If you look at the link above, you'll notice a lot of references to Homotopy Type Theory. This is a very new and very exciting development that directly addresses this issue. The most relevant part for you is the Univalence Axiom. The Univalence Axiom literally states that, in the context of homotopy type theory, equality is equivalent to equivalence. So all that treating isomorphic objects as equal is completely justified in homotopy type theory. By itself that wouldn't be that exciting, but homotopy type theory is a (fairly minor in some ways) extension of Martin Löf type theory which has been studied by type theorists and computer scientists and implemented for decades. It is the logical foundation for the Coq proof assistant. This means that 1) we have implementations of this logic already, 2) this logic is demonstrably able to formalize just about any mathematical notion, and 3) people are already doing real math in this. In other words, this incarnation of the principle of equivalence effectively encompasses all of mathematics (in practice), and homotopy type theory provides a possible "foundations" for mathematics that much more directly matches how mathematicians actually do mathematics. -There are other approaches to this besides homotopy type theory. The link above mentioned FOLDS which is a much more conservative approach, but Makkai's work is generally worth checking out. - -To step back a bit, the reason there is no theorem like you mention in category theory (as should be clear from the above) is that it isn't a theorem about category theory. It's a theorem about whatever meta-logic you are using to define category theory and those predicates $P$. There are three routes to go from here. You can just give up on such a property which is, technically, what just about everyone does for category theory. You can formulate a logic that is easy to specify and for which well-formedness is easy to check, and then prove the property about this logic. This is essentially what happens at the informal level and occasionally is formalized. 
For example, you can easily prove that the theorems in Peano arithmetic do not depend on what exactly numbers are, or that rational number arithmetic is well-defined. The problem with this route is that it is restrictive; only relatively simple properties can be stated and oftentimes even then only awkwardly. The third route, then, is to make a rich (but difficult to fully specify) logic that allows you to naturally and directly express what you want but whose well-formedness is (relatively) difficult to verify. This is the route homotopy type theory takes. (FOLDS is in between the second and third route.) This is what roughly what happens at an informal level for most mathematical work. Nominally set theory is the logic we're working in, but it is understood that polite company does not ask whether $2\in 3$ or whether $A\times B \times C$ is $(A\times B)\times C$ or $A\times(B\times C)$ or something else. There's an implicit notion of/language for "reasonable" questions to ask, and for those "reasonable" questions isomorphic objects are not distinguished.<|endoftext|> -TITLE: Coupon Collectors Problem with Packets: Clarifying Wikipedia -QUESTION [5 upvotes]: The Coupon Collector's Problem (CCP) is very useful in many applications. However, the "default" CCP is relatively simple: suppose you have an urn containing $n$ pairwise different balls. Now you want to draw a ball from the urn with replacements until you have seen each of the $n$ balls at least one. -Now you can compute the average waiting time to get the number of draws overall needed by the formula -\begin{align} -\mathbb{E}[X] = \sum_{i=1}^n \mathbb{E}[X_i] = nH_n -\end{align} -where $H_n$ is defined as the harmonic series and $\mathbb{E}$ is the expected value. Also, the random variable $X$ is defined as the random number of draws you have to make in order to get all $n$ balls at least once. $X_i$ denotes the additional number of draws one has to make in order to get from $i-1$ different balls to $i$ different balls drawn. Additionally, each ball has an equal probability of $1/n$. -Now consider an advanced CCP question: how does the formula change in case you want to draw $p\geq 1$ pairwise different balls (instead of only one as in the default CCP) per draw, called packets? -In other words: Given an urn containing $n$ balls, how many balls do I need to draw in order to get all $n$ balls when drawing always $p\geq 1$ (pairwise different) balls out of the urn? The set of balls is drawn with replacement. -(Therefore all balls of one package are different, but different packages can contain same balls.) -An answer gives this paper on top of page 20, and also this german lecture gives an answer on slide 229, 14.7b). A third -- at the same time very intuitive to get -- answer is given on the german Wikipedia, subsection "Päckchen". -Now two questions arise. - -Why do the answers in the paper and the lecture differ? If you plug in some numbers, you get different results for numbers above 1000. -How do I get from these solutions to the one given on Wikipedia? For me it seems like an approximation of the real value, since it is very fast to compute compared to the "scientific" answers, and the results is always "in the near of" the results of the other computations. - -Since I am interested in understanding the formula on Wikipedia, can anyone help understanding the equation how the formula is derived or give some insight? - -REPLY [3 votes]: The German Wikipedia formula is indeed wrong. 
-It's hard to figure out why someone comes up with a wrong solution for something. However, you could think of an experiment (different from the CCP) where the formula would give the right answer. -Say we have an urn with n balls numbered from 1 to $n$. Now we draw one ball at a time with replacement, until we got every number at least once. This is CCP with $p = 1$. If we have already seen $k$ distinct numbers, the expected value for the necessary draws to get the $(k+1)$st distinct number is $\frac{n}{n-k}$. Therefore, the expectation of the total number of necessary draws is -$$ -\sum_{k=0}^{n-1} \frac{n}{n-k}. -$$ -Now let's change the setting a little bit. We start again from scratch and draw one ball at a time, basically with replacement. But any time we get a previously unseen number, we do not replace it. However, as soon as we have seen $p$ distinct numbers, we replace all of them. Then we keep drawing balls with replacement (from all $n$ balls again); any time we get a previously unseen number, we do not replace it, until we have seen another $p$ distinct numbers; then we replace again all $p$ balls into the urn, and so on. This is a mixture of with and without replacement sampling. You could view this as a series of "with replacement" episodes, where episode $k$ lasts until you get the $k$th distinct ball, and where the number of balls in the urn during episode $k$ is $n-((k-1)\mod p)$, including $n-(k-1)$ previously unseen balls. Therefore, the expected duration for the $k$th episode is $$\frac{n-((k-1)\mod p)}{n-(k-1)},$$ and thus the total number of necessary draws is in expectation -$$\sum_{k=1}^n \frac{n-((k-1)\mod p)}{n-(k-1)} = \sum_{k=0}^{n-1} \frac{n-(k\mod p)}{n-k}.$$ -This is the Wikipedia formula you mentioned. -Note that in the CCP with $p$ coupons in each package, we draw $p$ coupons without replacement, then replace them, draw again $p$ coupons without replacement and so on, so in some sense this as also a series of draws, where the number of balls in the urn varies between $n$ and $n-(p-1)$. This similarity seems to have fooled the Wikipedia author. -If $p$ is small and $n$ is large, the CCP with $p > 1$ coupons may be approximated by CCP with $p=1$, and in this case the experiment decribed above is equal to CCP. Therefore the (wrong) Wikipedia formula cannot be way off in this case. But I suspect the discrepancies to be larger if $p$ is large (or $n$ is small).<|endoftext|> -TITLE: How to show the divergence of $\sum\limits_{n=1}^\infty\frac{\sin(\sqrt{n})}{\sqrt{n}}$ -QUESTION [15 upvotes]: The 10 standard tests taught in class are: -1) $n^{th}$ term test for divergence.(Not applicable: $\lim =0$). -2) Geometric Series(Not applicable). -3) Telescoping Series(Not applicable) -4) Integral Test(Not applicable: $f<0$ sometimes) -5) $p$-series(Not applicable) -6) Direct Comparison(maybe) -7) Limit Comparison(Not applicable $a_n<0$ sometimes) -8) Alternating Series Test(Not Alternating) -9) Ratio Test fails -10) Root Test fails -I did find a hint online that states we should show that for $k^2+1\leq n\leq k^2+k$ we have $\sum\limits_{n=k^2+1}^{k^2+k}\frac{\sin(\sqrt{n})}{\sqrt{n}}>\frac{1}{8}$. Is there an easier way and if not how should we go about showing this? - -REPLY [2 votes]: The hint you stated can't be true for all $k$ but it gives an idea on how to show the serie is divergent. -First remember that on $[2k\pi+\pi/4;2k\pi+3\pi/4]$ (for $k $ an integer) $\sin(x)\geq \sqrt 2 /2$. 
Now the condition $\sqrt n \in [2k\pi+\pi/4;2k\pi+3\pi/4]$ is equivalent to $n\in [4k^2\pi^2+k\pi^2+\pi^2/16;4k^2\pi^2+3k\pi^2+9\pi^2/16]$, and this last interval has a length of $2k\pi^2+\pi^2/2$, which is greater than $18k+2$ (using the fact that $\pi\geq3$). So this interval contains at least $18k$ integers.
-Thus we have
-$$\sum_{2k\pi+\pi/4\leq \sqrt n\leq2k\pi+3\pi/4}\frac{\sin(\sqrt n)}{\sqrt n}\geq \sum_{2k\pi+\pi/4\leq \sqrt n\leq2k\pi+3\pi/4}\frac{\sqrt2/2}{ {2k\pi+3\pi/4}}\geq 18k\frac{\sqrt2}{2\cdot ( {2k\pi+3\pi/4})}$$
-which is greater than some (strictly positive) constant.
-So the series must be divergent: these blocks of consecutive terms do not tend to $0$, so the Cauchy criterion for convergence fails.
-I don't think there is a (significantly) easier way to prove this result though.<|endoftext|>
-TITLE: Can Continuous Time Markov Chains be used as a reasonable voting system?
-QUESTION [11 upvotes]: I just compared a couple of example elections, as given on Wikipedia to show how Condorcet methods differ from non-Condorcet ones, to what happens if you just interpret the underlying preference graphs as Continuous Time Markov Chains (CTMC). At least in each of the example cases, while the exact ordering almost never matched, in the limit $t\to\infty$ the CTMC always gave the Condorcet winner. So I wonder: How well do CTMCs actually do compared to more usual or sophisticated election methods? What criteria would they fulfill or fail? I've only ever seen them be used to simulate voting behavior over time rather than to evaluate a single election, so there ought to be a catch.
-To be more specific, for the remainder of this question, here are some examples:
-
-5 candidates, 45 voters
-Tally:
-$$
-\begin{matrix}
-5 & A>C>B>E>D \\
-5 & A>D>E>C>B \\
-8 & B>E>D>A>C \\
-3 & C>A>B>E>D \\
-7 & C>A>E>B>D \\
-2 & C>B>A>D>E \\
-7 & D>C>E>B>A \\
-8 & E>B>A>D>C
-\end{matrix}
-$$
-In matrix form, for the Schulze method this looks like:
-$$
-\begin{bmatrix}
-\downarrow \ beats \rightarrow & A & B & C & D & E \\
-A & & 20 & 26 & 30 & 22 \\
-B & 25 & & 16 & 33 & 18 \\
-C & 19 & 29 & & 17 & 24 \\
-D & 15 & 12 & 28 & & 14 \\
-E & 23 & 27 & 21 & 31 &
-\end{bmatrix}
-$$
-And ultimately you arrive at the Schulze ranking $E>A>C>B>D$, with E being the winner.
-Now the CTMC version looks like this: Instead of leaving the diagonal of the above matrix empty, you put in the negative of the sum of the number of times the given candidate is beaten, i.e. you sum up the rest of the column and put its negative in the empty spot, such that each column sums to 0.
-$$M=
-\begin{bmatrix}
--82 & 20 & 26 & 30 & 22 \\
-25 & -88 & 16 & 33 & 18 \\
-19 & 29 & -91 & 17 & 24 \\
-15 & 12 & 28 & -111 & 14 \\
-23 & 27 & 21 & 31 & -78
-\end{bmatrix}
-$$
-And to find the corresponding CTMC ranking, I have to calculate $\lim\limits_{t\to\infty} e^{t M}$ and multiply the resulting matrix by an arbitrary positive vector $v$.
- If I want the result in terms of the number of voters voting for each candidate, the vector should sum to the number of voters. The exact vector doesn't matter though, because I'm looking for the steady state, which, up to a scale factor, will be the same for all input vectors.
-So if I do that for this example I get:
-$$\lim\limits_{t\to\infty} e^{t M} v = \begin{bmatrix} 10.151 \\ 8.989 \\ 8.977 \\ 5.983 \\ 10.900 \end{bmatrix}$$
-And my CTMC ranking ends up being $E>A>B>C>D$, which is almost the same as the Schulze ranking above, except for B and C, which end up being really close.
-For the other examples I will only show the CTMC matrix and the final rankings. That should give all the necessary information.
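-(For concreteness, this is essentially the computation I am running -- a sketch in Python with numpy/scipy; the time value $t=100$ and the uniform start vector are my own arbitrary choices, since any large $t$ and any positive $v$ give the same steady state up to scale:)
-
-    import numpy as np
-    from scipy.linalg import expm
-
-    M = np.array([
-        [-82,  20,  26,   30,  22],
-        [ 25, -88,  16,   33,  18],
-        [ 19,  29, -91,   17,  24],
-        [ 15,  12,  28, -111,  14],
-        [ 23,  27,  21,   31, -78],
-    ], dtype=float)
-
-    v = np.full(5, 45 / 5)      # any positive vector summing to the 45 voters
-    steady = expm(100 * M) @ v  # t = 100 is effectively t -> infinity here
-    print(steady)               # ~ [10.151, 8.989, 8.977, 5.983, 10.900]
-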
-
-9 voters, 4 candidates:
-$$
-\begin{bmatrix}
--14 & 5 & 5 & 3 \\
-4 & -11 & 7 & 5 \\
-4 & 2 & -16 & 5 \\
-6 & 4 & 4 & -13
-\end{bmatrix}
-$$
-Schulze ranking: Ties: $B>C>D>A$, $B>D>A>C$, $B>D>C>A$, $D>B>A>C$, $D>B>C>A$ (so either B or D wins)
-CTMC ranking: $B>D>A>C$
-
-(relative voter count in %), 4 candidates:
-$$
-\begin{bmatrix}
--1.74 & .42 & .42 & .42 \\
-.58 & -1.06 & .68 & .68 \\
-.58 & .32 & -1.27 & .83 \\
-.58 & .32 & .17 & -1.93
-\end{bmatrix}
-$$
-Ranked Pairs ranking: $B>C>D>A$
-CTMC ranking: $B>C>A>D$
-(non-Condorcet methods would have elected A)
-
-Final Example:
-(relative voter count), 3 candidates:
-$$\begin{bmatrix}
--.52 & .68 & 0 \\
-0 & -.68 & .72 \\
-.52 & 0 & -.72
-\end{bmatrix}$$
-Ranked Pairs ranking: $A>B>C$ ($C>A$ would have caused a loop, but it is the last choice to be locked in and is therefore ignored)
-CTMC ranking: $A>B>C$ (the weight for C is only .02 smaller than that for B; the vector $v$ was chosen to sum to $1$.)
-
-Sources:
-https://de.wikipedia.org/wiki/Schulze-Methode
-https://en.wikipedia.org/wiki/Schulze_method
-https://en.wikipedia.org/wiki/Ranked_pairs
-
-EDIT: OK, it looks like all those examples were mostly lucky coincidences. I just tried to apply it to another example as given in https://en.wikipedia.org/wiki/CPO-STV and it completely fails.
-In that example I'd have the transition matrix
-$$\begin{bmatrix}
--184 & 25 & 25 & 25 & 25 \\
-49 & -119 & 15 & 41 & 49 \\
-34 & 34 & -107 & 34 & 34 \\
-75 & 34 & 41 & -121 & 54 \\
-26 & 26 & 26 & 21 & -162
-\end{bmatrix}$$
-Where:
-IRV-based STV gives: $C>A>B$
-CPO-STV gives: $C>A>E$
-CTMC gives: $D>C>B\left(>E>A\right)$
-So all of a sudden one of the first candidates elected -- regardless of whether you go for a Condorcet method or not -- is the loser under CTMC. It should be noted that, in the CPO-STV as used, the Hagenbach-Bischoff quota $\frac{votes}{seats+1}$ was used rather than the more popular Droop quota $\frac{votes}{seats+1}+1$ or the Meek quota (widely considered fairer), which changes as votes are cast; and that A is only elected immediately because the HB quota of 25 is fulfilled, whereas the other two quotas of 26 would not have been. However, A would still have been voted for in either method: with the Droop quota, if I did this right, IRV-STV would have given $C>E>A$ and CPO-STV would have given $C>A>D$, so the above result is still a rather weird outcome, but the quota did have a significant effect on the result.
-Meanwhile, on Schulze-STV: https://en.wikipedia.org/wiki/Schulze_STV
-there are two different examples. For the first one I get:
-$$\begin{bmatrix}
--40 & 63 & 50 \\
-27 & -114 & 39 \\
-13 & 51 & -89
-\end{bmatrix}$$
-Where:
-STV gives: $A>B>C$
-Schulze-STV gives: $A>B>C$
-CTMC gives: $A>B>C$
-so they all agree, whereas in the other example, with vote management (one possible attack on STV systems), you get:
-$$\begin{bmatrix}
--52 & 63 & 38 \\
-27 & -114 & 39 \\
-25 & 51 & -77
-\end{bmatrix}$$
-Where:
-STV gives $A>C>B$
-Schulze-STV gives $A>B>C$
-CTMC gives $A>C>B$
-clearly showing that, if you were to use it as an STV method, CTMC would be vulnerable to vote management.
-So if it's viable at all, it's probably still really bad. The question remains open though. Just the tone changes: Just how bad is it? Is there anything good about it (in terms of this method fulfilling some interesting voting axiom) at all, or were all the examples above really just this lucky to have it work at all?
-
-REPLY [3 votes]: Note that a much simpler way to find the stationary distribution of a CTMC is to solve $\mathbf M \boldsymbol \pi = \mathbf 0$.
-One of the problems with this system is a severe vulnerability to candidate cloning. Imagine two candidates $A, B$ with a 60% majority preferring $A$. As expected, $A$ wins.
-$$
-\begin{align*}
-3 &: A > B \\
-2 &: B > A \\
-\end{align*} \\
-\mathbf M = \begin{bmatrix}-2 & 3 \\ 2 & -3\end{bmatrix}, \boldsymbol \pi = \begin{bmatrix}3 \\ 2 \end{bmatrix}
-$$
-Now suppose we add a candidate $C$ that's almost an exact copy of $B$, whose only purpose is to be slightly worse than $B$. Now $B$ wins!
-$$
-\begin{align*}
-3 &: A > B > C \\
-2 &: B > C > A \\
-\end{align*} \\
-\mathbf M = \begin{bmatrix}-4 & 3 & 3 \\ 2 & -3 & 5 \\ 2 & 0 & -8\end{bmatrix}, \boldsymbol \pi = \begin{bmatrix}12 \\ 13 \\ 3 \end{bmatrix}
-$$<|endoftext|>
-TITLE: For which rings does a polynomial in $R$ have finitely many roots?
-QUESTION [5 upvotes]: Which infinite rings satisfy the following:
-Every non-zero polynomial in $R[X]$ has only finitely many roots?
-Are there such rings which are not integral domains?
-
-REPLY [8 votes]: Assume $R$ is such an infinite ring, i.e. every non-zero polynomial has only finitely many roots.
-If $a \in R$ is a zero-divisor, the set of roots of $ax \in R[x]$ is precisely the annihilator of $a$, which is an ideal in $R$ -- and by assumption it is finite.
-Let $0 \neq b \in \operatorname{Ann}(a)$. Then $bR$ must be finite, because we have $bR \subset \operatorname{Ann}(a)$. Consider the map $$R \to bR, 1 \mapsto b.$$
-The image is finite, hence the kernel must be infinite (since $R$ is infinite). But the kernel is precisely the set of roots of $bx \in R[x]$, contradiction!
-Conclusion: An infinite ring (commutative, with $1$) satisfies your property iff it is an integral domain.
-
-REPLY [2 votes]: This is an incomplete answer to your question but covers a fair bit of ground. I'm unaware of any theorem that classifies this exactly for all rings, but I make no claims about being an expert in ring theory! I hope another answer comes along with broader classification results!
-So let's take care of the easy case first: If $R$ is an integral domain, then certainly each $p(x) \in R[x]$ of positive degree has finitely many roots. In fact, as you're undoubtedly aware, if $n = \deg p$ then $p(x)$ has at most $n$ roots in $R$. This can be proved by the usual inductive argument using the division algorithm.
-Thus this works for rings in which the division algorithm holds, right? Well, actually, no, not quite. The classical example is that over the quaternion ring $R=\mathbb{H}$ (which has left- and right-division algorithms) the polynomial $p(x) = x^2 + 1$ has infinitely many roots. As discussed in the answers to this question, the "usual inductive argument" I just skipped over for integral domains $R$ subtly relies on the fact that $R$ is commutative.
-Hence moving to non-commutative rings poses problems. Likewise, if $R$ (is infinite and) has zero divisors, then nonzero polynomials with infinitely many roots always arise. Indeed, if $R$ is infinite and $a,b \in R \setminus\{0\}$ with $ab = 0$, then $br$ is a root of $f(x) = ax$ for all $r \in R$.
-Hopefully this gives you some perspective on your question. If there are finer-toothed classifications of non-commutative rings in which all polynomials have finitely many roots, I hope to see them here.<|endoftext|>
-TITLE: Does $\sqrt{a+b} \le \sqrt a + \sqrt b$ hold for all positive real numbers a and b?
-
-QUESTION [15 upvotes]: I thought of this a while ago, but can't come up with a proof or a counterexample. Does anyone know more about this?
-$$\sqrt{a+b} \le \sqrt a + \sqrt b , \forall a,b \in \mathbb R_+$$
-Moreover, what happens with more variables? Say:
-$$\sqrt{x_1+x_2+...+x_n} \le \sqrt x_1 +\sqrt x_2 + ... + \sqrt x_n $$
-with $ x_i \in\mathbb R_+ \forall i \in \{1,2,...,n\} $
-Or when the index set is all of $\mathbb N$?
-PS: I tried looking on the internet for this, but I don't know what this is called.
-
-REPLY [5 votes]: Hint
-$$\sqrt{a+b} \leq \sqrt{a+2\sqrt{ab}+b}=\sqrt{(\sqrt{a}+\sqrt{b})^2}$$
-The same way,
-$$\sqrt{x_1+...+x_n} \leq \sqrt{x_1+...+x_n+2(\sqrt{x_1x_2}+\sqrt{x_1x_3}+..+\sqrt{x_{n-1}x_n})}=\sqrt{\left(\sqrt{x_1}+...+\sqrt{x_n}\right)^2}$$
-
-REPLY [2 votes]: Suppose you have proved the inequality for $n=2$. Suppose it holds for some $n\ge2$; then
-$$\def\vA{\vphantom{A}}
-\sqrt{x_1+\dots+x_n+x_{n+1}\vA}
-\le
-\sqrt{x_1+\dots+x_n\vA}+\sqrt{x_{n+1}\vA}
-\le
-\sqrt{x_1\vA}+\dots+\sqrt{x_n\vA}+\sqrt{x_{n+1}\vA}
-$$
-The first $\le$ is from the case $n=2$, the second one from the induction hypothesis.
-For the $n=2$ case, just square.<|endoftext|>
-TITLE: Is this limit indeterminate or $e^2$ or what?
-QUESTION [5 upvotes]: What is the answer to this:
-
-$$
-\lim_{x\to ∞} \left({2x+3\over 2x-1}\right)^x
-$$
-
-My calculator says this is $ e^2 $ but the only answer I can get to is $ 1^\infty $, which is indeterminate.
-
-REPLY [3 votes]: Using the limit definition of the exponential function
-$$e^z=\lim_{x\to \infty}\left(1+\frac zx\right)^x$$
-we can write
-$$\begin{align}
-\lim_{x\to \infty}\left(\frac{2x+3}{2x-1}\right)^x&=\lim_{x\to \infty}\left(\frac{1+\frac{3/2}{x}}{1+\frac{-1/2}{x}}\right)^x\\\\
-&=\frac{\lim_{x\to \infty}\left(1+\frac {3/2}{x}\right)^x}{\lim_{x\to \infty}\left(1+\frac {-1/2}{x}\right)^x}\\\\
-&=\frac{e^{3/2}}{e^{-1/2}}\\\\
-&=e^2
-\end{align}$$<|endoftext|>
-TITLE: Day convolution intuition
-QUESTION [13 upvotes]: In the nLab, Day convolution is introduced as a generalisation of convolution of complex-valued functions, but I'm wondering how exactly to understand this.
-I can (just about) parse the definitions, but have absolutely no intuition or geometric insight at all.
-Here is my thinking so far (excuse all of the quotation marks):
-
-A way of thinking about simple examples of presheaves is to imagine an association: to each open set of a topological space associate a set of continuous functions on that open set (really we could obtain a group or ring structure on this set of functions, but let's ignore that for a moment and just look at set-valued presheaves).
-Then a presheaf is a 'function' that maps an open set to a set of functions. So a convolution of presheaves is obtained by 'blurring' these 'functions' together.
-
-But what this actually means is rather beyond me.
-The issue (for me) is twofold:
-
-What does Day convolution look like in this simple case where we take our starting category to be $\mathsf{Op}(T)$, which is the category of open sets and inclusion maps of some topological space $T$ (which is usually the motivating example for presheaves)?
-In fact, can we even look at this example?
-As far as I can see, $\mathsf{Op}(T)$ doesn't admit a monoidal structure;
-What does Day convolution look like generally?
-Given some convolution of presheaves, are there any simple examples (if 1. doesn't work) that give a good intuition, where the Day convolution has a reasonably succinct description?
-
-Edit:
-
-How does Day convolution fit in with 'regular' convolution?
-That is, can we recover the usual convolution from Day convolution?
-
-I hope this question is rigorous enough, and if not then I'll try to edit it to be more so.
-
-REPLY [19 votes]: Day convolution is a categorification of the monoid algebra construction. There is a formal analogy between the two, but one is not a literal generalisation of the other. So to address your question 3, we should not expect to recover the usual convolution from Day convolution.
-Let's develop the following analogy:
-\begin{array}{|c|c|} \hline
-\textbf{monoid algebra} & \textbf{Day convolution} \\ \hline \hline
-\text{set} & \text{category} \\ \hline
-\text{monoid} & \text{monoidal category} \\ \hline
-\text{ring } R & \text{monoidally cocomplete category } \mathcal{V} \\ \hline
-R\text{-module} & \text{cocomplete } \mathcal{V}\text{-category} \\ \hline
-R\text{-algebra} & \text{monoidally cocomplete } \mathcal{V}\text{-category} \\ \hline
-\text{free } R\text{-module on a set } X& \text{free cocomplete } \mathcal{V}\text{-category on a category } \mathcal{C}\\
-R^{(X)}& [\mathcal{C}^\text{op},\mathcal{V}]\\ \hline
-\text{free } R\text{-algebra on a monoid } M & \text{free monoidally cocomplete } \mathcal{V}\text{-category}\\
-& \text{on a monoidal category } \mathcal{A} \\
-R^{(M)} \text{ with convolution product} & [\mathcal{A}^\text{op},\mathcal{V}] \text{ with Day convolution}\\ \hline
-\end{array}
-Here a monoidally cocomplete category is a cocomplete monoidal category $\mathcal{V}$ such that $\otimes \colon \mathcal{V} \times \mathcal{V} \to \mathcal{V}$ is cocontinuous in each variable. This condition corresponds in our analogy to the distributivity of multiplication over addition in a ring.
-Let $(e_x)_{x\in M}$ be the canonical basis for $R^{(M)}$, so that each element of $R^{(M)}$ can be written $f = \sum_{x} f(x) e_x$. The convolution product on $R^{(M)}$ is then determined by the requirement that $M \to R^{(M)}, x \mapsto e_x$ is a monoid homomorphism. For:
-\begin{equation}
-\begin{split}
- f \ast g & = \left(\sum_{x} f(x) e_x \right) \ast \left(\sum_{y} g(y) e_y \right) \\
-& = \sum_{x,y} f(x)g(y) e_x \ast e_y \\
-& = \sum_{x,y} f(x)g(y) e_{xy}.
-\end{split}
-\end{equation}
-An analogous argument gives the formula for Day convolution. The representables $\mathcal{A}(-,A)$ provide a ''basis'' of $[\mathcal{A}^{op},\mathcal{V}]$: each object may be expressed as the canonical colimit $$F \cong \int^{A} FA \otimes \mathcal{A}(-,A).$$ The Day convolution is determined by the requirement that the Yoneda embedding $\mathcal{A} \to [\mathcal{A}^\text{op},\mathcal{V}]$ be strong monoidal. We have:
-\begin{equation}
-\begin{split}
- F \ast G & \cong \left(\int^{A} FA \otimes \mathcal{A}(-,A) \right) \ast \left(\int^{B} GB \otimes \mathcal{A}(-,B) \right) \\
-& \cong \int^{A,B} F(A)\otimes G(B) \otimes \mathcal{A}(-,A) \ast \mathcal{A}(-,B) \\
-& \cong \int^{A,B} F(A)\otimes G(B) \otimes \mathcal{A}(-,A\otimes B).
-\end{split}
-\end{equation}
-Note that we have used the requirement that the Day convolution product must preserve colimits in each variable.
-Now, Day convolution can be defined for the more general case of a promonoidal category $\mathcal{A}$. Here we can continue our analogy and think of the promonoidal structure as providing the ''structure coefficients'' of the Day convolution product.<|endoftext|>
-TITLE: Are existentially defined subsets of affine algebraic sets unions of a finite number of affine algebraic sets?
-
-QUESTION [6 upvotes]: Consider a set of polynomials in $\mathbb{C}[x_1,\dots,x_n]$. The zero locus of these polynomials $Z$ is a subset of $\mathbf{A}^n$ and is an affine algebraic set.
-Now, consider the following subset of $\mathbf{A}^{n-1}$:
-$$
-S = \left \{ (x_1,\dots,x_{n-1}) \, \middle| \, (x_1,\dots,x_{n-1}) \in \mathbf{A}^{n-1} \text{ s.t. } \exists \, x_n \text{ where } (x_1,\dots,x_n) \in Z \right \}
-$$
-Does this operation have a name?
-Is $S$ equal to the union of a finite number of affine algebraic sets (as sets of points)? Clearly if $S$ is finite this is true.
-If this does not hold, are there any other useful ways to decompose $S$, or indeed can anything useful be said about such sets?
-
-REPLY [7 votes]: Here's an example showing that $S$ is not always a finite union of algebraic sets. Let $Z$ be the zero locus of the single polynomial $x_1x_2 - 1$. Then $S = \mathbb{A}^1\setminus \{0\}$.
-What is true is that $S$ is always a finite union of sets defined by finitely many polynomial equations (basic Zariski closed sets) and negated equations (basic Zariski open sets). Such a set is called a constructible set, and the fact that the projection of a Zariski-closed set (or more generally a constructible set) is a constructible set is known as Chevalley's theorem in algebraic geometry.
-The $S$ in the example above is defined by the single negated equation $x_1\neq 0$.
-As a logician, I prefer to think of this in terms of quantifier elimination in the theory of algebraically closed fields. This result, due to Tarski, says that a subset of $K^n$ ($K$ algebraically closed) defined by a first-order formula (built up from polynomial equations by finite Boolean combinations and quantifiers) can actually be defined without quantifiers. The set $S$ in your question is defined by the first-order formula $$\exists x_n\, \bigwedge_{i = 1}^k f_i(x_1,\dots,x_n) = 0.$$
-Putting the quantifier-free formula we get from quantifier elimination in disjunctive normal form, it looks like $$\bigvee_{i = 1}^n \bigwedge_{j = 1}^m \varphi_{ij}(\overline{x}),$$
-where each $\varphi_{ij}(\overline{x})$ is $p_{ij}(\overline{x}) = 0$ or $p_{ij}(\overline{x})\neq 0$ for some polynomial $p_{ij}$. This is explicitly a finite union of sets defined by finitely many polynomial equations and negated equations.
-
-REPLY [5 votes]: The operation is called projection. The first-order theory of the complex field (which is the same as the first-order theory of algebraically closed fields of characteristic $0$) admits quantifier elimination. This means that $\exists x_n\, (x_1, \ldots, x_n) \in Z$ is equivalent to a propositional combination of primitive formulas of the form $p_j(x_1, \ldots, x_n) = 0$ for some finite set of polynomials $p_j$. Hence $S$ can be obtained from a finite set of algebraic sets using union, intersection and complement.
-[Aside: analogous results hold over the real field, in which case the definable sets are called semi-algebraic sets and the primitive formulas also include formulas of the form $p_j(x_1, \ldots, x_n) > 0$.]<|endoftext|>
-TITLE: Can you get a closed-form for $\prod_{p\text{ prime}}\left(\frac{p+1}{p-1}\right)^{\frac{1}{p}}$?
-
-QUESTION [5 upvotes]: When I use the Taylor series expansion for $$\log(1+x)^{1+x}+\log(1-x)^{1-x}$$ with $x=\frac{1}{p}$, $p$ prime, I believe that I can deduce
-$$\sum_{p\text{ prime}}\left(\frac{1}{p^2}+\frac{1}{2\cdot3p^4}+\frac{1}{3\cdot5p^6}+\frac{1}{4\cdot7p^8}+\cdots\right)=\log\frac{6}{\pi^2}+\log\prod_{p\text{ prime}}\left(\frac{p+1}{p-1}\right)^{\frac{1}{p}}.$$
-If the previous computations are right and can be justified, I want to ask the following
-
-Question. Can you compute
- $$\prod_{p\text{ prime}}\left(\frac{p+1}{p-1}\right)^{\frac{1}{p}}?$$
-
-Thanks in advance.
-My goal is to learn, so if you can give the details of how the more important steps are justified, all the better. Also, how can we justify the convergence of such an infinite product?
-
-REPLY [7 votes]: The prime products that are 'easy' to evaluate are usually those that can somehow be related to the Euler product for the Riemann $\zeta$ function
-$$\frac{1}{\zeta(s)} = \prod_p \left[1 - \frac{1}{p^s}\right]\tag{1}$$
-Your product is not of this form, or anything resembling it, which makes evaluating it much harder, so I doubt that there is a known closed form for it. Note for example that there is not even a known closed form for the much simpler product $\prod_p\left[1- \frac{2}{p^2}\right]$. However we can derive a very good approximation for your product.
-For $x\ll 1$ we have the approximation $(1+x)^m \approx 1+mx$. By taking $m = x = \frac{1}{p}$ we obtain
-$$\left(\frac{1+\frac{1}{p}}{1-\frac{1}{p}}\right)^{\frac{1}{p}}\approx \frac{1+\frac{1}{p^2}}{1-\frac{1}{p^2}}~~~~\text{for large }p$$
-The product of the term above can be evaluated since $\prod_{p} \left[1-\frac{1}{p^2}\right] = \frac{6}{\pi^2}$ and $\prod_{p} \left[1+\frac{1}{p^2}\right] = \frac{15}{\pi^2}$. These products can be derived from the Euler product $(1)$ as $\frac{1}{\zeta(2)}$ and $\frac{\zeta(2)}{\zeta(4)}$ respectively. Now by multiplying together the first $N$ terms in your product and using the approximation above for the remaining terms we obtain the simple approximation
-$$\prod_p \left(\frac{p+1}{p-1}\right)^{\frac{1}{p}} \approx \frac{5}{2}\prod_{n=1}^N\left(\frac{p_n+1}{p_n-1}\right)^{\frac{1}{p_n}}\frac{p_n^2-1}{p_n^2+1}$$
-which gets better and better the larger we take $N$. For example if $N=3$ we get $\frac{72}{65} 2^{2/15} 3^{7/10}\simeq 2.62145$, which is within $0.04\%$ of the true answer $2.62239915779\ldots$ found by numerically evaluating the sum for $N=10000$.<|endoftext|>
-TITLE: Find all real $x$ such that $1990[x] +1989[-x]=1$ (where $[x]$ is the floor function for $x$).
-QUESTION [7 upvotes]: Find all real $x$ such that $1990[x] +1989[-x]=1$ (where $[x]$ is the
- floor function for $x$).
-
-My effort
-Rearranging our equation we have:
-\begin{array}{c}
-1990[x]+1989[-x]&=1 \\
-1989([x]+[-x])+[x] &=1 \\
-\end{array}
-Supposing that $x$ is an integer, I have that $[x]+[-x]=0$ and the problem breaks down to
-$$[x]=1$$
-which has the only solution $x=1$.
-Otherwise, $x$ is a real number with nonzero fractional part and $[x]+[-x]=-1$, which yields in our case
-\begin{array}{c}
--1989 + [x] &= 1 \\
-[x] &=1990 \\
-\end{array}
-For this to happen we must therefore have that $x \in (1990,1991)$.
-
-Question
-Is my effort complete and correct? What other ways could one use to approach the problem?
-
-REPLY [2 votes]: More generally, for $m,n,a \in \mathbb Z$, we look at the problem of determining all $x \in \mathbb R$ such that
-$$ m \lfloor x \rfloor + n \lfloor -x \rfloor = a.
$$
-Write $x=\lfloor x \rfloor + \{x\}$, with $\lfloor x \rfloor \in \mathbb Z$ and $0 \le \{x\}<1$. Thus, $x \in \mathbb Z$ if and only if $\{x\}=0$.
-If $\{x\}=0$, then $x \in \mathbb Z$, and so $-x \in \mathbb Z$. Thus, $\lfloor -x \rfloor=-\lfloor x \rfloor$ in this case.
-If $\{x\}>0$, then $-x=(-\lfloor x \rfloor -1)+(1-\{x\})$, where $-\lfloor x \rfloor -1 \in \mathbb Z$ and $0<1-\{x\}<1$. Thus, $\lfloor -x \rfloor=-\lfloor x \rfloor-1$.
-We combine these two cases as
-$$ \lfloor x \rfloor + \lfloor -x \rfloor = \begin{cases} 0 & \:\mbox{if}\: x \in \mathbb Z; \\ -1 & \:\mbox{if}\: x \notin \mathbb Z. \end{cases} $$
-So for $x \in \mathbb Z$,
-$$ a = m \lfloor x \rfloor + n \lfloor -x \rfloor = n \left( \lfloor x \rfloor + \lfloor -x \rfloor \right) + (m-n) \lfloor x \rfloor = (m-n) \lfloor x \rfloor = (m-n)x, $$
-and for $ x \notin \mathbb Z$,
-$$ a = m \lfloor x \rfloor + n \lfloor -x \rfloor = n \left( \lfloor x \rfloor + \lfloor -x \rfloor \right) + (m-n) \lfloor x \rfloor = -n + (m-n) \lfloor x \rfloor. $$
-We summarize the solution set as follows:
-$\bullet$ If $m=n$, then
-$$ \begin{cases} x \in \mathbb Z & \:\mbox{if}\: a=0, n \ne 0, \\ x \in \mathbb R & \:\mbox{if}\: a=0, n=0, \\ x \in \mathbb R \setminus \mathbb Z & \:\mbox{if}\: a \ne 0, a+n=0, \\ \text{no solution} & \:\mbox{if}\: a \ne 0, a+n \ne 0. \end{cases} $$
-$\bullet$ If $m \ne n$, then
-$$ \lfloor x \rfloor = \frac{a+n}{m-n}. $$
-Additionally, if $x \in \mathbb Z$, then $x=\frac{a}{m-n}$.
-The solution set in the particular case $n=m-1$, $a=1$ is
-$$ x=1 \:\:\text{or}\:\: m < x < m+1. \quad \blacksquare $$<|endoftext|>
-TITLE: Summation of a term to infinity
-QUESTION [15 upvotes]: I read through many tutorials but none mentioned this explicitly.
-Is the following conversion valid?
-$$\sum_{k=0}^\infty \frac{k-1}{2^k} = \lim_{n\to \infty} \sum_{k=0}^n \frac{k-1}{2^k}$$
-Please excuse me if it seems stupid or too simple to ask in the forum.
-
-REPLY [29 votes]: It's not only valid, it's how it's defined.
-
-Note that the operation "addition" is defined only if we apply it a finite number of times. Thus, adding an infinite number of terms doesn't make sense. We'll have to define it as a limit, as that only involves nice, finite sums.
-
-REPLY [16 votes]: This is the definition of an infinite series. It is the limit of the partial sums $S_n$:
-$$S_n = \sum_{k = 0}^n a_k$$
-$$\sum_{k = 0}^{\infty} a_k := \lim_{n \to \infty} S_n = \lim_{n \to \infty} \sum_{k = 0}^n a_k$$<|endoftext|>
-TITLE: Proof of the Stratonovich integral?
-QUESTION [5 upvotes]: Computing the integral $\int \Phi(x_t,t)\,dx_t$: in the Stratonovich form we can write the integral as the mean-square limit
-$$\int \Phi(x_t,t)dx_t=\lim_{\Delta \to 0} \sum_{j=1}^{N-1} \Phi\left(\frac {x(t_j)+x(t_{j+1})}{2},t_j\right)[x(t_{j+1})-x(t_j)]\ \ \ \ \ \ .\ (3)$$
-and in Itô's form as $$\int \Phi(x_t,t)dx_t=\lim_{\Delta \to 0} \sum_{j=1}^{N-1} \Phi( x(t_j),t_j)[x(t_{j+1})-x(t_j)] \ \ \ \ \ \ \ \ \ \ .\ (4)$$
-Let us prove the existence of the limit in $(3)$ and find the formula relating
-the two indicated integrals. To do this we select the $\Delta$ partitioning and consider
-the difference between the limit expressions on the right-hand sides of
-$(3)$ and $(4)$.
Making use of the differentiability with respect to $x$ of the function $\Phi(x_t, t)$, we get
-$$D_{\Delta}=\sum_{j=1}^{N-1} \left[\Phi\left(\frac {x(t_j)+x(t_{j+1})}{2},t_j\right)-\Phi(x(t_j),t_j)\right][x(t_{j+1})-x(t_j)] \ \ \ \ \ \ .\ (5)$$ $$= \frac{1}{2} \sum ^{N-1}_{j=1} \frac {\partial \Phi}{\partial x}\left((1-\theta)x(t_j)+\theta x(t_{j+1}),t_j\right)[x(t_{j+1})-x(t_j)]^2 ,\quad 0\le \theta \le 1/2,\ t_j=t_j^{\Delta}. $$ My question is how to proceed after $(5)$, and how this final equation comes about.
-
-REPLY [3 votes]: If $x \mapsto \Phi(x,t)$ is differentiable, it follows from Taylor's formula that
-$$\Phi(x,t) = \Phi(y,t) + (x-y) \frac{\partial}{\partial x} \Phi(\zeta,t)$$
-for some intermediate point $\zeta$ between $x$ and $y$ (i.e. we can find $\lambda \in (0,1)$ such that $\zeta = \lambda x+ (1-\lambda) y$). Using this identity for
-$$x :=\frac{x(t_j)+x(t_{j+1})}{2} \qquad y := x(t_j) \qquad t = t_j$$
-we find
-$$\Phi \left( \frac{x(t_j)+x(t_{j+1})}{2}, t_j \right) = \Phi(x(t_j),t_j)+ \frac{x(t_{j+1})-x(t_j)}{2} \frac{\partial}{\partial x} \Phi(\zeta,t_j) \tag{1}$$
-with
-$$\zeta = \lambda x(t_j)+ (1-\lambda) \frac{x(t_j)+x(t_{j+1})}{2} \tag{2} $$
-for some $\lambda \in (0,1)$. Note that $(2)$ is equivalent to
-$$\begin{align*} \zeta &= x(t_j) \left[ \frac{2\lambda}{2} + \frac{(1-\lambda)}{2} \right] + \underbrace{\frac{1-\lambda}{2}}_{=:\theta} x(t_{j+1}) \\ &= x(t_j)(1-\theta) + \theta x(t_{j+1}) \end{align*}$$
-for some $\theta \in (0,1/2)$. Hence, by $(1)$,
-$$\Phi \left( \frac{x(t_j)+x(t_{j+1})}{2}, t_j \right) -\Phi(x(t_j),t_j)= \frac{x(t_{j+1})-x(t_j)}{2} \frac{\partial}{\partial x} \Phi ( x(t_j)(1-\theta) + \theta x(t_{j+1}), t_j).$$
-Multiplying this expression by $x(t_{j+1})-x(t_j)$ and summing over $j=1,\ldots,N-1$ yields the identity you are looking for. (Mind that $\theta = \theta(j)$; we cannot expect to find one $\theta$ which works for all $j=1,\ldots,N-1$.)<|endoftext|>
-TITLE: How does advancing through the math major work?
-QUESTION [20 upvotes]: I am an undergrad math major that just completed Calculus 3 last semester. This semester I signed up for Discrete Mathematics, and will be taking Intro to Advanced/Abstract Math next.
-Of course -- I expected the numbers and computation to be larger and much more complex, but instead am finding that there is hardly any number-crunching at all. Just a lot of proofs and logic skills. What gives? I thought the more advanced math got, the more complex the number-crunching would get.
-Is it always going to be like this from now on? For both pure and applied math majors?
-And secondly -- after completing four years of this stuff, how in the world do you guys remember every rote memorization technique taught step-by-step during freshman year for all of your past courses in trigonometry, college algebra, and calculus 1-3 classes? There are usually like 4-5 steps per technique, with 3-4 techniques per section w/ 10 sections per chapter of a book!
-Take for instance my College Algebra book, which is 500 pages long -- I can't even remember every single section's memorized recipe for answering the problems after a year of doing Calculus, let alone the three years it will take me to graduate as a math major!
-Maybe I just don't understand, but it seems like all of the earlier useful rote memorization technique is going to get lost. Is this supposed to be the case?
-
-REPLY [2 votes]:
-
-I thought the more advanced math got, the more complex the number-crunching would get.
-
-Yes, it's so advanced that you crunch arbitrary numbers¹! (Like $n$ or $x$ or $z$.)
Sometimes you also crunch $n$-tuples of arbitrary numbers (called vectors), or more advanced number-like things called cohomology classes, or ideals, or...
-¹a/k/a variables
-And there are so many number-crunching techniques that you really have to have a firm understanding of basic logic. You get a glimpse of the real fun in the final year of your undergraduate degree, but it really starts during your Master's or PhD and just gets better from there.
-Enjoying hands-on computation with real numbers is important for understanding how to crunch arbitrary numbers, so I'd say it's great that you enjoy number crunching.
-
-
-Maybe I just don't understand, but it seems like all of the earlier useful rote memorization technique is going to get lost. Is this supposed to be the case?
-
-Oh, rote memorization is for kindergarten. You need to remember very little, because you'll be able to quickly derive the formulae you need. (To derive the sine/cosine equalities – like double angle formulae, etc. – all you need is Pythagoras' Theorem and the law for exponentials $(a^m)^n = a^{mn}$.)<|endoftext|>
-TITLE: are two metrics with same compact sets topologically equivalent?
-QUESTION [5 upvotes]: Are two metrics with the same compact sets topologically equivalent?
-I think that if the set is finite, then essentially there is one metric, the discrete metric, and every metric on this set is equivalent to the discrete metric.
-Now let $X$ be an infinite set. In this case I consider $X= \Bbb{N}$ with $d(x,y)=|x-y|$ and $k(x,y)$ the discrete metric. Then $k$ and $d$ have the same compact sets and are topologically equivalent, because every singleton set is open. So: are two metrics with the same compact sets topologically equivalent (on an infinite set)?
-
-REPLY [8 votes]: Yes, because in particular they have the same convergent sequences (a convergent sequence together with its limit is a compact subset). And so the same closed sets (for metric spaces $X$, a subset $C$ is closed iff every sequence from $C$ that converges in $X$ has its limit in $C$), and so the same open sets as well.<|endoftext|>
-TITLE: Convergence of Fourier sine and cosine series
-QUESTION [6 upvotes]: Discuss whether or not it is possible to have a Fourier series
- $$a_0+\sum_{k=1}^\infty[a_k\cos(kx)+b_k\sin(kx)]$$ converge for all
- $x$ without either $$a_0+\sum_{k=1}^\infty a_k\cos(kx) \text{ or }
- \sum_{k=1}^\infty b_k\sin(kx)$$ converging.
-
-This is a problem in Bressoud's analysis book and my solution is as follows: "No, because if we let $f(x)=a_0+\sum_{k=1}^\infty[a_k\cos(kx)+b_k\sin(kx)]$, then the two other series are obtained by taking $\frac{f(x)\pm f(-x)}{2}$ and since $f(x)$ is convergent for all $x$ the sine and cosine series should also be convergent."
-Here is the hint from the back of the book:
-
-If the Fourier series converges at $x=0$, then $\sum_{k=1}^\infty a_k$
- converges, and therefore the partial sums of $\sum_{k=1}^\infty a_k$
- are bounded.
-
-Although I think my solution is correct (please correct me if I'm wrong), I would still like to see other solutions and in particular understand the author's hint, since I can't see how the boundedness of partial sums can help.
-Thanks!
-
-REPLY [2 votes]: 1. Proceeding by contraposition: if one of those two series $\Sigma a_i$, $\Sigma b_i$ doesn't converge (that they are absolutely convergent follows from the fact that the Fourier series converges everywhere), i.e.
equivalently, if one of $\Sigma a_i$, $\Sigma b_i$ has unbounded partial sums, then that implies that the partial sums of the Fourier series itself must also be unbounded (the sum of the limits is the limit of the sum), which contradicts the assumption that the Fourier series converges.
-That is, the boundedness of partial sums that you mentioned is a
- necessary condition for series convergence; hence if it fails for
- either $\Sigma a_i$ or $\Sigma b_i$, then it fails for the Fourier
- series as a whole, and hence the Fourier series does not converge at
- all $x$.
-(Note that this follows because of the absolute convergence of the series and the monotone/dominated convergence theorem.)
-2. Your solution is also correct. For example, if $\frac{f(x)-f(-x)}{2}$ is not convergent, then either $f(x)$ or $f(-x)$ is not convergent, which contradicts the assumption that the Fourier series converges everywhere.<|endoftext|>
-TITLE: Pythagorean triplets of the form $a^2+(a+1)^2=c^2$ and the space between them
-QUESTION [22 upvotes]: I was searching for Pythagorean triples where $b=a+1$, and using a Python program I made I found the first 10 integer solutions:
-
-$0^2+1^2=1^2$
-$3^2+4^2=5^2$
-$20^2+21^2=29^2$
-$119^2+120^2=169^2$
-$696^2+697^2=985^2$
-$4059^2+4060^2=5741^2$
-$23660^2+23661^2=33461^2$
-$137903^2+137904^2=195025^2$
-$803760^2+803761^2=1136689^2$
-$4684659^2+4684660^2=6625109^2$
-
-Now what's so interesting? I discovered that any $c$, divided by the previous one (for example $5/1$ or $29/5$), converges to $5.828427...=\left(\frac{1}{\sqrt2-1}\right)^2=\sqrt8+3$. My question: why?
-
-REPLY [2 votes]: This explains why the ratios converge to the square of the silver ratio: all of the primitive Pythagorean triples can be generated by Pellian sequences, and therefore the ratios of consecutive hypotenuses from any of those sequences converge to the square of the silver ratio.
-All of the primitive triples can be listed by sequences with the same recursion relation, just different initial values. The initial values will only affect the coefficients in the explicit forms for those sequences. This will not affect what their ratios converge to as $n$ goes to infinity for each sequence.
-THEOREM: All of the primitive Pythagorean triples can be generated, ordered and
-largely sorted without redundancy by substitution into $\{v^2 - u^2,\ 2uv,\ v^2 + u^2 \}$ of consecutive pairs of terms of the Pell numbers and similar sequences formed by the same recursion relation, $P(n + 2) = 2P(n + 1) + P(n)$, and initial values $n,\ n + m$ and $n - m,\ 3n - 2m$, where $n$ and $m$ are positive integers, $n > m$, $\gcd(m,n) = 1$,
-and $m$ is odd.
-(The leg difference of triples generated from both sequences is $d = 2n^2 - m^2$.
-And the Pell numbers can be considered the special case of a singleton sequence:
-$n,\ n + m, \ldots$, where $n = m = 1$.)<|endoftext|>
-TITLE: Finding the shortest distance between two Parabolas
-QUESTION [5 upvotes]: Recently, a problem asked me to find the minimum distance between the parabolas $y=x^2$ and $y=-x^2-16x-65$.
-I proceeded with the problem as follows.
-Let $P(a,a^2), Q(b, -b^2-16b-65), a-b=x$.
-$\therefore PQ^2=x^2+(2a^2+2ax+16a+x^2+16x+65)^2$.
-$PQ^2=x^2+(2(a+\frac{x+8}{2})^2+\frac{(x+8)^2+2}{2})^2 \ge (x^2+(\frac{(x+8)^2+2}{2})^2)(1+\frac{1}{4}) \times \frac{4}{5}$
-Applying Cauchy gives us that
-$PQ^2 \ge (\frac{1}{4}x^2+3x+\frac{33}{2})^2 \times \frac{4}{5} \ge (\frac{15}{2})^2 \times \frac{4}{5}=75$
-This implies that the answer is $\sqrt{75}$.
-
-However, it took me a long time to find the values for Cauchy, and the calculations proved tedious.
-What are other approaches to this problem?
-EDIT: $(\frac{15}{2})^2 \times \frac{4}{5} \neq 75$; it's $45$ actually!
-
-REPLY [2 votes]: Suppose the closest points are $(a,a^2)$ and $(b,-b^2-16b-65)$.
-As @brevan-ellefsen noted, the slopes of these parabolas are equal at these points, so $2a=-2b-16$, or $b=-a-8$.
-The closest points are therefore $(a,a^2)$ and $(-a-8,-(-a-8)^2-16(-a-8)-65)$ for some value of $a$. The squared distance between these points is $$s(a)=(2a+8)^2+(a^2+(a+8)^2-16(a+8)+65)^2=65 + 32 a + 8 a^2 + 4 a^4.$$ This is smallest when $s'(a)=16(a^3+a+2)=16(a+1)(a^2-a+2)=0$, or when $a=-1$. When $a=-1$, $s(a)=65 -32 + 8 + 4=45$, so the minimum distance between the parabolas is $\sqrt{45}$.<|endoftext|>
-TITLE: If the tensor product of algebras $A \otimes B$ is unital, both $A$ and $B$ must be unital
-QUESTION [6 upvotes]: It is clear that if $A$ and $B$ are unital algebras (over a field), then the tensor product $A \otimes B$ is also unital, with the unit being $1_A \otimes 1_B$. I came across an exercise that asks about the converse statement. That is, if $A \otimes B$ is a unital non-zero algebra, then $A$ and $B$ must also be unital.
-I started by denoting by $e$ the unit of $A \otimes B$. We can write $e = \sum_{i=1}^{n} a_i \otimes b_i$, with $n$ being minimal. This minimality implies that $a_1, \cdots, a_n$ and $b_1, \cdots, b_n$ are linearly independent. If we can prove that $n = 1$, then $e = a \otimes b$ is a pure tensor. These elements $a \in A$ and $b \in B$ are the ideal candidates for units in $A$ and $B$, respectively. However, I have not been able to arrive at a contradiction if $n > 1$ using only the basic tools and computations of tensors. Since no properties of $A$ or $B$ are assumed, I do not know what other tools can be used in this generality.
-Note: Said exercise can be found in Introduction to Noncommutative Algebra by M. Bresar, chapter 4, page 104.
-
-REPLY [2 votes]: I was able to find a positive answer thanks to my lecturer who, as far as I know, is not present on M.SE. I will write a proof based on the idea he gave me; the credit is his.
-Let $A$, $B$ be algebras over a field $F$ and suppose $A \otimes B$ is nonzero and unital. Since $A \otimes B$ is nonzero, both $A$ and $B$ are nonzero. Take arbitrary elements $0 \neq x \in A$ and $y \in B$. Write $e = \sum_{i=1}^{n} a_i \otimes b_i$ (as I did in the question). Since $e$ is the unit in $A \otimes B$, we have
-\begin{align} x \otimes y = (x\otimes y)e = \sum_{i=1}^{n} xa_i \otimes yb_i \tag{1}. \end{align}
-Furthermore, since $x \neq 0$, we can extend $x$ to a basis of $A$ and hence write $xa_i = \lambda_ix + v_i$, where $\lambda_i \in F$ and $v_i$ lies in the span of the basis elements other than $x$. Plugging this into $(1)$, we get
-\begin{align} x \otimes y = \sum_{i=1}^{n} (\lambda_i x +v_i)\otimes yb_i = x \otimes\sum_{i=1}^{n} \lambda_i y b_i + \sum_{i=1}^{n} v_i \otimes yb_i \tag{2}\end{align}
-or equivalently, after rearranging terms,
-\begin{align} x \otimes (y - \sum_{i=1}^{n}\lambda_i y b_i)= \sum_{i=1}^{n} v_i \otimes yb_i \tag{3}.\end{align}
-Since $x$ is linearly independent of each $v_i$, we conclude that
-\begin{align} y = \sum_{i=1}^{n}\lambda_i y b_i = y (\sum_{i=1}^{n} \lambda_ib_i) \tag{4}\end{align}
-(via a result on linear independence and tensor products, for instance, Lemma 4.8 in Bresar).
-
-Analogously, using $x \otimes y = e(x \otimes y)$, we can conclude that $y = (\sum_{i=1}^{n} \lambda_ib_i)y$, and since $y \in B$ is arbitrary, this means that $\sum_{i=1}^{n} \lambda_ib_i$ is the unit of $B$.
-The same argument works for proving that $A$ has a unit as well.<|endoftext|>
-TITLE: What is the rate of convergence in the uniform local limit theorem?
-QUESTION [6 upvotes]: Let $(X_i)_i$ be a sequence of iid random variables, s.t. for some sequences $a_n, b_n$ the normalized sum $$Z_n=\frac{X_1+\dots+X_n}{b_n}-a_n$$ converges weakly to an $\alpha$-stable distributed random variable $Z$ with density $q$. It is known that if $Z_n$ has a bounded density $p_n$, then we have a uniform local limit theorem
-$$\Delta_n=\sup_{x\in\mathbb{R}}|p_n(x)-q(x)|\to 0.\tag{1}\label{1}$$
-Question: What can we say about the rate of convergence $\Delta_n\to 0$?
-I found a result concerning $L^p$-convergence, see Banys 1975, but no asymptotic expansion or at least some large/small-o estimates for the case $p=\infty$.
-I would also appreciate any comments concerning related topics, e.g. the rate of convergence for the corresponding characteristic functions (since \eqref{1} is shown by Fourier inversion).
-Thanks in advance
-
-REPLY [2 votes]: (A late answer, since I recently came across a similar question.)
-The best result I found is by Basu and Maejima:
-
-If the distribution of $X_1$ is absolutely continuous with density $f_{X_1}$ and belongs to the normal domain of attraction of an $\alpha$-stable random variable (strictly stable if $\alpha\in (0,1]$) with density $f_Z$, the characteristic function of $X_1$ is integrable to some power, and
- $$
-\int_{-\infty}^\infty x^{\lfloor\alpha\rfloor+1}|f_{X_1}(x) - f_Z(x)|dx<\infty,\tag{1}
-$$
- then (with $Z_n$ as above)
- $$
-\sup_{x\in \mathbb{R}}(1+|x|^{\alpha})|f_{Z_n}(x) - f_Z(x)|= O(n^{1-(1+\lfloor\alpha\rfloor)/\alpha}),\quad n\to\infty. \tag{2}
-$$
-
-The condition (1) about the existence of pseudomoments is quite restrictive. Rachev and Yukich instead pose assumptions on certain "ideal metrics". Such assumptions are weaker than (1) and are close to necessary, but I found them rather hard to verify.<|endoftext|>
-TITLE: Show that $8x^4 −16x^3 +16x^2 −8x+k = 0$ has at least one non-real root for all real $k$. Find the sum of the non-real roots
-QUESTION [6 upvotes]: Show that $8x^4 −16x^3 +16x^2 −8x+k = 0$ has at least one non-real root for all real $k$. Find the sum of the non-real roots.
-
-Since this polynomial looks so symmetric, I think factoring it might help. We have that $8(x^4-2x^3+2x^2-x) = 8x(x-1)(x^2-x+1)= -k$. Then I'm not sure how to work with the non-real root part, but I think that $(x^2-x+1)$ may have something to do with it.
-
-REPLY [3 votes]: The I-take-no-clever-shortcuts approach:
-I consider the polynomial $p(x)=8x^4-16x^3+16x^2-8x$.
-I try to rewrite $p(x)$ as something of the form $d(ax^2+bx+c)^2$ plus a constant, because if it is so, I know how to find the roots. The $d$ coefficient is there only because I like the leading coefficient $a$ to be rational, and $8$ is not a perfect square, but $8/2=4=2^2$ is. So let me assume $d=2$, which amounts to considering the coefficients of $p(x)/2$.
-Expanding $(ax^2+bx+c)^2$ I find
-$$a^2x^4 + 2 a b x^3 +(b^2 + 2 a c) x^2+2 b c x+c^2$$
-and it is not difficult to see whether there is a choice (actually there are two, with opposite signs) of $a,b,c$ that makes the above polynomial equal to the source one, up to the constant term (which we will send to the right anyway).
-Indeed we can immediately find that $a=\pm 2$.
Let me try with $a=2$. From that I derive (from the coefficient of $x^3$) that $b=-2$ and then $c=1$.
-All in all we have obtained $p(x)=2(2x^2-2x+1)^2 -2$, and thus
-$p(x)=-k$ can be rewritten as
-$$
-2(2x^2-2x+1)^2 = 2-k
-$$
-Now, it is not difficult to work out an expression for the roots of this equation. You simply consider the two equations (one for each sign)
-$$
-2x^2-2x+1 = \pm \sqrt{1-k/2}
-$$
-and solve for $x$ using the classic formula. At the end you obtain four solutions, namely
-$$
-x_{1,2,3,4}=\frac{1}{2}\left(1\pm\sqrt{-1\pm\sqrt{4-2k}}\right)
-$$
-From here, it is easy to answer the questions of the problem.<|endoftext|>
-TITLE: Proving dimension formula in linear algebra
-QUESTION [5 upvotes]: Let $V$ and $W$ be finite dimensional vector spaces and let $T:V \to W$ be a linear transformation.
-(a) Prove that if $\dim(V) < \dim(W)$ then $T$ cannot be onto.
-(b) Prove that if $\dim(V) > \dim(W)$ then $T$ cannot be one-to-one.
-
-What I tried:
-(a) Proving by contradiction. Suppose that $T$ is onto. Then, since we are also given that $T$ is linear, $T$ has to be one-to-one. Thus $T$ is both one-to-one and onto, which means $\dim(V) = \dim(W)$, contradicting the fact that $\dim(V) < \dim(W)$.
-(b) Again proving by contradiction, suppose that $T$ is one-to-one. Then we know that $\dim N(T) = 0$.
-And since $\dim R(T) + \dim N(T) = \dim(V)$, this makes $\dim R(T) = \dim(W)$, and thus $V$ maps onto $W$, which contradicts the fact that $\dim(V) > \dim(W)$ and hence proves the statement.
-Is my proof correct? Could anyone explain? Also, could anyone show me how to do the proof directly instead of using contradiction? Thanks
-
-REPLY [2 votes]: You don't need contradiction.
-Suppose $\dim V<\dim W$; then
-$$
-\dim R(T)=\dim V-\dim N(T)<\dim W-\dim N(T)\le\dim W
-$$
-so $\dim R(T)<\dim W$ and $T$ is not onto.
-Suppose $\dim V>\dim W$; then
-$$
-\dim N(T)=\dim V-\dim R(T)>\dim W-\dim R(T)\ge0
-$$
-so $\dim N(T)>0$ and $T$ is not one-to-one.
-
-What about your proofs? The fact that an onto linear map is also one-to-one is valid only if domain and codomain have the same dimension. So you can't use the fact that $T$ is one-to-one in the first attempt.
-The second attempt is likewise affected by the wrong assumption that a one-to-one linear map is onto, which again is only valid if domain and codomain have the same dimension.<|endoftext|>
-TITLE: A Compact Hausdorff Space with no Manifold Structure?
-QUESTION [5 upvotes]: What is an example of a compact Hausdorff space that cannot be given the structure of a
-(i) differential manifold
-(ii) topological manifold?
-
-REPLY [3 votes]: The interval $[0,1]$ is a compact Hausdorff space which doesn't carry the structure of a manifold without boundary. (Of course, it carries the structure of a manifold with boundary.)<|endoftext|>
-TITLE: Is composition of regular epimorphisms always regular?
-QUESTION [8 upvotes]: In a finitely complete and cocomplete category, does it always hold that the composition of two regular epimorphisms is regular? And if it's not the case, what kind of additional constraints can make it true (say, a pre-abelian category)?
-What I already knew is that it holds for categories where regular epimorphisms and strong epimorphisms coincide.
-
-REPLY [6 votes]: My standard example of a category where regular epimorphisms are not closed under composition is the category $\mathbf{Cat}$ of small categories.
-
-Let $\mathbb{2}=\{0\to 1\}$ be the category with two objects and one non-identity morphism between them, and let $F:\mathbb{2}\to\mathbb{N}$ be the functor sending this morphism to $1$, where $\mathbb{N}$ is the additive monoid of natural numbers, viewed as a 1-object category.
-Let $G:\mathbb{N}\to\mathbb{Z}$ be the inclusion of additive monoids, viewed as a functor between the associated 1-object categories, and let $H: \mathbb{Z}\to\mathbb{Z}/2\mathbb{Z}$ be the quotient map, again viewed as a functor between 1-object categories.
-Then $F$ and $H\circ G$ are regular epis in $\mathbf{Cat}$, but $H\circ G\circ F$ is not.<|endoftext|>
-TITLE: Convergence and value of infinite product $\prod^{\infty}_{n=1} n \sin \left( \frac{1}{n} \right)$?
-QUESTION [8 upvotes]: Since $\frac{\sin(x)}{x}\to1$ for $x \rightarrow 0$, I wondered about the infinite product:
-$$\prod^{\infty}_{n=1} n \sin \left( \frac{1}{n} \right)=\sin(1) \cdot 2 \sin\left( \frac{1}{2} \right) \cdot 3 \sin\left( \frac{1}{3} \right) \dots$$
-By numerical experiment in Mathematica it seems to converge, even if very slowly (I mean, to a non-zero value):
-$$P(14997)= 0.755371783$$
-$$P(14998)= 0.755371782$$
-$$P(14999)= 0.755371782$$
-$$P(15000)= 0.755371781$$
-I can prove the convergence by the integral test for the series:
-$$\sum^{\infty}_{n=1} \ln\left( n \sin \left( \frac{1}{n} \right) \right)$$
-$$\int^{\infty}_{1} \ln\left( x \sin \left( \frac{1}{x} \right) \right) dx=\int^{1}_{0} \frac{1}{y^2} \ln \left( \frac{\sin (y)}{y} \right) dy=-0.168593$$
-I think the integral test can work with a negative function as long as it's monotone; otherwise I can just put a minus sign before the infinite sum.
-By the way, this is a related question about the convergence of the sum above.
-But I'm more interested in the infinite product itself.
-I'm not sure if the value of this infinite product can be found and how to go about it. Is it zero or not? Any thoughts would be appreciated
-
-REPLY [3 votes]: Alright, this is another answer, a much better one.
-We can evaluate this product numerically with excellent precision if we get it into a better form.
-$$P=\prod^{\infty}_{n=1} n \sin \left( \frac{1}{n} \right)=\prod^{\infty}_{n=1} \prod^{\infty}_{k=1} \left(1- \frac{1}{\pi^2 n^2 k^2} \right)$$
-Now we take the logarithm of the product:
-$$\ln P=\sum^{\infty}_{n=1} \sum^{\infty}_{k=1} \ln \left(1- \frac{1}{\pi^2 n^2 k^2} \right)=-\sum^{\infty}_{n=1} \sum^{\infty}_{k=1}\sum^{\infty}_{l=1}\frac{1}{l~\pi^{2l} n^{2l} k^{2l}}=-\sum^{\infty}_{l=1}\frac{\zeta (2l)^2}{l~\pi^{2l}}$$
-This last single sum Mathematica computes with great precision, so we can write:
-$$\ln P=-0.280556336229155079602039680939198362173$$
-And the product is:
-
-$$P=\exp \left(-\sum^{\infty}_{l=1}\frac{\zeta (2l)^2}{l~\pi^{2l}} \right)=0.75536338851857321406336498617047655360$$
-
-By the same logic we also have:
-
-$$P_1=\prod^{\infty}_{n=1} n \sinh \left( \frac{1}{n} \right)=\exp \left(-\sum^{\infty}_{l=1}\frac{(-1)^l \zeta (2l)^2}{l~\pi^{2l}} \right)=1.307970936664283649012104476$$<|endoftext|>
-TITLE: The knowledge of $n=n(s)$ can be used to determine the curvature $k(s)$ and the torsion $\tau (s)$
-QUESTION [7 upvotes]: Question:
-
-Show that the knowledge of the vector function $n=n(s)$ of a curve $\alpha:I\rightarrow \mathbb{R^3}$ with nonzero torsion everywhere determines the curvature $k(s)$ and the torsion $\tau (s)$ of $\alpha$.
-
-Note: $n$ is the unit normal vector of $\alpha$.
-
-Attempt: I tried using the Frenet-Serret formulas, and then using the vector product between $n$ and $n'$, but it seems like I can't get to any result.
-
-REPLY [3 votes]: The question as posed here is not solvable (as is indicated by the comments in the answer above). One needs the knowledge of the function $\frac{\kappa}{\tau}$ at one point $t_0$.
-Let's give a counterexample:
-Consider the helix
-$$ c(s) := (a \cdot \cos(s) , a \cdot \sin(s) , b \cdot s) \text{ for } s \in \mathbb{R}$$
-with $a^2 +b^2 = 1$, $a,b>0$. Then $c(s)$ is parametrized by arclength.
-The normal vector is given by
-$$ n(s) = (- \cos(s) , -\sin(s), 0) \text{ for all } s\in \mathbb{R}. $$
-One has in general $\kappa=a, \tau= -b$ and $\frac{\kappa}{\tau}=-a/b$.
-The choices $a_1=b_1= 1/\sqrt{2}$ and $a_2= 1/2 ,b_2= \sqrt{3}/2$ give two different curves parametrized by arclength. Both curves have the same normal vector for all times and non-vanishing torsion. And both curves have different curvature and torsion.<|endoftext|>
-TITLE: Why is the image of an algebraic group by a morphism also an algebraic group?
-QUESTION [6 upvotes]: Let $K$ be a field and $G\subset K^m$ an (affine) algebraic group.
-If $\varphi:G\rightarrow (K^n,+)$ is a morphism of algebraic groups, why is $\varphi(G)$ an algebraic group?
-I would say, for instance, that $\varphi(G)$ is constructible by Chevalley's Theorem (if $K$ is algebraically closed, isn't it?), and a group, so closed... hence an algebraic variety. But that seems to involve too many technicalities, and does not hold if $K$ is not algebraically closed (does it?). Is there a more direct way to see this?
-
-REPLY [6 votes]: First, Chevalley's theorem holds pretty universally—it holds for any morphism locally of finite presentation (or maybe you need actual finite presentation, I can't remember).
-Anyways, the true reason is fairly sophisticated if you don't assume that your groups are smooth. The reference is then SGA 3 Proposition 1.2, Exposé VIB.
-If your groups ARE smooth (which, for example, is always true in characteristic $0$), then in fact any morphism $f:G\to H$ is a quotient map onto the scheme-theoretic image, which is a smooth group scheme itself. The fact about it being a quotient map and smooth is just basic theory. The hard part is the Closed Orbit Lemma. See here for a (not too rigorous) discussion of the topic.<|endoftext|>
-TITLE: Convergent sequence of irrational numbers that has a rational limit.
-QUESTION [7 upvotes]: Is it possible to have a convergent sequence whose terms are all irrational but whose limit is rational?
-
-REPLY [7 votes]: I will give a more interesting answer (I think the OP wants something like that):
-$$\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+\cdots}}}}=2$$
-Generally,
-$$\sqrt{a+b\sqrt{a+b\sqrt{a+b\sqrt{a+\cdots}}}}=\frac{1}{2}(b+\sqrt{b^2+4a})$$
-It's not hard to find numbers such that $\sqrt{b^2+4a}$ is rational.
-Also:
-$$\sqrt{2}^{\sqrt{2}^{\sqrt{2}^...}}=2$$
-Also, using Euler's continued fraction theorem we can have something like this:
-$$1=\cfrac{\pi^2/9}{2-\pi^2/9+\cfrac{2\pi^2/9}{12-\pi^2/9+\cfrac{12\pi^2/9}{30-\pi^2/9+\cfrac{30\pi^2/9}{56-\pi^2/9+\cdots}}}}$$
-
-Actually, I can do even better. Let $\phi$ be the golden ratio; then we have:
-$$1=\frac{1}{\phi^2}+\frac{1}{\phi^3}+\frac{1}{\phi^4}+\frac{1}{\phi^5}+\cdots=\sum^{\infty}_{k=2}\frac{1}{\phi^k}$$
-But we don't want $e$ to feel left out, so here is another one:
-$$1=\cfrac{e}{e+\frac{1}{e}-\cfrac{1}{e+\frac{1}{e}-\cfrac{1}{e+\frac{1}{e}-\cdots}}}$$
-
-Another good one.
Using the following:
-$$2=e^{\ln 2}=e^{1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\dots}$$
-we obtain an infinite product converging to $2$:
-$$\prod_{k=1}^{\infty} \frac{\sqrt[2k-1]{e}}{\sqrt[2k]{e}} =\frac{e \sqrt[3]{e} \sqrt[5]{e} \sqrt[7]{e} \cdots}{\sqrt{e} \sqrt[4]{e} \sqrt[6]{e} \sqrt[8]{e} \cdots}=2$$<|endoftext|>
-TITLE: The sum of the following infinite series $\frac{4}{20}+\frac{4\cdot 7}{20\cdot 30}+\frac{4\cdot 7\cdot 10}{20\cdot 30 \cdot 40}+\cdots$
-QUESTION [12 upvotes]: The sum of the following infinite series $\displaystyle \frac{4}{20}+\frac{4\cdot 7}{20\cdot 30}+\frac{4\cdot 7\cdot 10}{20\cdot 30 \cdot 40}+\cdots$
-
-$\bf{My\; Try:}$ We can write the given series as $$\left(1+\frac{4}{20}+\frac{4\cdot 7}{20\cdot 30}+\frac{4\cdot 7\cdot 10}{20\cdot 30 \cdot 40}+\cdots\right)-1$$
-Now comparing with $$(1+x)^n = 1+nx+\frac{n(n-1)x^2}{2!}+\cdots$$
-we get $\displaystyle nx=\frac{4}{20}$ and $\displaystyle \frac{n(n-1)x^2}{2}=\frac{4\cdot 7}{20\cdot 30}$
-So we get $$\frac{nx\cdot (nx-x)}{2}=\frac{4\cdot 7}{20\cdot 30}\Rightarrow \frac{4}{20}\cdot \left(\frac{4-20}{20}\right)\cdot \frac{1}{2}x^2=\frac{4}{20}\cdot \frac{7}{30}$$
-But here $x^2$ comes out negative.
-I do not understand how I can solve it.
-Help me, thanks
-
-REPLY [9 votes]: The numerators suggest that you could make use of a power series involving exponents that are rational numbers with denominator $3$.
-$$\begin{align}
-\sum_{n=1}^{\infty}\frac{4\cdot7\cdot\cdots\cdot(3n+1)}{(n+1)!10^n}
-&=\sum_{n=1}^{\infty}\frac{\frac43\cdot\frac73\cdot\cdots\cdot\frac{3n+1}3}{(n+1)!\left(10/3\right)^n}\\
-&=\sum_{n=1}^{\infty}\frac{1}{n+1}\binom{\frac{3n+1}{3}}{n}\left(\frac{3}{10}\right)^n\\
-&=\left[\sum_{n=1}^{\infty}\frac{1}{n+1}\binom{\frac{3n+1}{3}}{n}x^n\right]_{x=3/10}\\
-&=\left[\frac{1}{x}\sum_{n=1}^{\infty}\frac{1}{n+1}\binom{\frac{3n+1}{3}}{n}x^{n+1}\right]_{x=3/10}\\
-&=\left[\frac{1}{x}\int_0^x\sum_{n=1}^{\infty}\binom{\frac{3n+1}{3}}{n}t^{n}\,dt\right]_{x=3/10}\\
-&=\left[\frac{1}{x}\int_0^x\sum_{n=1}^{\infty}\binom{-\frac{4}{3}}{n}(-t)^{n}\,dt\right]_{x=3/10}\\
-&=\left[\frac{1}{x}\int_0^x\left(\left(1-t\right)^{-4/3}-1\right)\,dt\right]_{x=3/10}\\
-&=\left[\frac{1}{x}\left[3\left(1-t\right)^{-1/3}-t\right]_{t=0}^{t=x}\right]_{x=3/10}\\
-&=\left[\frac{1}{x}\left(3\left(1-x\right)^{-1/3}-x-3\right)\right]_{x=3/10}\\
-&=\frac{10}{3}\left(3\left(1-\frac{3}{10}\right)^{-1/3}-\frac{3}{10}-3\right)\\
-&=10\left(\frac{7}{10}\right)^{-1/3}-11\\
-&=\sqrt[3]{\frac{10^4}{7}}-11\\
-\end{align}$$<|endoftext|>
-TITLE: The root system of $sl(3,\mathbb C)$
-QUESTION [7 upvotes]: I want to determine the root system of the Lie algebra $sl(3,\mathbb C)$. Does someone know a good (and complete) reference for this problem?
-I know that the root system is $A_2$ but I want to see a complete proof (calculation) of this.
-Thanks in advance.
-
-REPLY [8 votes]: The root system of $A_2$ consists of $\Phi=\{ \pm \alpha,\pm \beta , \pm (\alpha+\beta) \}$, which can be constructed as follows. With the Cartan subalgebra $\mathfrak{h}_{\mathbb{R}}\cong \mathbb{R}^2$
-and the canonical scalar product on $\mathbb{R}^2$ we can realize the simple roots as
-$$
-\alpha=\begin{pmatrix} -\frac{1}{2} \\ \frac{\sqrt{3}}{2} \end{pmatrix}, \quad
-\beta=\begin{pmatrix} 1 \\ 0 \end{pmatrix}.
-$$
-Then we obviously have $(\alpha,\alpha)=(\beta,\beta)=1$, and in addition
-\begin{align*}
-\langle \alpha, \alpha^{\vee}\rangle & = \frac{2(\alpha,\alpha)}{(\alpha,\alpha)}= 2 ,\\
-\langle \alpha, \beta^{\vee}\rangle & = \frac{2(\alpha,\beta)}{(\beta,\beta)}= -1,\\
-\langle \beta, \beta^{\vee}\rangle & =2,
-\end{align*}
-and we obtain the Cartan numbers for type $A_2$. This shows that the root system of $A_2$ consists of $\Phi=\{ \pm \alpha,\pm \beta , \pm (\alpha+\beta) \}$. For additional information also see the MSE question here, for the case of $B_2$. Of course, the same discussion is valid for type $A_2$.<|endoftext|>
-TITLE: Find (linear) transformation matrix using the fact that the diagonals of a parallelogram bisect each other.
-QUESTION [6 upvotes]: This is the first time I post something on this website. I have been on this question for hours already. I'm clearly not asking the community to do my homework; I'm hoping someone can explain to me how I should solve the following question:
-
-Let $l$ be a line through the origin in $\mathbb{R}^2$, $P_l$ the linear transformation that projects a vector onto $l$, and $F_l$ the transformation that reflects a vector in $l$.
-
-
-Draw diagrams to show that $F_l$ is linear. Diagrams? What does this look like? A standard matrix?
-Figure 3.14 (see image) suggests a way to find the matrix of $F_l$, using the fact that the diagonals of a parallelogram bisect each other. Prove that $F_l(x) = 2P_l(x) - x$, and use this result to show that the standard matrix of $F_l$ is (see image).
-If the angle between $l$ and the positive $x$-axis is $A$, show that the matrix of $F_l$ is (see image).
-
-I attached the question as an image.
-Hopefully you can help.
-Thanks!
-Image: i.stack.imgur.com/vFkmM.jpg
-EDIT: Image shown here:
-
-REPLY [3 votes]: For the first part, the diagram which you should draw is similar to Figure 3.14. Your map $F_\ell : \mathbb{R}^2 \to \mathbb{R}^2$ is linear if $F_\ell(x+y) = F_\ell(x) + F_\ell(y)$ and $F_\ell(cx) = cF_\ell(x)$. Here $x$, $y$ are points in your domain $\mathbb{R}^2$ and $c$ is a scalar. You can check the equalities by drawing the vectors on either side of the two equations I wrote. For example, to check the second equality, you have $x$, $\ell$, and $F_\ell(x)$ already drawn in Figure 3.14. Try drawing $cx$, then $F_\ell(cx)$, and draw $cF_\ell(x)$, and verify that these latter two are equal. You can do something similar for the first equality.
-For part b: I'd prefer to rewrite the equality as $x + F_\ell(x) = 2P_\ell(x)$. Now look at Figure 3.14. How is the left side, $x + F_\ell(x)$, depicted geometrically? How is $2P_\ell(x)$ depicted geometrically? Convince yourself that these vectors are equal.
-Now the standard matrix of a linear map from $\mathbb{R}^2$ to $\mathbb{R}^2$ consists of two columns: the first column says what the map does to the vector $\langle 1, 0 \rangle$, and the second column says what the map does to the vector $\langle 0, 1 \rangle$. You should first write the standard matrix for $P_\ell$ by actually computing the vector projection of each of these two basis vectors onto the vector $\langle d_1, d_2 \rangle$. Now you can use the equality to compute the standard matrix of $F_\ell$.
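-As a quick numerical sanity check of part b (a sketch in Python with numpy; the particular direction and test vectors here are arbitrary choices, not part of the exercise):
-
- import numpy as np
- d = np.array([3.0, 4.0])            # a direction vector for the line l
- P = np.outer(d, d) / d.dot(d)       # standard matrix of the projection P_l
- F = 2 * P - np.eye(2)               # standard matrix of F_l = 2P_l - I
- x = np.array([1.0, 2.0])
- print(F @ x)                        # the reflection of x in l
- print(np.allclose(F @ (F @ x), x))  # reflecting twice returns x: True
-
-For $d = \langle 3, 4\rangle$ this produces the matrix $\frac{1}{25}\begin{pmatrix} -7 & 24 \\ 24 & 7 \end{pmatrix}$, in agreement with the formula from part b.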
-Finally, part c will follow from b if you plug in the direction vector $\langle \cos \theta, \sin \theta \rangle$.<|endoftext|>
-TITLE: Question about proof: Uniform Cauchy $\Rightarrow$ Uniform convergence
-QUESTION [7 upvotes]: I have one quick question regarding the proof of a theorem contained here: https://www.math.ucdavis.edu/~hunter/m125a/intro_analysis_ch5.pdf
-
-Theorem 5.13. A sequence $(f_n)$ of functions $f_n : A → R$ converges
- uniformly on $A$ if and only if it is uniformly Cauchy on $A$.
-
-
-Question:
-In the triangle inequality part
-$|f_n(x) - f(x)| \leq |f_n(x) - f_m(x)| + |f_m(x) - f(x)|$
-Why do we know that $f_m(x) \to f(x)$ as $m \to \infty$ so that $|f_m(x) - f(x)| < \frac{\epsilon}{2}$
-I mean, isn't $|f_m(x) - f(x)|$ basically the same as $|f_n(x) - f(x)|$ on the left hand side? Just one is indexed by $m$ instead of $n$? Why do we know that $|f_m(x) - f(x)| < \epsilon$ but we do not know immediately whether $|f_n(x) - f(x)| < \epsilon$
-
-REPLY [7 votes]: Another way to look at it is as follows. This circumvents any consideration of the dependence of $m$ on $x$ which, clearly, is confusing.
-It can be shown that for $n,m > N$ we have, for all $x \in A,$
-$$|f_n(x) - f(x)| < \epsilon + |f_m(x) - f(x)|.$$
-Hence,
-$$|f_n(x) - f(x)| -\epsilon < |f_m(x) - f(x)|.$$
-With $x$ fixed, we take the limit of both sides as $m \to \infty$. Since $f_m(x)$ converges pointwise to $f(x)$ and the LHS does not depend on $m$ it follows that for all $x \in A$ and $n > N$,
-$$ |f_n(x) - f(x)| -\epsilon = \lim_{m \to \infty}\left(|f_n(x) - f(x)| -\epsilon\right) \leqslant \lim_{m \to \infty}|f_m(x) - f(x)| = 0 \\ \implies |f_n(x) - f(x)| \leqslant\epsilon .$$
-Here we have used the following lemma which you can easily prove by contradiction:
-$$a_m > a \, ,\forall m \in \mathbb{N} \implies \lim_{m \to \infty}a_m \geqslant a. $$<|endoftext|>
-TITLE: What does $-p \ln p$ mean if p is probability?
-QUESTION [7 upvotes]: In statistical mechanics entropy is defined with the following relation:
-$$S=-k_B\sum_{i=1}^N p_i\ln p_i,$$
-where $p_i$ is probability of occupying $i$th state, and $N$ is number of accessible states. I understand easily what probability is: for a frequentist it's just the average frequency of getting this result. But I have a hard time trying to intuitively understand what $-p_i \ln p_i$ means. In the case where $p_i=p_j\; \forall i\ne j$ this reduces to $\ln N$, i.e. logarithm of number of accessible states.
-But in general case of unequal probabilities, what does $-p_i\ln p_i$ really represent? Is it some sort of "(logarithm of) average number of accessible states"? Or maybe it's more useful to try to understand what $p_i^{p_i}$ is (but this seems even harder)?
-
-REPLY [5 votes]: Let's say you wanted to compress the results of a sequence of independent trials into a sequence of bits.
-Then the "ideal" encoding of the result of the trials would have $-\log_2 p_i$ bits for event $i$. This is in the limit, as the number of trials approaches infinity.
-Now, what is the expected number of bits per trial? Then, since it is $-\log_2 p_i$ with probability $p_i$, the result is $-\sum p_i\log_2 p_i$. That is, if you want to encode $N$ occurrences of this event, you are going to require, on average, $-N\sum p_i\log_2 p_i$ with your absolutely best encoding.
-You can see this most ideally when the $p_i$ are all of the form $\frac{1}{2^{k_i}}$.
-For example, if $p_1=1/2, p_2=1/4, p_3=1/4$, then an "ideal" encoding has '0' for event $1$, $10$ for event $2$, and $11$ for event $3$.
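-A small numerical check of this example (a sketch in Python with numpy; the code lengths are just those of the codes '0', '10', '11' above):
-
- import numpy as np
- p = np.array([0.5, 0.25, 0.25])    # probabilities of the three events
- bits = np.array([1, 2, 2])         # lengths of the codes '0', '10', '11'
- print(np.dot(p, bits))             # average bits per trial: 1.5
- print(-np.sum(p * np.log2(p)))     # entropy -sum p_i log2 p_i: 1.5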
Then the expected bits per trial is $\frac{1}{2}\cdot 1 + \frac{1}{4}\cdot 2+\frac{1}{4}\cdot 2 = -\sum p_i\log p_i=\frac{3}{2}$. This means, with $N$ trials of this sort, the expected number of bits to store the results will be $\frac{3}{2}N$.
-So entropy is also part of what mathematicians call "information theory." That is, the entropy of a system tells you how much (expected) information is needed to describe the results.
-Now, if your probabilities are not so nice, then you'd have to encode smarter. For example, if $p_1=p_2=p_3=\frac{1}{3}$, then you wouldn't get "ideal" storage by storing the values one at a time. But, say, if you took five bits at a time, you could store three results, since in $5$ bits, there are $32$ values, and thus you could store any of $27$ results of each roll. In $8$ bits, you can store the result of $5$ trials. In $m$ bits, you can store $\log_3(2^m)$ results. So to store $n$ results, you need $m$ bits with $\log_3(2^m)\geq n$, which is $$m\geq \frac{n}{\log_3 2} = n\log_2 3 = -n\sum p_i\log_2 p_i$$
-So $-p_i\log p_i$ is not really the significant thing. The significant thing is storing the result $i$ in $-\log p_i$ bits. In general, if you stored event $i$ as (an average of) $b_i$ bits, then the "expected" number of bits in a single trial would be:
-$$\sum p_ib_i$$
-It's just that the ideal storage, which minimizes the expected number of bits for a huge number of trials, is $b_i=-\log p_i$.<|endoftext|>
-TITLE: Stochastic Integration with respect to Cauchy Process?
-QUESTION [7 upvotes]: I'm interested in a one-dimensional stochastic process:
-$$dX_t = f(X_t)dt + g(X_t) dZ_t$$
-where $Z_t$ is a Cauchy process and $f,g$ are nice polynomials (I'm looking at an ODE that gets perturbed by noise but where the noise has large tails). $Z_t$ has stationary, independent increments with the Cauchy distribution:
-$$P\left(Z_{t+s} - Z_s \in dx \right) = \frac{t}{\pi(x^2+t^2)}dx$$
-It is known that $Z_t$ is a Lévy process and hence a semimartingale, so I can potentially use Itô's Lemma to prove nice things about the process $X_t$.
-A few questions:
-
-Is it possible to compute $E\left[\int_0^t g(X_s)dZ_s\right]$? It would be nice if this is $0$ but since $Z_t$ follows a Cauchy distribution with undefined expectation, I doubt it will be $0$
-Is it possible to compute the quadratic variation $[dZ,dZ]_t$?
-Since the tails may be too large, perhaps it is better to use a distribution with finite first and second moments. I believe this one may work: $\frac{t\sqrt2}{\pi (x^4+t^4)} dx$. Thus I'd still have "large" tails but with a few finite moments. Is it possible to compute the expectation of a stochastic integral w.r.t. this process and its quadratic variation? (Does this distribution have a name?)
-
-REPLY [2 votes]: First, you need to read a book on Lévy processes.
-Note that you should write $g(X_{t-})dZ_t$ unless you are considering left-continuous processes (which you shouldn't do normally).
-
-The expectation will always(?) be undefined unless the integrand is identically zero. The reason, heuristically, is that $$E[\int_0^t g(X_{s-}) dZ_s] = \int_0^t E[g(X_{s-}) dZ_s] = \int_0^t E[g(X_{s-})] E[dZ_s]$$
-since $g(X_{s-})$ is $\mathcal F_{s-}$-measurable, while $dZ_s$ is independent of $\mathcal F_{s-}$. But the inner expectation is undefined.
-Yes, it is possible. It will be equal to the sum of squares of its jumps.
This does not sound too exciting, but it becomes more interesting if you consider the quadratic variation (on interval $(0,t)$) as a process. Then this is an increasing Lévy process (subordinator), which is $1/2$-stable.
-This won't work, since this density won't define a Lévy process. But there are many other possibilities. For instance, you can take a stable Lévy motion with stability index $\alpha>1$. Then it will have a finite expectation but infinite variance. You'll find more examples in books.<|endoftext|>
-TITLE: Is there a classical analog of Bloch's theorem?
-QUESTION [6 upvotes]: In quantum mechanics, having a spatially periodic Hamiltonian imposes a lot of structure on solutions of Schrödinger's equation (e.g. band structure), primarily due to Bloch's theorem. In perfect analogy, ODE's with periodicity in time have structure, as described by Floquet theory. Is there anything analogous for classical systems (dynamical systems) which are periodic in space rather than time? So, for example, if a system consists of a ball rolling in a periodic landscape, with a potential like $V(x,y)=\sin(x)+\sin(y)$, are there theorems that allow one to deduce anything interesting about the trajectory of the ball from the periodicity of the potential?
-
-REPLY [2 votes]: A partial answer: yes (there are such possibilities) / no (probably not in the way the OP had in mind):
-The linearity of equations and the representation theory of translations are fundamental to the theories of Bloch and Floquet. In the case of Bloch, translations along a lattice vector $u$ commute with the Hamiltonian:
- $$ T_u \psi(x)=\psi(x+u), \; \; T_u \circ H = H\circ T_u$$
-which makes possible the simultaneous diagonalisation of $T_u$ and $H$. Requiring solutions to be uniformly bounded (though not in $L^2$ globally, a small caveat) means that diagonalization of $T_u$ is done by unitary representations, thus yielding the Bloch wave-decomposition having factors of the form $e^{i \mathbf k\cdot \mathbf r}$ and its sequel.
-In the case of Floquet, one considers a linear (once again) ode on a Banach space $E$, which may be written as
-$$\left( \frac{d}{dt} - A(t) \right) x = 0$$
-When the bounded linear operator $A$ is periodic then $A$ commutes with a time translation of period $\tau$, leaving possible the simultaneous diagonalization. This time, there being no constraint of global boundedness, the representations of $T_\tau$ are of the form $M_\tau\in {\rm GL}(E)$, which gives rise to the usual Floquet theory where solutions verify relations like $x(t+\tau)=M_\tau x(t)$.
-When an ode is nonlinear $\dot{x}-f(t,x)=0$ but with lattice periodicity in $x$ then there is no nice commutation that comes to mind (at least to mine) with consequences for solutions of the ode (if $f$ is linear and periodic in some direction, then it is constant in that direction; not so interesting). If you take e.g. the chaotic behavior of solutions to the Sinai billiard (ode on a square with a convex obstacle), there seems to be little symmetry in the general solutions.
-However, if you look at time-evolution of measures (or densities) under the flow of a (non-linear periodic) ode, then very likely there are Bloch/Floquet-like results for eigenvalues of the now linear evolution operator. I searched but failed to find a reasonable example for a flow; there are at least such results in that direction for composition maps with a lattice symmetry.
An example is Floquet spectrum for weakly coupled map lattices.<|endoftext|>
-TITLE: Determining when $\int_{0}^{\infty} \cos(\alpha x) \prod_{m=1}^{n} J_{0}(\beta_{m} x) \, \mathrm dx =0$ without using contour integration
-QUESTION [11 upvotes]: Let $J_{0}(z)$ be the Bessel function of the first kind of order zero, and assume that $\alpha$ and $\beta_{m}$ are positive real parameters.
-$J_{0}(z)$ is an even function that is real-valued along the real axis.
-And when $z$ approaches infinity at a constant phase angle, $J_{0}(z)$ has the asymptotic form $$J_{0}(z) \sim \sqrt{\frac{2}{\pi z}} \cos \left(z-\frac{\pi}{4} \right), \quad |\arg(z)| < \pi. $$
-So by integrating the entire function $$e^{i \alpha z} \prod_{m=1}^{n} J_{0}(\beta_{m}z) , \quad \sum_{m=1}^{n} \beta_{m} < \alpha,$$ around a contour that consists of the real axis and the infinitely large semicircle above it, it would seem to follow that $$\int_{0}^{\infty} \cos(\alpha x) \prod_{m=1}^{n} J_{0}(\beta_{m} x) \, \mathrm dx =0 \, , \quad \sum_{m=1}^{n} \beta_{m} < \alpha. \tag{1} $$
-(For the cases $n=1$ and $n=2$, you would need to appeal to Jordan's lemma.)
-
-Is there a way to prove $(1)$ that doesn't involve contour integration?
-
-EDIT:
-A similar approach also shows that $$\int_{0}^{\infty} \frac{\cos(\alpha x)}{1+x^{2}} \prod_{m=1}^{n} J_{0}(\beta_{m} x) \, \mathrm dx = \frac{\pi e^{-\alpha} }{2} \prod_{m=1}^{n}I_{0}(\beta_{m}), \quad \sum_{m=1}^{n} \beta_{m} \le \alpha,$$ where $I_{0}(z)$ is the modified Bessel function of the first kind of order zero.
-
-REPLY [6 votes]: It's because the Fourier transform of $\mathrm{J}_0$ vanishes outside $[-1,1]$.
-Let $I$ be the integral
-$$ \def\J{{\mathrm{J}_0}}\def\dd{{\,\mathrm{d}}}\def\ii{{\mathrm{i}}}\def\ee{{\mathrm{e}}}
-I(\alpha) = \int_0^\infty \cos\alpha x\prod_k \J(\beta_k x)\,\dd x. $$
-I will use the integral representation
-$$ \J(x) = \int_0^\pi \cos(x\sin\theta) \frac{\dd\theta}{\pi} = \int_0^1 \cos(x u)\frac{2\dd u}{\pi\sqrt{1-u^2}} $$
-together with the Fourier transform of the Heaviside step function in the form
-$$ \int_0^\infty e^{\ii ax}\,\dd x = \text{P.V.}\frac{\ii}{a} + \pi\delta(a). $$
-Expanding each Bessel function, we get
-$$ I(\alpha) = \int_0^\infty\dd x\int_0^1 \Big( \prod_k \frac{2\dd u_k}{\pi\sqrt{1-u_k^2}}\Big) \cos\alpha x \prod_k \cos(\beta_k x u_k). $$
-Now expand each cosine as $\cos x = \frac12(\ee^{\ii x} + \ee^{-\ii x})$:
-$$ \cdots = \int_0^\infty \dd x\int_0^1 \Big( \prod_k \frac{2\dd u_k}{\pi\sqrt{1-u_k^2}} \Big) \sum_{s\in\{\pm1\}^{n+1}} 2^{-n-1} \exp\Big( \ii s_0\alpha x + \sum_k \ii s_k \beta_k u_k x \Big), $$
-where the sum is taken over all $2^{n+1}$ choices of signs $s_0,\ldots,s_n = \pm1$ that come from expanding the cosines in exponentials.
-The integral over $x$ now can be done directly, as above:
-$$ \cdots = \frac{1}{2\pi^{n-1}} \int_0^1 \Big( \prod_k \frac{\dd u_k}{\sqrt{1-u_k^2}} \Big) \sum_{s\in\{\pm1\}^{n+1}} \delta\Big(s_0\alpha + \sum_k s_k \beta_k u_k \Big). $$
-(The imaginary part has to vanish so only the $\delta$ term remains.)
-This makes it clear why the integral vanishes: the integral representation of the $n$ Bessel functions integrates over the $n$-cube $[0,1]^n$, but the $2^{n+1}$ hyperplanes
-$$ s_0\alpha + \sum_k s_k \beta_k u_k = 0 $$
-do not intersect this cube at all when
-$$ \sum_k \beta_k < |\alpha|.
$$<|endoftext|>
-TITLE: Smooth sawtooth wave $y(x)=\cos(x-\cos(x-\cos(x-\dots)))$
-QUESTION [11 upvotes]: Consider an infinite recursive function
-$$y(x)=\cos(x-\cos(x-\cos(x-\dots)))$$
-$$y=\cos(x-y)$$
-Plotting the function $y(x)$ implicitly we get a smooth sawtooth-like wave:
-
-
-Has this function been studied before? For example, its derivative, Fourier series or other properties. It may be useful in electronics or other applications.
-
-I can find the expression for its derivative in terms of $y$ and $x$, but I do not know how to plot it effectively (without tabulating it by the recursive formula for $y$).
-$$y'(x)=-\frac{\sin(x-y)}{1-\sin(x-y)}$$
-We obtain the correct expression for the maxima and minima of the function. (Note that the first positive maximum is at $x=1$ and the first minimum is at $x=\pi-1$, which is confirmed by numerical computation).
-Since the function is smooth, the derivative should be finite everywhere. Does this mean that the denominator in this expression can never be equal to $0$?
-By the way, the recursive formula itself is not a fluke - I checked its convergence numerically in Mathematica and it seems to converge for all values. However, for the values close to the 'vertical lines' the convergence is slower than for the rest.
-Edit
-The function is not smooth, because its derivative is not defined at $x=\frac{\pi}{2}+2\pi n$ and $y=0$ as cardboard_box pointed out and as can be seen from the expression. Oh, it's just vertical tangents apparently
-Update
-I wanted to illustrate the very helpful answers and comments I've been given so far.
-First, credit goes to Lucian for introducing me to the Clausen function (see the first comment below). However, this function does not have vertical tangents anywhere, as you can see in the graph (I plotted Clausen using its Fourier series, since it's easier than the integral definition):
-
-So, the function I defined here is much more "sawtooth-like".
-And finally, davik gave the parametric form for the function and figured out its Fourier series, which I plot below:
-
-You may notice I displaced and reversed the original function when plotting these graphs. We can also use sine instead of cosine if needed.
-Now the final question that I wanted to ask:
-
-Has anyone seen this function anywhere? Maybe it's worth it to make a complete description of its properties and publish it somewhere?
-
-REPLY [6 votes]: Note that this function's graph can be parametrized as $ x(t) = t + \cos(t)$ and $ y(t) = \cos (t)$.
-So, we can pretty much use this to find most interesting things you may want to know. First, it is a shear of the graph of $ y = \cos(x)$ by the matrix
-$$
-\pmatrix{1 & 1 \\ 0 & 1}
-$$
-which tells you the zeros and maxima. Also, if you want vertical tangents, since $y'(t) = - \sin(t);~~ x'(t) = 1 - \sin (t)$, you have vertical tangents at $\frac{\pi}{2} + 2\pi k$.
-If we shift this to the right by $\frac{\pi}{2}$, then by quite straightforward computation, together with the definition of the Bessel function, we get that the $n$th Fourier sine series coefficient is
-$$b_n = \frac{2(-1)^nJ_n(n)}{n}$$ where $J_n(x)$ is the Bessel function of the first kind.<|endoftext|>
-TITLE: Vakil's definition of smoothness -- what happens at non-closed points?
-QUESTION [9 upvotes]: The following is definition 12.2.6 in Vakil's notes.
-A $k$-scheme is $k$-smooth of dimension $d$, or smooth of dimension
- $d$ over $k$, if it is pure dimension $d$, and there exists a cover by
- affine open sets $\operatorname{Spec} k[x_1,\dots , x_n]/(f_1,\dots, f_r)$ where the Jacobian matrix has corank $d$ at all points.
-
-The Jacobian matrix at a point $p$ is the usual matrix of partials evaluated at $p$.
-At closed points, the quotient ring is a field, so we have a map of vector spaces and the usual definition of rank applies. For a general prime $P$, what is meant here? For Vakil, a ring element $f\in R$ has the value $f\in R/P$ at the prime $P$. But in this general case we have only a map of domains, not fields, and it's not clear to me what is meant by corank here.
-One way to interpret this is that certain minors vanish, but I'm not sure if that was what Vakil intended. He explicitly defines "corank" as the dimension of the cokernel.
-
-REPLY [2 votes]: The Jacobian criterion does not work on non-closed $\mathbb{K}$-points of a $\mathbb{K}$-scheme locally of finite type; or it does not work as we know it!
-Let $X$ be a scheme over a field $\mathbb{K}$, locally of finite type; that is:
-\begin{gather}
-\forall P\in X,\,\exists U\subseteq X\,\text{open,}\,t_1,\dots,t_n\,\text{indeterminates,}\,f_1,\dots,f_r\in\mathbb{K}[t_1,\dots,t_n]=R,\\(f_1,\dots,f_r)=I:
-P\in U\simeq\operatorname{Spec}R_{\displaystyle/I}\cong V(I)\subseteq\mathbb{A}^n_{\mathbb{K}}\,\text{affine closed subscheme}
-\end{gather}
-therefore
-\begin{equation}
-\forall P\in U\subseteq X,\,T_PX\cong T_PU\cong T_PV(I)\leq T_P\mathbb{A}^n_{\mathbb{K}}=T_{\mathfrak{m}_P}\mathcal{O}_{\mathbb{A}^n_{\mathbb{K}},P}=\left(\mathfrak{m}_{P\displaystyle/\mathfrak{m}_P^2}\right)^{\vee}\cong\kappa(P)^n
-\end{equation}
-where $\mathfrak{m}_P$ is the maximal ideal of the local ring $\mathcal{O}_{\mathbb{A}^n_{\mathbb{K}},P}$ and $\kappa(P)$ is the relevant residue field.
-If $P$ is a closed point of $X$, then $P$ is a closed point in $U$; let $\mathfrak{p}$ be the maximal ideal of $R$ corresponding to $P$, and let $\varphi:\mathbb{K}[s_1,\dots,s_r]\to\mathbb{K}[t_1,\dots,t_n]$ be the morphism of $\mathbb{K}$-algebras such that
-\begin{equation}
-\forall i\in\{1,\dots,r\},\,\varphi(s_i)=f_i;
-\end{equation}
-then:
-\begin{gather*}
-\operatorname{coker}\varphi=R_{\displaystyle/I},\\
-\varphi^{*}:\mathbb{A}^n_{\mathbb{K}}\to\mathbb{A}^r_{\mathbb{K}},\\
-\ker\varphi^{*}=V(I);
-\end{gather*}
-let $\varphi_0=\varphi_{\varphi^{*}(P)}:\mathbb{K}[s_1,\dots,s_r]_{(s_1,\dots,s_r)}\to\mathbb{K}[t_1,\dots,t_n]_{\mathfrak{p}}$, by definition:
-\begin{equation}
-(d_P\varphi_0)^{\vee}:\left(T_O\mathbb{A}^r_{\mathbb{K}}\right)^{\vee}\to \left(T_P\mathbb{A}^n_{\mathbb{K}}\right)^{\vee}
-\end{equation}
-and in particular:
-\begin{equation}
-\left(T_PV(I)\right)^{\vee}=\operatorname{coker}(d_P\varphi_0)^{\vee}\Rightarrow T_PV(I)=\ker d_P\varphi_0;
-\end{equation}
-where $(d_P\varphi_0)^{\vee}$ is a $\mathbb{K}$-linear map and $T_P\mathbb{A}^n_{\mathbb{K}}\cong\kappa(P)^n\cong\mathbb{K}^m$ by Hilbert's (Strong) Nullstellensatz (see also REMARK1).
-By Hilbert's Basis Theorem, $\mathfrak{p}$ is finitely generated, therefore:
-\begin{gather}
-\mathfrak{p}=(e_1,\dots,e_m)\\
-\forall i\in\{1,\dots,r\},\,(d_P\varphi_0)^{\vee}(\overline{s_i})=\overline{f_i}=\sum_{j=1}^ma_i^j\overline{e_j},\,\text{where:}\,a_i^j\in\mathbb{K};
-\end{gather}
-but the $a_i^j$ need not make sense as the formal derivative of $f_i$ with respect to the element $e_j$ computed at $P$, unless $P$ is a closed $\mathbb{K}$-point of $X$! (Jump to UPDATE.)
-Again: is it all clear?
-I repeat: definition 12.2.6 is completed by exercise 12.2.H, and it is true that this definition is intricate and in appearance unsatisfactory; but this definition is completely right and it works on $\mathbb{K}$-schemes locally of finite type.
-REMARK1: If $P$ is not a closed point of $X$, that is, not a closed point of $\mathbb{A}^n_{\mathbb{K}}$, we can't apply the Hilbert (Strong) Nullstellensatz.
-UPDATE: By a base change, we can define $\overline{\varphi}:\mathbb{F}[s_1,\dots,s_r]\to\mathbb{F}[t_1,\dots,t_n]$ where $\mathbb{F}=\kappa(P)$; we can repeat the same reasoning described for $\varphi$ and we can prove that:
-\begin{equation}
-T_P(V(I)/\operatorname{Spec}\mathbb{F})=\ker d_P\overline{\varphi}_0
-\end{equation}
-where the notation is clear.
-Because $P$ is a closed $\mathbb{F}$-point of $\mathbb{A}^n_{\mathbb{F}}$, then:
-\begin{equation}
-\exists\alpha_1,\dots,\alpha_n\in\mathbb{F}\mid\mathfrak{p}=(t_1-\alpha_1,\dots,t_n-\alpha_n)
-\end{equation}
-and therefore, via a formal Taylor series of the $f_i$'s,
-\begin{gather}
-\forall i\in\{1,\dots,r\},\,(d_P\varphi_0)^{\vee}(\overline{s_i})=\overline{f_i}=\dots=\sum_{j=1}^n\frac{\partial f_i}{\partial t_j}\bigg|_{(t_1-\alpha_1,\dots,t_n-\alpha_n)}\left(\overline{t_j-\alpha_j}\right)
-\end{gather}
-that is, $T_P(V(I)/\operatorname{Spec}\mathbb{F})$ is the kernel of the linear map from $\mathbb{F}^r$ to $\mathbb{F}^n$ described by the Jacobian matrix of the $f_i$'s with respect to the $t_j$'s, with entries in $\mathbb{F}$ and evaluated at $P$.
-REMARK2: In general $T_PV(I)$ and $T_P(V(I)/\operatorname{Spec}\mathbb{F})$ are not isomorphic as $\mathbb{F}$-vector spaces; see exercise 6.3 from Görtz and Wedhorn - Algebraic Geometry I, Schemes With Examples and Exercises.
-EDIT: The numbering refers to the December 29, 2015 version of FOAG.<|endoftext|>
-TITLE: How do conformal maps affect curvature?
-QUESTION [8 upvotes]: Let $(\overline{M}^{n+1}, \langle \cdot, \cdot \rangle)$ be a Riemannian manifold with Riemannian connection $\overline{\nabla}$ and consider $M^n \subset \overline{M}$ an orientable hypersurface with unit normal vector field $\nu: M \to T \overline{M}$. Given a conformal diffeomorphism $f: \overline{M} \to \overline{M}$ say, with conformal factor $\mu^2 \in C^{\infty}(\overline{M}, \mathbb{R}_+^*)$, i.e.,
-\begin{align*}
-\langle Df(p) \cdot v_1, Df(p) \cdot v_2\rangle = \mu^2(p) \langle v_1, v_2 \rangle, \quad \forall p \in \overline{M}, \, \forall v_1, v_2 \in T_p \overline{M},
-\end{align*}
-how can we relate the principal curvatures of $M$ at a point $p$ with those of $f(M)$ at the point $f(p)$?
-If $\mu = 1$, i.e., if $f$ is an isometry, can we say that the corresponding principal curvatures are equal?
-
-REPLY [2 votes]: I'm doubtful that there's a simple relationship between the principal curvatures. Let's compare the shape operators.
First, observe that the unit normal vector field $\tilde{\nu}_{f(p)}$ to $f(M)$ at $f(p)$ is given by
-$$\tilde{\nu}_{f(p)} = \frac{1}{\mu(p)} df_p(\nu)$$
-Taking $\overline{\nabla}$ of both sides yields
-$$\overline{\nabla}_{df(\cdot)} \tilde{\nu} = - \frac{ \overline{\nabla}_{\cdot} \mu}{\mu^2} df(\nu) + \frac{1}{\mu} \overline{\nabla}_{\cdot} \big( df(\nu) \big) \\
-=- \frac{ \overline{\nabla} \mu}{\mu} \tilde{\nu} + \frac{1}{\mu} \big(\overline{\nabla} df \big) (\nu) + \frac{1}{\mu} df \big( \overline{\nabla} \nu \big)\\
-= - \frac{ \overline{\nabla} \mu}{\mu} \tilde{\nu} + \frac{1}{\mu} \big(\overline{\nabla}_\nu df \big) + \frac{1}{\mu} df \big( \overline{\nabla} \nu \big)$$
-where the last equality follows from the fact that $\overline{\nabla} df = Hess(f)$ is symmetric.
-Note that by taking $\overline{\nabla}$ of $\langle df(\nu), df(\nu) \rangle = \mu^2$, one can see that $\frac{ \overline{\nabla} \mu}{\mu} \tilde{\nu}$ is the normal component of $\frac{1}{\mu} \big(\overline{\nabla}_\nu df \big)(\cdot) = \frac{1}{\mu} \big(\overline{\nabla}_{\cdot} df \big)(\nu)$.
-Therefore, the shape operator $\tilde{S} = \overline{\nabla} \tilde{\nu} : Tf(M) \to Tf(M)$ is the self-adjoint endomorphism given by
-$$\tilde{S} \circ df = \frac{1}{\mu} \Big( proj_{Tf(M)} \circ \overline{\nabla}_\nu df + df \circ S \Big)$$
-$$\text{i.e.}\quad \tilde{S} \big(df(X)\big)= \frac{1}{\mu} \Big( \big( - proj_{Tf(M)} \circ \overline{\nabla}_\nu df \big)(X) + df (S (X)) \Big) \qquad \forall X \in TM$$
-Without extra assumptions on $Hess(f)$, I don't see how to relate the eigenvalues of $\tilde{S}$ to those of $S$.
-Perhaps an informative example might be to consider the Möbius transformation
-$$z \mapsto \frac{z-i}{z+i}$$
-in a neighborhood of the real line in $\mathbb{C}$.
-This is a conformal transformation that takes the real line, which has principal curvature $0$ in $\mathbb{C}$, to a circle, which has nonzero principal curvature.<|endoftext|>
-TITLE: Epicycles as precursors of Fourier series
-QUESTION [7 upvotes]: How convincing an argument can be formulated to claim that the Ptolemaic epicycles were actually an early precursor of Fourier series? Ptolemy lived ~200AD, and so well pre-dates Fourier ~1800.
-
-REPLY [2 votes]: You might want to take a look at the paper Epicycles are Almost as Good as Trigonometric Series by Acosta, Smith, Kosheleva & Kreinovich.
-They report that epicycles were originally proposed by Apollonius of Perga, late 3rd to early 2nd century BCE, developed by several others, and finalised by Ptolemy in the 2nd century AD.
-They remark that traditional textbooks describe it as 'bad science' that was overthrown by the 'genius' of Copernicus.
-But from a mathematical standpoint a system of epicycles makes perfect sense as a trigonometric series.
-They don't comment on any direct connection between epicycles and Fourier series. However, it is known that Fourier was interested in physics - and so likely, astronomy.<|endoftext|>
-TITLE: Find all $z$ such that $e^z=6i$
-QUESTION [6 upvotes]: Am I on the right track to solving this?
-$$e^z=6i$$
-Let $w=e^z$
-Thus,
-$$w=6i$$
-$$e^w=e^{6i}$$
-$$e^w=\cos(6)+i\sin(6)$$
-$$\ln(e^w)=\ln(\cos(6)+i\sin(6))$$
-$$w=\ln(\cos(6)+i\sin(6))$$
-$$e^z=\ln(\cos(6)+i\sin(6))$$
-$$\ln(e^z)=\ln(\ln(\cos(6)+i\sin(6)))$$
-$$z=\ln(\ln(\cos(6)+i\sin(6)))$$
-I had another method that started by taking the natural log of both sides right away, but that leads to $\arctan(6/0)$, which is undefined...
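-One can at least check the principal value numerically (a quick sketch using Python's cmath; this is only a sanity check, not a derivation):
-
- import cmath
- z = cmath.log(6j)    # principal complex logarithm of 6i
- print(z)             # (1.791759...+1.570796...j), i.e. ln(6) + i*pi/2
- print(cmath.exp(z))  # recovers 6i up to rounding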
REPLY [4 votes]: Hopefully, from all of these solutions, you know how to solve this problem. Now, let's try doing it your way. You've done everything right so far:
-$$z=\ln(\ln(\cos(6)+i\sin(6)))$$
-By Euler's Identity, we have $\cos(6)+i\sin(6)=e^{6i}$, so clearly, taking the $\ln$ of this is just $6i$:
-$$z=\ln(6i)$$
-Now, if we go back to our original equation:
-$$e^z=6i$$
-The equation we have at the end of all of this is just taking the $\ln$ of both sides of the original equation. Basically, everything you did is valid, but you return to the original equation when we're all done with simplifying everything, which is why you were off-track.
-
-REPLY [2 votes]: Suppose $z=x+iy$ where $x$ and $y$ are real. Then
-$$
-6i = 6(0 + i) = e^z = e^{x+iy} = e^x e^{iy} = e^x(\cos y + i\sin y).
-$$
-So $e^x = 6$ and $0+1i=\cos y + i\sin y$. Thus $\cos y=0$ and $\sin y=1$. So $y = \pi/2 + 2\pi n$ for some integer $n$.
-
-REPLY [2 votes]: HINT:
-$$6i=e^{\log(6)+i\pi/2+i2n\pi}=e^z$$<|endoftext|>
-TITLE: Prove $\binom{3n}{n,n,n}=\frac{(3n)!}{n!n!n!}$ is always divisible by $6$ when $n$ is an integer.
-QUESTION [11 upvotes]: Prove $$\binom{3n}{n,n,n}=\frac{(3n)!}{n!n!n!}$$ is always divisible by $6$ when $n$ is an integer.
-
-I have done a similar proof that $\binom{2n}{n}$ is divisible by $2$ by showing that $$\binom{2n}{n}=\binom{2n-1}{n-1}+\binom{2n-1}{n}=2\binom{2n-1}{n-1}$$ but I am at a loss for how to translate this to divisible by $6$. Another way to do this proof would be to show that when you choose an $n$-element subset from $2n$ elements you can always match it with another subset (namely the $n$ elements that were not chosen). Again, no idea how to translate this to $6!$.
-
-REPLY [2 votes]: Notice that $$\binom{3n}{n,n,n}=\frac{(3n)!}{n!n!n!} = \binom{3n}{n}\binom{2n}{n}.$$ We have shown that $\binom{2n}{n}$ is divisible by 2. Now, all we must do is show that $\binom{3n}{n}$ is divisible by 3. You know that $\binom{3n}{n}$ is the number of n-element subsets of a 3n-element set (or, the number of ways to choose n objects among 3n distinct objects), which is always an integer. Note that $$\binom{3n}{n} = \frac{(3n)!}{(n!)(2n)!} = \frac{(3)(n)(3n - 1)!}{(n)(n-1)!(2n)!} = \frac{(3)(3n - 1)!}{(n-1)!(2n)!} = (3)\frac{(3n - 1)!}{(n-1)!(2n)!} = (3)\binom{3n-1}{n-1},$$ which is an integer and is divisible by 3.
-By Peter's suggestion, you can generalize to say that for all integers k, $$\binom{kn}{n_{1},n_{2}, ... , n_{k}} = \binom{kn}{n}\binom{(k-1)n}{n}...\binom{(k-k+1)n}{n}$$ and $$\binom{kn}{n} = \frac{(kn)!}{n!(kn-n)!} = \frac{(kn)(kn-1)!}{(n)(n-1)!(kn-n)!} = k\binom{kn-1}{n-1}, $$ and use induction to prove that $$\binom{kn}{n_{1},n_{2}, ... , n_{k}}$$ is divisible by $k!.$<|endoftext|>
-TITLE: Concentration inequality for sum of squares of i.i.d. sub-exponential random variables?
-QUESTION [14 upvotes]: Suppose $X_1, X_2, \ldots, X_n$ are independent and each has the same distribution as a sub-exponential random variable $X$ (for example, $X$ is the square of a standard normal Gaussian variable). Can I obtain a concentration inequality for the square of sub-exponential $X_i$, say,
-$$\mathbb{P}\left( \frac{1}{n} \left( X_1^2+\cdots+X_n^2 \right) \ge \mathbb{E}\left[X^2\right] + t \right) \le C \exp\left( - n \cdot \min\left( C_1 t^2, C_2 t, C_3 \sqrt{t} \right) \right),$$
-where $C, C_1, C_2, C_3$ are constants?
-This problem arises in my research.
-
-Remark:
-Actually, for i.i.d.
sub-Gaussian random variables $Y_i\ (i=1,\ldots,n)$, I knew that
-$$\mathbb{P}\left( \frac{1}{n} \left(Y_1+\cdots+Y_n\right) \ge \mathbb{E}\left[Y\right] + t \right) \le \exp\left( - n\cdot C_1 t^2 \right).$$
-Besides, since $Y_i^2\ (i=1,2,\ldots,n)$ are sub-exponential, I also knew that
-$$\mathbb{P}\left( \frac{1}{n} \left(Y_1^2+\cdots+Y_n^2\right)\ge \mathbb{E}\left[ Y^2 \right] + t \right) \le \exp\left( - n\cdot \min(C_1 t^2, C_2 t) \right).$$
-These two inequalities can be proved by a Chernoff bound, since the moment generating functions of $Y$ (sub-Gaussian) and $Y^2$ (sub-exponential) both exist.
-However, I want to know whether there is an inequality like
-$$\mathbb{P}\left( \frac{1}{n} \left( Y_1^4+\cdots+Y_n^4 \right) \ge \mathbb{E}\left[Y^4\right] + t \right) \le C \exp\left( - n \cdot \min\left( C_1 t^2, C_2 t, C_3 \sqrt{t} \right) \right),$$
-even though the moment generating function of $Y^4$ (square of a sub-exponential) does not exist.
-
-REPLY [8 votes]: No. For non-negative i.i.d. $Y_i$, $$P(Y_1\ge (\mu+t)n)\le P\Bigg(\sum_{i=1}^n Y_i\ge (\mu+t)n\Bigg)\le\exp(-nf(t))$$ implies that $Y_1$ is sub-exponential, and a square of a sub-exponential is not guaranteed to be sub-exponential.
-However you can obtain $$P(X_1^2+\dots+X_n^2>nt)\sim nP(X_1^2>nt)=n\exp(-\lambda\sqrt {nt})$$ for $X_i\sim \exp(\lambda)$, which you can extend to all subexponential $X_i$ that have exponential tails.<|endoftext|>
-TITLE: A good book for beginning Group theory
-QUESTION [31 upvotes]: I am new to the field of Abstract Algebra and so far it looks quite tough to me. So far I have encountered the following books in group theory - Contemporary abstract algebra by Joseph Gallian and Algebra by Michael Artin. But can someone suggest a book which has theorems and corollaries explained using examples and not just mere proofs?
-
-REPLY [6 votes]: Less (and more) than what you're looking for, but very interesting: Abstract Algebra done Concretely.<|endoftext|>
-TITLE: When does Newton-Raphson Converge/Diverge?
-QUESTION [7 upvotes]: Is there an analytical way to find an interval on which Newton-Raphson converges (or diverges) for every starting point?
-I am aware that Newton-Raphson is a special case of fixed point iteration, where:
-$$ g(x) = x - \frac{f(x)}{f'(x)} $$
-Also I've read that if $|f(x)\cdot f''(x)|/|f'(x)^2| \lt 1$, then convergence is assured. I am just not sure how to use this fact. Could anyone give me some examples? Thanks.
-
-REPLY [3 votes]: A theoretically nice but practically nearly useless answer is provided by the Newton-Kantorovich theorem: If $L=M_2$ is an upper bound for the magnitude of the second derivative over some interval $I$, and with $x_0\in I$ and the first step $s_0=-\frac{f(x_0)}{f'(x_0)}$ the "ball" $B(x_0+s_0,|s_0|)=(x_0+s_0-|s_0|,x_0+s_0+|s_0|)$ is contained in $I$ and
-$$
-L·|f'(x_0)^{-1}|^2·|f(x_0)|\le\frac12
-$$
-then there is a unique root inside that ball and Newton's method converges towards it.<|endoftext|>
-TITLE: Classifying Covering Spaces using First Cohomology
-QUESTION [5 upvotes]: I am familiar with the classification of covering spaces of a space $X$ in terms of subgroups of $\pi_1(X)$ (up to conjugation). However, if $X$ is a manifold, I know that $H^1(X; G)$ classifies $G$-bundles over $X$ (using Cech cohomology here). I think finite regular covering spaces are $\mathbb{Z}/k \mathbb{Z}$-bundles; regular means that the deck transformations act transitively on the fiber (and regular covers correspond to normal subgroups of $\pi_1(X)$).
-Does this mean that $H^1(X; \mathbb{Z}/k\mathbb{Z})$ is in bijection with $k$-sheeted regular covering spaces over $X$? I could not find such a statement anywhere and so am a bit suspicious.
-Also, if this is correct, what does $H^1(X; \mathbb{Z})$ classify? I'm not sure what a $\mathbb{Z}$-bundle is - what has automorphism group equal to $\mathbb{Z}$?
-Also, $H^1(X; \mathbb{Z}) = [X, S^1]$ so if $H^1(X; \mathbb{Z})$ classifies some kind of bundles, there should be a universal bundle over $S^1$ which pulls back to these bundles. What is this bundle?
-
-REPLY [6 votes]: It's somewhat delicate here to get all the details right. First, $G$-bundles for a finite group $G$ are not required to be connected, so the relevant version of the classification of covering spaces is the disconnected version, which goes like this: the category of covering spaces of a nice connected space $X$ with basepoint $x$ is equivalent to the category of $\pi_1(X, x)$-sets.
-More explicitly, $n$-sheeted covers (possibly disconnected) are equivalent to actions of $\pi_1(X, x)$ on $n$-element sets, or even more explicitly to conjugacy classes of homomorphisms $\pi_1(X, x) \to S_n$. Said another way, $n$-sheeted covers, possibly disconnected, are classified by the nonabelian cohomology set
-$$H^1(X, S_n).$$
-Among these, the connected covers correspond to the transitive actions, which are classified by conjugacy classes of subgroups of $\pi_1(X, x)$ of index $n$. Among these, the regular covers correspond to normal subgroups.
-Now, for $G$ a finite group, a $G$-bundle is more data than a $|G|$-sheeted cover: the fibers are equipped with a free and transitive right action of $G$ and everything has to be compatible with this. Said another way, $G$-bundles are equivalent to actions of $\pi_1(X, x)$ on $G$ regarded as a right $G$-set, or more explicitly to conjugacy classes of homomorphisms $\pi_1(X, x) \to G$ (thinking of $G$ as a subgroup of $S_{|G|}$ to make the connection back to covers).
-Given a finite regular $n$-sheeted cover $Y \to X$ with corresponding normal subgroup $H = \pi_1(Y, y)$ of $\pi_1(X, x)$, we can think of this cover as a $G = \pi_1(X, x)/H$-bundle, but not all $G$-bundles arise in this way (the monodromy map $\pi_1(X, x) \to G$ is not required to be surjective in general), and we only know that $G$ is some finite group of order $n$. Moreover, the data of a $G$-bundle includes the data of an isomorphism between $G$ and this quotient; it's not enough just to know that it exists.
-So we can find a finite regular $n$-cover which is not a $\mathbb{Z}/n\mathbb{Z}$-bundle, even up to isomorphism of covers, by finding a group $\pi_1(X, x)$ with a normal subgroup $H$ of index $n$ such that the quotient is not $\mathbb{Z}/n\mathbb{Z}$. A simple example is $X = T^2, \pi_1(X, x) \cong \mathbb{Z}^2$; take $H = 2 \mathbb{Z}^2$, so that the quotient is $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$.
-And we can find a $\mathbb{Z}/n\mathbb{Z}$-bundle which is not a finite regular $n$-cover in the usual sense, again even up to isomorphism of covers, by finding a disconnected such bundle; for example, $X \times \mathbb{Z}/n\mathbb{Z}$ for $n \ge 2$ and any $X$.<|endoftext|>
-TITLE: How can I find the limit without using a closed form expression
-QUESTION [12 upvotes]: I am trying to evaluate this limit without using the closed form expression for the sum of natural numbers raised to the $k$th power.
$$\lim_{n \to \infty} \dfrac{ 1^n +2^n+\cdots +n^n}{n^n}$$
-So far I have tried l'Hôpital, which complicates it rather than simplifying, and Cesàro–Stolz doesn't seem to work either.
-
-REPLY [10 votes]: Bernoulli's Inequality says that for $n\ge k$,
-$$
-\left(1-\frac kn\right)^n
-$$
-is an increasing sequence. Therefore, by Monotone Convergence
-$$
-\begin{align}
-\sum_{k=0}^n\left(\frac kn\right)^n
-&=\sum_{k=0}^n\left(\frac{n-k}n\right)^n\\
-&=\sum_{k=0}^n\left(1-\frac kn\right)^n\\
-&\to\sum_{k=0}^\infty e^{-k}\\
-&=\frac e{e-1}
-\end{align}
-$$<|endoftext|>
-TITLE: Is there any palindromic power of $2$?
-QUESTION [26 upvotes]: My question is in the title:
-
-Is it possible to find $n≥4$ such that $2^n$ is a palindromic number (in base $10$)?
-
-A palindromic number is a number which is the same, independently from which side we read it (forwards and backwards), for example $121, 484, 55755$.
-
-My guess is "no". I know that a palindromic number $x$ with even length (i.e. the number of digits $\lfloor \log_{10} x \rfloor + 1$ is even) is a multiple of $11$: see here or here or here. In particular, a power of $2$ with an even length can't be a palindromic number.
-However, I don't know what to do with the case of an odd length. For instance, if $x=abcdcba$, then $abccba$ is a multiple of $11$, but I don't see how this can help.
-Here is a related question. On MathOverflow, related questions are: (1) and (2). Maybe also this thread (since $(1)$ is focused on binary expansion).
-I tried with Mathematica and there is no palindromic power of $2$ with exponent $n<10000$:
- palindromeQ[n_] := IntegerDigits[n] === Reverse@IntegerDigits[n];
- For[i = 1, i < 10000, i++, If[palindromeQ[2^i], Print[i]]]
-
-Finally, I think that the answer will be the same if we replace $2$ by any integer $n>1$ which is not a multiple of $11$. I don't know how I could prove (even for the case of even length…) that $11^n$ is not a palindromic number for $n≥5$.
-Any hint will be helpful. Thank you!
-
-REPLY [8 votes]: According to the Wikipedia page on palindromic numbers:
-
-G. J. Simmons conjectured [that] there are no palindromes of form $n^k$ for $k > 4$ (and $n > 1$).
-
-The cited reference is:
-
-Murray S. Klamkin (1990), Problems in applied mathematics: selections from SIAM review, p. 577<|endoftext|>
-TITLE: Riemann zeta-function functional equation proof
-QUESTION [9 upvotes]: I'm reading through Titchmarsh's "The Theory of the Riemann Zeta-Function" and there's a part in the functional equation proof number 3 that I haven't figured out.
-He defines a function
-$$\psi(x)=\sum_{n=1}^\infty e^{-n^2\pi x}$$
-and next, for $x>0$ it is known that
-$$
-\sum_{n=-\infty}^\infty e^{-n^2\pi x}=\frac1{\sqrt{x}}\sum_{n=-\infty}^\infty e^{-\frac{n^2\pi}x},
-$$
-or
-$$2\psi(x)+1=\frac1{\sqrt{x}}\left( 2\psi\left(\frac1{x}\right)+1\right).$$
-Where does the second equation come from exactly?
-
-REPLY [6 votes]: I spelled this argument out in some detail, so that I could understand it. The argument is based on https://www.youtube.com/watch?v=-GQFljOVZ7I, but elaborated to my tastes.
-Define $\theta(x) = 2 \psi(x) + 1 $. Then,
-$$\theta(x)=\left(\sum_{n=-\infty}^{-1} e^{-n^2\pi x}\right) + 1 + \left(\sum_{n=1}^\infty e^{-n^2\pi x}\right) =\sum_{n=-\infty}^\infty e^{-n^2\pi x}$$
-The Poisson summation formula
is,
-$$ \sum_{n=-\infty}^\infty{f(n)} = \sum_{k=-\infty}^\infty \int_{-\infty}^\infty f(y) e^{-2\pi iky} \, dy$$
-Substituting $ f(n) = e^{-n^2\pi x}$ and $ f(y) = e^{-y^2\pi x}$ gives,
-$$ \theta(x) = \sum_{n=-\infty}^\infty{e^{-n^2\pi x}} = \sum_{k=-\infty}^\infty \int_{-\infty}^\infty e^{-y^2\pi x} e^{-2\pi iky} dy = \sum_{k=-\infty}^\infty \int_{-\infty}^\infty e^{-\pi x(y^2 + 2iy \frac{k}{x})} dy$$
-Complete the square by adding and subtracting a term $ i^2 \frac{k^2}{x^2} $:
-$$ \theta(x) = \sum_{k=-\infty}^\infty \int_{-\infty}^\infty e^{-\pi x(y^2 + 2iy \frac{k}{x} + i^2\frac{k^2}{x^2} - i^2\frac{k^2}{x^2})} dy$$
-Substituting $ y^2 + 2iy \frac{k}{x} + i^2\frac{k^2}{x^2} = (y + i \frac{k}{x})^2 $ and $ i^2\frac{k^2}{x^2} = - \frac{k^2}{x^2}$ gives,
-$$ \theta(x) = \sum_{k=-\infty}^\infty \int_{-\infty}^\infty e^{-\pi x((y + i \frac{k}{x})^2 + \frac{k^2}{x^2})} dy = \sum_{k=-\infty}^\infty e^{-\pi\frac{k^2}{x}} \int_{-\infty}^\infty e^{-\pi x(y + i \frac{k}{x})^2} dy $$
-as $ e^{-\pi\frac{k^2}{x}} $ is not a function of y, and so can be moved outside the integral.
-An argument can then be made using path integrals that follow a rectangle around the complex plane that,
-$$ \int_{-\infty}^\infty e^{-\pi x(y + i \frac{k}{x})^2} dy = \int_{-\infty}^\infty e^{-\pi x z^2} dz = \frac1{\sqrt{\pi x}} \int_{-\infty}^\infty e^{-z^2} dz = \frac{\sqrt{\pi}}{\sqrt{\pi x}} = \frac1{\sqrt{x}}$$
-The argument uses the fact that a closed integral around no poles is zero, and also relies on terms at infinity going to zero. This is standard complex analysis, but it is not easy to see without a diagram. It really should be proved separately.
-Also using,
-$$\theta\left(\frac1{x}\right)=\sum_{k=-\infty}^\infty e^{-\frac{k^2\pi}{x}}$$
-$$ \theta(x) = \sum_{k=-\infty}^\infty e^{-\pi\frac{k^2}{x}} \int_{-\infty}^\infty e^{-\pi x(y + i \frac{k}{x})^2} dy = \frac{\theta(\frac1{x})}{\sqrt{x}} $$
-Substituting back $ \theta(x) = 2 \psi(x) + 1 $ gives the equation in terms of $\psi$.
-$$ 2 \psi(x) + 1 =\frac{2 \psi(\frac1{x}) + 1}{\sqrt{x}} $$
-
-The proof uses a result of the form,
-$$ \int_{-\infty}^\infty e^{-(y + ib)^2} dy = \int_{-\infty}^\infty e^{-z^2} dz $$
-Consider $ \int e^{-z^2} dz $ around a rectangular path P given by $ -\infty \to \infty \to \infty+ib \to -\infty+ib \to -\infty $. From Cauchy's integral theorem this integral must be zero because it is a closed loop, and $ e^{-z^2} $ has no poles.
-$$ \int_{P} e^{-z^2} dz = 0 $$
-so,
-$$ 0 = \int_{-\infty}^{\infty} e^{-y^2} dy + \lim_{Y \to \infty} \int_{0}^{b} e^{-(Y + ix)^2} dx + \int_{\infty}^{-\infty} e^{-(y+ib)^2} dy + \lim_{Y \to -\infty} \int_{b}^{0} e^{-(Y + ix)^2} dx $$
-$$ 0 = \int_{-\infty}^{\infty} e^{-y^2} dy + \lim_{Y \to \infty} \int_{0}^{b} e^{-(Y + ix)^2} dx - \int_{-\infty}^{\infty} e^{-(y+ib)^2} dy - \lim_{Y \to \infty} \int_{0}^{b} e^{-(Y - ix)^2} dx $$
-And I claim that,
-$$ \lim_{Y \to \infty} \int_{0}^{b} e^{-(Y + ix)^2} dx = \lim_{Y \to \infty} \int_{0}^{b} e^{-(Y - ix)^2} dx = 0 $$
-which gives,
-$$ 0 = \int_{-\infty}^{\infty} e^{-y^2} dy - \int_{-\infty}^{\infty} e^{-(y+ib)^2} dy $$
-To prove the claim, firstly,
-$$ \lim_{Y \to \infty} \int_{0}^{b} e^{-(Y \pm ix)^2} dx = \lim_{Y \to \infty} \int_{0}^{b} e^{-(Y^2 \pm 2iYx -x^2)} dx = \lim_{Y \to \infty} \int_{0}^{b} e^{-Y^2} e^{\mp 2iYx} e^{x^2} dx $$
-Consider the complex magnitude,
-$$ \left|\lim_{Y \to \infty} \int_{0}^{b} e^{-Y^2} e^{\mp 2iYx} e^{x^2} dx\right| \le \lim_{Y \to \infty} \int_{0}^{b} |e^{-Y^2}| |e^{\mp 2iYx}| |e^{x^2}| dx $$
-But, $ |e^{\mp 2iYx}| = 1 $ and $ \lim_{Y \to \infty} {|e^{-Y^2}|} = 0 $ and $\int_{0}^{b} |e^{x^2}| dx$ is finite, giving
-$$ \lim_{Y \to \infty} \int_{0}^{b} |e^{-Y^2}| |e^{\mp 2iYx}| |e^{x^2}| dx = \lim_{Y \to \infty} \int_{0}^{b} |e^{-Y^2}| |e^{x^2}| dx = \left(\lim_{Y \to \infty} {|e^{-Y^2}|}\right) \int_{0}^{b} |e^{x^2}| dx = 0$$
-So,
-$$ \left|\lim_{Y \to \infty} \int_{0}^{b} e^{-(Y \pm ix)^2} dx\right| \le 0 \implies \lim_{Y \to \infty} \int_{0}^{b} e^{-(Y \pm ix)^2} dx = 0 $$<|endoftext|>
-TITLE: The set of integers is not open or is open
-QUESTION [9 upvotes]: Baby Rudin gives the example of the set of all integers being not open if it is a subset of $\mathbb{R}^2$.
-If we consider the set of integers in $\mathbb{R}$, is this set also not open? I can find a neighbourhood which will contain any point, $p$, however is it a requirement that a neighbourhood contains more than one point?
-I'm trying to understand this fully and have searched through the various posts that have a slight relation and can not find out specifically how these take interior and isolated points into account and how these relate to openness.
-
-REPLY [14 votes]: A set $U\subset \mathbb R$ is open if and only if for every $x\in U$, there exists some $\epsilon > 0$ such that $(x-\epsilon, x+\epsilon)$ is a subset of $U$.
-For $U=\mathbb Z$, this is clearly not the case:
-
-Take $x=0$
-Take any $\epsilon > 0$.
-Then, $\min\{x+\frac\epsilon2, x+\frac12\}$ is an element of $(x-\epsilon, x+\epsilon)$, but it is not an element of $\mathbb Z$.
-Therefore, $(x-\epsilon, x+\epsilon)$ is not a subset of $\mathbb Z$ for any value of $\epsilon$
-Therefore, $\mathbb Z$ is not open.<|endoftext|>
-TITLE: Determine the number of subgroups of $\Bbb Z_p \times\Bbb Z_p$, where $p$ is prime.
-QUESTION [5 upvotes]: There are some answers online and we got one in our lecture. Unfortunately I have spent several hours trying to make sense of it and getting nowhere. I think it is mainly because I am very poor at groups of the type integers modulo $n$.
-I should note that I understand that the problem boils down to finding all the cyclic subgroups of $\Bbb Z_p \times \Bbb Z_p$ of order $p$.
-Help would be much appreciated, thank you!
-
-REPLY [10 votes]: A subgroup of $\mathbb{Z}_{p}\times\mathbb{Z}_{p}$ must have order dividing $p^2$ by Lagrange's theorem. Since $p$ is prime, the possible orders of subgroups of $\mathbb{Z}_{p}\times\mathbb{Z}_{p}$ are $1,p,p^2$.
For $1,p^2$ there are only two subgroups of $\mathbb{Z}_{p}\times\mathbb{Z}_{p}$ with those orders, namely $\{(e,e)\}$ and $\mathbb{Z}_{p}\times\mathbb{Z}_{p}$ respectively, where $e\in\mathbb{Z}_{p}$ is the identity element.
-So now suppose that $A\leq \mathbb{Z}_{p}\times\mathbb{Z}_{p}$ with $|A| = p$. Then since $p$ is prime, $A$ must be cyclic, so there exists some element $(a,b)\in\mathbb{Z}_{p}\times\mathbb{Z}_{p}$ such that $A = \langle(a,b)\rangle $, so $(a,b)$ must have order $p$ in $\mathbb{Z}_{p}\times\mathbb{Z}_{p}$.
-The converse is also true, i.e. if $(a,b)$ has order $p$ then the subgroup it generates has order $p$. So the set of subgroups of $\mathbb{Z}_{p}\times\mathbb{Z}_{p}$ of order $p$ is $\{\langle(a,b)\rangle \mid (a,b) \text{ has order } p\}$.
-So now note that the elements of order $p$ of $\mathbb{Z}_{p}\times\mathbb{Z}_{p}$ are exactly the elements of the form $(a,b)$ where either $a\neq e$, or $b\neq e$ (or both $\neq e$). That is, they are exactly the elements of $\mathbb{Z}_{p}\times\mathbb{Z}_{p}\setminus \{(e,e)\}$, of which there are $p^2-1$. However, each element of order $p$ accounts for $p-2$ other elements of order $p$. You can partition the set of elements of order $p$ into equivalence classes under the equivalence relation $(a,b)\sim(c,d)$ iff there exists $t\in\mathbb{Z}$ such that $(a,b)^{t} = (c,d)$. Each equivalence class has $p-1$ elements (note the identity element of $\mathbb{Z}_{p}\times\mathbb{Z}_{p}$ is not in the set of elements of order $p$), and so the number of equivalence classes is $\frac{p^2-1}{p-1} = p+1$. So there are $p+1$ subgroups of order $p$.
-Now adding to account for the subgroups $\{(e,e)\}$ and $\mathbb{Z}_{p}\times\mathbb{Z}_{p}$, we have that $\mathbb{Z}_{p}\times\mathbb{Z}_{p}$ has $p+3$ subgroups.<|endoftext|>
-TITLE: Differential equations with dense solutions
-QUESTION [15 upvotes]: Consider the differential equation $P(y',y'',y''',y'''')=0$ on $\mathbb R$, where $P(x,y,z,w)$ is the homogeneous polynomial of degree $7$ given by
-$$
-3x^4yw^2-4x^4z^2w+6x^3y^2zw+24x^2y^4w-12x^3yz^3-29x^2y^3z^2+12y^7.
-$$
-This example was given by Rubel in 1981 (Bulletin of the AMS), and he proved that for any continuous functions $f,g\colon\mathbb R\to\mathbb R$ with $g>0$ there is a solution $y$ of the differential equation satisfying
-$$
-|y(t)-f(t)|\le g(t),\quad \text{for all } t\in\mathbb R.
-$$
-Quite impressive. When one reads the proof one understands that it all comes from the particular structure of the equation, but really impressive!
-My question is the following:
-
-Is there any polynomial of smaller degree leading to the same property?
-
-A perhaps more ambitious question would be: what is the smallest degree of a polynomial having this property? In fact one could also vary the number of variables of the polynomial.
-
-REPLY [5 votes]: Perhaps it is paywalled, but this paper of C. Elsner claims to obtain a universal ODE out of a sixth-order polynomial equation. The title is "A Universal Differential Equation of Degree 6". The solutions of this equation are, like in Rubel's, $C^\infty$.
-Wolfram MathWorld describes two additional families (due to Duffin and Briggs) of universal (polynomial) differential equations, the solutions of which appear to be only $C^n$ for some $n$.
These families have degree 3.<|endoftext|>
-TITLE: difficult integral $\int_0^{\pi/2}\frac{x^2({1+\tan x})^2}{\sqrt{\tan x}({1-\tan x})}\sin{4x}dx$
-QUESTION [10 upvotes]: This is a complicated integral; the numerical value appears to me correct. How can one prove this result?$$I=\int_0^{\pi/2}\frac{x^2({1+\tan x})^2}{\sqrt{\tan x}({1-\tan x})}\sin({4x})dx=\frac{\pi\sqrt{2}}{192}(35{\pi^2}-150+132\ln 2-84\ln^22)$$
-
-REPLY [11 votes]: $\displaystyle J=\int_0^{\tfrac{\pi}{2}}\dfrac{x^2({1+\tan x})^2}{\sqrt{\tan x}({1-\tan x})}\sin{(4x)}dx$
-Performing the change of variable $y=\tan x$, one obtains:
-$\displaystyle J=4\int_0^{+\infty} \sqrt{x}(\arctan x)^2\left(\dfrac{1+x}{1+x^2}\right)^3 dx$
-Perform integration by parts:
-$\displaystyle J=\dfrac{1}{2}\left[\left(\dfrac{\sqrt{x}(x-1)(7x^2+4x+7)}{(1+x^2)^2}+\dfrac{7}{\sqrt{2}}\left(\arctan(1+\sqrt{2x})-\arctan(1-\sqrt{2x})\right)\right)(\arctan x)^2\right]_0^{+\infty}-\int_0^{+\infty}\left(\dfrac{\sqrt{x}(x-1)(7x^2+4x+7)}{(1+x^2)^2}+\dfrac{7}{\sqrt{2}}\left(\arctan(1+\sqrt{2x})-\arctan(1-\sqrt{2x})\right)\right)\dfrac{\arctan x}{1+x^2}dx$
-Therefore,
-$\displaystyle J=\dfrac{7\pi^3}{8\sqrt{2}}-\int_0^{+\infty}\left(\dfrac{\sqrt{x}(x-1)(7x^2+4x+7)}{(1+x^2)^2}+\dfrac{7}{\sqrt{2}}\left(\arctan(1+\sqrt{2x})-\arctan(1-\sqrt{2x})\right)\right)\\\dfrac{\arctan x}{1+x^2}dx$
-Let $\displaystyle A=\int_0^{+\infty}\dfrac{\sqrt{x}(x-1)(7x^2+4x+7)\arctan x}{(1+x^2)^3}dx$
-Let $\displaystyle B=\int_0^{+\infty}\dfrac{\left(\arctan(1+\sqrt{2x})-\arctan(1-\sqrt{2x})\right)\arctan x}{1+x^2}dx$
-Thus, $J=\dfrac{7\pi^3}{8\sqrt{2}}-A-\dfrac{7}{\sqrt{2}}B$
-Perform integration by parts:
-$\displaystyle A=-\dfrac{1}{4}\left[\left(\dfrac{11}{2\sqrt{2}}\Big(\log(x-\sqrt{2x}+1)-\log(x+\sqrt{2x}+1)\Big)+\dfrac{\sqrt{x}(x+1)(11x^2+4x+11)}{(1+x^2)^2}\right)\\\arctan x\right]_0^{+\infty}+\dfrac{1}{4}\times\int_0^{+\infty} \left(\dfrac{11}{2\sqrt{2}}\Big(\log(x-\sqrt{2x}+1)-\log(x+\sqrt{2x}+1)\Big)+\dfrac{\sqrt{x}(x+1)(11x^2+4x+11)}{(1+x^2)^2}\right)\\\dfrac{1}{1+x^2}dx$
-Therefore,
-$\displaystyle A=\dfrac{1}{4}\times\int_0^{+\infty}\left(\dfrac{11}{2\sqrt{2}}\Big(\log(x-\sqrt{2x}+1)-\log(x+\sqrt{2x}+1)\Big)+\dfrac{\sqrt{x}(x+1)(11x^2+4x+11)}{(1+x^2)^2}\right)\\\dfrac{1}{1+x^2}dx$
-Let $\displaystyle C=\int_0^{+\infty}\dfrac{\left(\log(x-\sqrt{2x}+1)-\log(x+\sqrt{2x}+1)\right)}{1+x^2}dx$
-Let $\displaystyle D=\int_0^{+\infty} \dfrac{\sqrt{x}(x+1)(11x^2+4x+11)}{(1+x^2)^3}dx$
-Thus, $\displaystyle J=\dfrac{7\pi^3}{8\sqrt{2}}-\dfrac{11}{8\sqrt{2}}C-\dfrac{1}{4}D-\dfrac{7}{\sqrt{2}}B$
-$\displaystyle D=\dfrac{1}{4}\times\left[\dfrac{\sqrt{x}(x-1)(25x^2+4x+25)}{(1+x^2)^2}+\dfrac{25}{\sqrt{2}}\Big(\arctan(1+\sqrt{2x})-\arctan(1-\sqrt{2x})\Big)\right]_0^{+\infty}$
-Therefore,
-$D=\dfrac{25\pi}{4\sqrt{2}}$
-Define for $a\in [0,\sqrt{2}]$,
-$\displaystyle F(a)=\int_0^{+\infty}\dfrac{\left(\log(x-a\sqrt{x}+1)-\log(x+a\sqrt{x}+1)\right)}{1+x^2}dx$
-Note that,
-$F(\sqrt{2})=C$ and $F(0)=0$
-$\displaystyle F'(a)= \int_0^{+\infty} \dfrac{-2\sqrt{x}(1+x)}{(x-a\sqrt{x}+1)(x+a\sqrt{x}+1)(x^2+1)}dx$
-$F'(a)=\left[\dfrac{-4\left( \mathrm{arctan}\left( \dfrac{2\sqrt{x}-a}{\sqrt{4-{{a}^{2}}}}\right) +\mathrm{arctan}\left( \dfrac{2\sqrt{x}+a}{\sqrt{4-{{a}^{2}}}}\right) \right) }{\sqrt{4-{{a}^{2}}}\cdot \left( {{a}^{2}}-2\right) }+\\\dfrac{2\sqrt{2} \Big( \mathrm{arctan}\left( \sqrt{2x}-1\right) +\mathrm{arctan}\left( \sqrt{2x}+1\right) \Big) }{{{a}^{2}}-2}\right]_0^{+\infty}$
-Therefore,
-$F'(a)=\dfrac{-4\pi}{(a^2-2)\sqrt{4-a^2}}+\dfrac{2\sqrt{2}\pi}{a^2-2}$
-And then,
-$\displaystyle
C=F(\sqrt{2})=\int_0^{\sqrt{2}} F'(a)da=-\pi\left[\mathrm{log}\left( \dfrac{\left( \sqrt{2}+a\right)\left( \sqrt{4-{{a}^{2}}}-a\right) }{\left( \sqrt{2}-a\right)\left( \sqrt{4-{{a}^{2}}}+a\right) }\right)\right]_0^\sqrt{2}=-\pi\ln 2$ -Define for $a\in [0,\sqrt{2}]$, -$\displaystyle G(a)=\int_0^{+\infty}\dfrac{\Big(\arctan(a\sqrt{x}+1)+\arctan(a\sqrt{x}-1)\Big)\arctan x}{1+x^2}dx$ -Note that $G(\sqrt{2})=B$ and $G(0)=0$ -$\displaystyle G'(a)=\int_0^{+\infty}\dfrac{\sqrt{x}}{1+x^2}\arctan(x)\left(\dfrac{1}{1+(a\sqrt{x}+1)^2}+\dfrac{1}{1+(a\sqrt{x}-1)^2}\right)dx$ -Perform integration by parts, -$\displaystyle G'(a)=\\\left[\left(\dfrac{2a\mathrm{log}\left( \dfrac{a^2x-2a\sqrt{x}+2}{{{a}^{2}} x+2 a\sqrt{x}+2}\right) }{{{a}^{4}}-4}+\dfrac{\mathrm{log}\left( \dfrac{x+\sqrt{2x}+1}{x-\sqrt{2x}+1}\right) }{\sqrt{2}\left( {{a}^{2}}-2\right) }+\dfrac{\sqrt{2}\left( \mathrm{arctan}\left( \sqrt{2x}-1\right) +\mathrm{arctan}\left( \sqrt{2x}+1\right) \right) }{{{a}^{2}}+2}\right)\arctan(x)\right]_0^{+\infty}\\-\displaystyle \int_0^{+\infty}\left(\dfrac{2a\mathrm{log}\left( \dfrac{a^2x-2a\sqrt{x}+2}{{{a}^{2}} x+2 a\sqrt{x}+2}\right) }{{{a}^{4}}-4}+\\\dfrac{\mathrm{log}\left( \dfrac{x+\sqrt{2x}+1}{x-\sqrt{2x}+1}\right) }{\sqrt{2}\left( {{a}^{2}}-2\right) }+\dfrac{\sqrt{2}\left( \mathrm{arctan}\left( \sqrt{2x}-1\right) +\mathrm{arctan}\left( \sqrt{2x}+1\right) \right) }{{{a}^{2}}+2}\right)\dfrac{1}{1+x^2}dx$ -Therefore, -$\displaystyle G'(a)=\dfrac{\pi^2}{\sqrt{2}(a^2+2)}-\dfrac{\pi\log 2}{\sqrt{2}(a^2-2)}+\int_{0}^{+\infty}\dfrac{2a\mathrm{log}\left( \dfrac{a^2x+2a\sqrt{x}+2}{{{a}^{2}} x-2a\sqrt{x}+2}\right) }{(a^4-4)(1+x^2)}dx-\int_{0}^{+\infty}\dfrac{\sqrt{2}\left( \mathrm{arctan}\left( \sqrt{2x}-1\right) +\mathrm{arctan}\left( \sqrt{2x}+1\right) \right) }{(a^2+2)(1+x^2)}dx$ -Let $\displaystyle E=\int_{0}^{+\infty}\dfrac{\mathrm{arctan}\left( \sqrt{2x}-1\right) +\mathrm{arctan}\left( \sqrt{2x}+1\right)}{(1+x^2)}dx$ -Define for $a\in [0,\sqrt{2}]$, -$\displaystyle H(a)=\int_{0}^{+\infty}\dfrac{\mathrm{log}\left( \dfrac{a^2x+2a\sqrt{x}+2}{{{a}^{2}} x-2a\sqrt{x}+2}\right) }{1+x^2}dx$ -Note that $H(0)=0$ -$\displaystyle H'(a)=\int_{0}^{+\infty}\dfrac{-4\sqrt{x}(a^2x-2)}{(x^2+1)(a^4x^2+4)}dx$ -$\displaystyle H'(a)=\\\left[\dfrac{\sqrt{2}\mathrm{log}\left(\dfrac{x-\sqrt{2x}+1}{x+\sqrt{2x}+1}\right) }{{{a}^{2}}+2}+\dfrac{8a\Big( \mathrm{arctan}\left(a\sqrt{x}-1\right)+\mathrm{arctan}\left(a\sqrt{x}+1\right) \Big) }{{{a}^{4}}-4}-\dfrac{2\sqrt{2}\Big( \mathrm{arctan}\left(\sqrt{2x}-1\right) +\mathrm{arctan}\left( \sqrt{2x}+1\right) \Big) }{{{a}^{2}}-2}\right]_0^{+\infty}$ -$\displaystyle H'(a)=\dfrac{8a\pi}{a^4-4}-\dfrac{2\sqrt{2}\pi}{a^2-2}$ -$\displaystyle H(a)=\pi\log\left(\dfrac{(2-a^2)(\sqrt{2}+a)}{(2+a^2)(\sqrt{2}-a)}\right)=2\pi\log(a+\sqrt{2})-\pi\log(a^2+2)$ -Define for $a\in [0,\sqrt{2}]$, -$\displaystyle K(a)=\int_{0}^{+\infty}\dfrac{\mathrm{arctan}\left( a\sqrt{x}-1\right) +\mathrm{arctan}\left( a\sqrt{x}+1\right)}{(1+x^2)}dx$ -Note that $K(\sqrt{2})=E$ and $K(0)=0$ -$\displaystyle K'(a)=\int_{0}^{+\infty} \dfrac{2\sqrt{x}\left( 2+{{a}^{2}}x\right) }{\left( {{a}^{2}}x-2a\sqrt{x}+2\right)\left( {{a}^{2}}x+2a\sqrt{x}+2\right)\left( {{x}^{2}}+1\right) }dx$ -$\displaystyle K'(a)=\\\left[\dfrac{2a\mathrm{log}\left( \dfrac{{{a}^{2}}x-2a\sqrt{x}+2}{{{a}^{2}} x+2a\sqrt{x}+2}\right) }{{{a}^{4}}-4}+\dfrac{\mathrm{log}\left( \dfrac{x+\sqrt{2x}+1}{x-\sqrt{2x}+1}\right) }{\sqrt{2}\left( {{a}^{2}}-2\right) }+\dfrac{\sqrt{2}\left( \mathrm{arctan}\left( \sqrt{2x}-1\right) +\mathrm{arctan}\left( \sqrt{2x}+1\right) 
\right) }{{{a}^{2}}+2}\right]_{0}^{+\infty}$
-$\displaystyle K'(a)=\dfrac{\sqrt{2}\pi}{a^2+2}$
-$\displaystyle E=K(\sqrt{2})=\int_0^{\sqrt{2}}\dfrac{\sqrt{2}\pi}{a^2+2}da=\pi\left[\arctan\left(\dfrac{a}{\sqrt{2}}\right)\right]_0^{\sqrt{2}}=\dfrac{\pi^2}{4}$
-Thus,
-$\displaystyle G'(a)=\dfrac{\pi^2}{\sqrt{2}(a^2+2)}-\dfrac{\pi\log 2}{\sqrt{2}(a^2-2)}+\dfrac{2a\pi}{a^4-4}\Big(2\log(a+\sqrt{2})-\log(a^2+2)\Big)-\dfrac{\pi^2}{2\sqrt{2}(a^2+2)}$
-$\displaystyle G'(a)=\dfrac{\pi^2}{2\sqrt{2}(a^2+2)}+\dfrac{\pi a \log(a^2+2)}{2(a^2+2)}-\dfrac{\pi a\log 2}{2(a^2+2)}-\dfrac{\pi a \log\left(\dfrac{a}{\sqrt{2}}+1\right)}{2\left(\left(\dfrac{a}{\sqrt{2}}\right)^2+1\right)}+\dfrac{\pi}{2(a^2-2)}\Big(2a\log(a+\sqrt{2})-\sqrt{2}\log 2-a\log(a^2+2)\Big)$
-$\displaystyle \int_0^{\sqrt{2}}\dfrac{\pi^2}{2\sqrt{2}(a^2+2)}da=\pi^2\left[\dfrac{1}{4}\arctan\left(\dfrac{a}{\sqrt{2}}\right)\right]_0^{\sqrt{2}}=\dfrac{\pi^3}{16}$
-$\displaystyle \int_0^{\sqrt{2}}\dfrac{\pi a\log(a^2+2)}{2(a^2+2)}da=\dfrac{\pi}{8}\left[\left(\log(a^2+2)\right)^2\right]_0^{\sqrt{2}}=\dfrac{3\pi(\log 2)^2}{8}$
-$\displaystyle \int_0^{\sqrt{2}}\dfrac{-\pi a\log 2}{2(a^2+2)}da=-\dfrac{\pi \log 2}{4}\Big[\log(a^2+2)\Big]_0^{\sqrt{2}}=-\dfrac{\pi (\log 2)^2}{4}$
-$\displaystyle \int_0^{\sqrt{2}} \dfrac{-\pi a \log\left(\dfrac{a}{\sqrt{2}}+1\right)}{2\left(\left(\dfrac{a}{\sqrt{2}}\right)^2+1\right)}da=-\pi\int_0^1\dfrac{a\log(a+1)}{a^2+1}da$
-Let $\displaystyle L=\int_0^1\dfrac{a\log(a+1)}{a^2+1}da$
-Define for $t\in[0,1]$,
-$\displaystyle M(t)=\int_0^1\dfrac{a\log(at+1)}{a^2+1}da$
-Note that $M(1)=L$ and $M(0)=0$.
-$\displaystyle M'(t)=\int_0^1\dfrac{a^2}{(a^2+1)(at+1)}da$
-$\displaystyle M'(t)=\left[\dfrac{\log(at+1)}{t^3+t}+\dfrac{t\log(a^2+1)}{2(t^2+1)}-\dfrac{\arctan(a)}{t^2+1}\right]_0^1=\dfrac{\log(1+t)}{t}-\dfrac{t\log(1+t)}{1+t^2}-\dfrac{\pi}{4(t^2+1)}+\dfrac{t\log 2}{2(t^2+1)}$
-$\displaystyle\int_0^1\dfrac{\log(1+t)}{t}dt=\sum_{n=0}^{+\infty}\int_0^1 \left(\dfrac{(-1)^n t^{n+1}}{(n+1)t}\right)dt=-\sum_{n=1}^{+\infty} \dfrac{(-1)^n}{n^2}=\sum_{n=0}^{+\infty} \dfrac{1}{(2n+1)^2}-\sum_{n=1}^{+\infty} \dfrac{1}{(2n)^2}=\left(\zeta(2)-\sum_{n=1}^{+\infty} \dfrac{1}{(2n)^2}\right)-\sum_{n=1}^{+\infty} \dfrac{1}{(2n)^2}=\dfrac{1}{2}\zeta(2)=\dfrac{\pi^2}{12}$
-$\displaystyle\int_0^1 \dfrac{-\pi}{4(t^2+1)}dt=-\dfrac{\pi}{4}\Big[\arctan t\Big]_0^1=-\dfrac{\pi^2}{16}$
-$\displaystyle\int_0^1\dfrac{t\log 2}{2(t^2+1)}dt=\dfrac{\log 2}{4}\Big[\log(t^2+1)\Big]_0^1=\dfrac{(\log 2)^2}{4}$
-$\displaystyle L=\int_0^1 M'(t)dt=\dfrac{\pi^2}{12}-\dfrac{\pi^2}{16}+\dfrac{(\log 2)^2}{4}-L$
-Thus,
-$L=\dfrac{\pi^2}{96}+\dfrac{(\log 2)^2}{8}$
-And therefore,
-$\displaystyle B=G\left(\sqrt{2}\right)=\int_0^{\sqrt{2}}G'(a)da=\dfrac{5\pi^3}{96}+\dfrac{\pi}{2}\int_0^{\sqrt{2}}\dfrac{2a\log(a+\sqrt{2})-\sqrt{2}\log 2-a\log(a^2+2)}{a^2-2}da$
-Let $\displaystyle P=\int_0^{\sqrt{2}}\dfrac{2a\log(a+\sqrt{2})-\sqrt{2}\log 2-a\log(a^2+2)}{a^2-2}da$
-$\displaystyle P=\int_0^{\sqrt{2}}\dfrac{a\log(a^2+2)}{2\sqrt{2}(a+\sqrt{2})}da-\int_0^{\sqrt{2}}\dfrac{a\log(a+\sqrt{2})}{\sqrt{2}(a+\sqrt{2})}da+\\\displaystyle \int_0^{\sqrt{2}}\dfrac{\log 2}{2(a+\sqrt{2})}da+\dfrac{1}{2\sqrt{2}}\int_0^{\sqrt{2}}\dfrac{2a\log(a+\sqrt{2})-\sqrt{2}\log 2-a\log(a^2+2)}{a-\sqrt{2}}da$
-$\displaystyle \int_0^{\sqrt{2}}\dfrac{-a\log(a+\sqrt{2})}{\sqrt{2}(a+\sqrt{2})}da=\\\left[\sqrt{2}\log(a+\sqrt{2})+\dfrac{\sqrt{2}}{2}\Big(\log(a+\sqrt{2})\Big)^2+\log(a+\sqrt{2})\Big (a-\sqrt{2}\log(a+\sqrt{2})\Big)-a\right]_0^{\sqrt{2}}=(\log 2)^2-\dfrac{5}{2}\log 2+1$
-$\displaystyle
\int_0^{\sqrt{2}}\dfrac{\log 2}{2(a+\sqrt{2})}da=\left[\dfrac{1}{2}\log(a+\sqrt{2})\log 2\right]_0^{\sqrt{2}}=\dfrac{1}{2}\Big(\log(2)\Big)^2$
-Perform the change of variable $x=\dfrac{a}{\sqrt{2}}$:
-$\displaystyle \int_0^{\sqrt{2}}\dfrac{a\log(a^2+2)}{2\sqrt{2}(a+\sqrt{2})}da=\\\displaystyle\dfrac{1}{2}\log 2\int_0^1\dfrac{x}{x+1}dx+\dfrac{1}{2}\int_0^1\dfrac{x\log(1+x^2)}{x+1}dx=\dfrac{1}{2}\log 2\Big[x-\log(1+x)\Big]_0^1+\dfrac{1}{2}\Big[(x-\log(1+x))\log(1+x^2)\Big]_0^1-\dfrac{1}{2}\int_0^1 \dfrac{2x(x-\log(1+x))}{1+x^2}dx=\dfrac{1}{2}\log 2\Big[x-\log(1+x)\Big]_0^1+\dfrac{1}{2}\Big[(x-\log(1+x))\log(1+x^2)\Big]_0^1-\int_0^1 \dfrac{x^2}{1+x^2}dx+L=\dfrac{1}{2}\log 2\Big[x-\log(1+x)\Big]_0^1+\dfrac{1}{2}\Big[(x-\log(1+x))\log(1+x^2)\Big]_0^1-\Big[x-\arctan x\Big]_0^1+L=\\-\dfrac{7}{8}\Big(\log 2\Big)^2+\log 2+\dfrac{\pi^2}{96}+\dfrac{\pi}{4}-1$
-Let $\displaystyle Q=\dfrac{1}{2\sqrt{2}}\int_0^{\sqrt{2}}\dfrac{2a\log(a+\sqrt{2})-\sqrt{2}\log 2-a\log(a^2+2)}{a-\sqrt{2}}da$
-Perform the change of variable $x=1-\dfrac{a}{\sqrt{2}}$:
-$\displaystyle Q=\int_0^1 \left(\dfrac{\log 2}{2x}-\dfrac{\log(2-x)}{x}+\log(2-x)+\dfrac{1}{2}\left(\dfrac{1}{x}-1\right)\log(x^2-2x+2)\right)dx$
-$\displaystyle Q=\int_0^1 \log(2-x)dx-\dfrac{1}{2}\int_0^1\log(x^2-2x+2)dx+\int_0^1 \dfrac{1}{2x}\log\left(\dfrac{x^2}{2}-x+1\right)dx-\int_0^1 \dfrac{\log\left(1-\dfrac{x}{2}\right)}{x}dx$
-$\displaystyle\int_0^1 \log(2-x)dx=\Big[(x-2)\log(2-x)-x\Big]_0^1=2\log 2-1$
-$\displaystyle -\dfrac{1}{2}\int_0^1\log(x^2-2x+2)dx=\left[-\dfrac{1}{2}x\log(x^2-2x+2)+\dfrac{1}{2}\log(x^2-2x+2)-\arctan(x-1)+x\right]_0^1=\\
--\dfrac{1}{2}\log 2-\dfrac{\pi}{4}+1$
-$\displaystyle \int_0^1 \dfrac{-\log\left(1-\dfrac{x}{2}\right)}{x}dx= \\
-\displaystyle\int_0^1\left(\dfrac{1}{x}\sum_{n=1}^{+\infty} \dfrac{1}{n}\left(\dfrac{x}{2}\right)^n\right)dx=\displaystyle \sum_{n=1}^{+\infty} \dfrac{1}{2^n n}\int_0^1 x^{n-1}dx=\sum_{n=1}^{+\infty} \dfrac{1}{2^n n^2}=Li_2\left(\dfrac{1}{2}\right)=\dfrac{\pi^2}{12}-\dfrac{\Big(\log 2\Big)^2}{2}$
-Let $R=\displaystyle\int_0^1 \dfrac{\log\left(\dfrac{1}{2}x^2-x+1\right)}{2x}dx$
-Define for $a\in [0,1]$,
-$\displaystyle S(a)=\int_0^1 \dfrac{\log(ax^2-2ax+1)}{2x}dx$
-Note that $S\left(\dfrac{1}{2}\right)=R$ and $S(0)=0$
-$\displaystyle S'(a)=\int_0^1 \dfrac{x-2}{2(ax^2-2ax+1)}dx$
-$\displaystyle S'(a)=\left[\dfrac{\log(ax^2-2ax+1)}{4a}-\dfrac{1}{2\sqrt{a(1-a)}}\arctan\left(\sqrt{\dfrac{a}{1-a}}(x-1)\right)\right]_0^1$
-$\displaystyle S'(a)=\dfrac{\log(1-a)}{4a}-\dfrac{1}{2\sqrt{a(1-a)}}\arctan\left(\sqrt{\dfrac{a}{1-a}}\right)$
-Perform the change of variable $x=2a$,
-$\displaystyle \int_0^{\tfrac{1}{2}} \dfrac{\log(1-a)}{4a} da=\int_0^1 \dfrac{\log\left(1-\dfrac{x}{2}\right)}{4x}dx=\dfrac{\Big(\log 2\Big)^2}{8}-\dfrac{\pi^2}{48}$
-$\displaystyle \int_0^{\tfrac{1}{2}} \dfrac{-1}{2\sqrt{a(1-a)}}\arctan\left(\sqrt{\dfrac{a}{1-a}}\right)da=-\left[\dfrac{1}{2}\left(\arctan\left(\sqrt{\dfrac{a}{1-a}}\right)\right)^2\right]_0^{\tfrac{1}{2}}=-\dfrac{\pi^2}{32}$
-Thus,
-$R=\dfrac{\Big(\log 2\Big)^2}{8}-\dfrac{5\pi^2}{96}$
-$Q=\dfrac{\pi^2}{32}-\dfrac{\pi}{4}+\dfrac{3\log 2}{2}-\dfrac{3\Big(\log 2\Big)^2}{8}$
-$P=\dfrac{\Big(\log 2\Big)^2}{4}+\dfrac{\pi^2}{24}$
-$B=\dfrac{\pi\Big(\log 2\Big)^2}{8}+\dfrac{7\pi^3}{96}$
-And finally,
-$J=\dfrac{11\pi\log 2}{8\sqrt{2}}-\dfrac{7\pi\Big(\log 2\Big)^2}{8\sqrt{2}}+\dfrac{35\pi^3}{96\sqrt{2}}-\dfrac{25\pi}{16\sqrt{2}}$<|endoftext|>
-TITLE: The Reals are not interpretable in the complex numbers
-QUESTION [6 upvotes]: Let
$L=\{+,\cdot,0,1\}$ be the language of fields. I wish to show that the reals ($N=(\mathbb{R},+,\cdot,0,1)$) are not interpretable in the structure $M = (\mathbb{C},+,\cdot,0,1)$.
-I have the following solution to show that $N$ is not definable in $M$: Suppose that the reals were definable in $M$. Then we can write a formula that defines a linear order inside of $M$, exploiting the fact that $<$ is definable in $N$. This gives a contradiction, since $Th(M)$ is $\aleph_1$-categorical and hence $\omega$-stable, but the above implies it is unstable.
-I believe the same proof holds with definability replaced with interpretability.
-My question is: Is there a more elementary proof of this fact?
-Edit 1: I saw a lot of comments below. I'm sorry about the typo. It has been corrected, but it probably read $Th(N)$ where it should have been $Th(M)$.
-
-REPLY [9 votes]: OK, here's an argument that avoids stability theory:
-Suppose we could interpret $\mathbb{R}$ in a field $K\models \text{ACF}_0$. Note that $K$ must be uncountable. By elimination of imaginaries in $\text{ACF}_0$, we may assume that the domain of the interpretation is a definable subset of $K$, defined by $\varphi(\overline{x})$, rather than a quotient of a definable set.
-Let $A$ be the finite set of parameters used in the interpretation (i.e. in $\varphi$, as well as in the formulas defining the field operations on the interpreted copy of $\mathbb{R}$), and let $F$ be the algebraic closure in $K$ of the subfield of $K$ generated by $A$. Note that $F$ is countable, and for any $b\in K\setminus F$, there is an automorphism of $K$ which fixes $F$ pointwise but moves $b$ (extend $\{b\}$ to a transcendence basis for $K$ over $F$, and permute the basis).
-Now since $\mathbb{R}$ is uncountable, there is a tuple $\overline{a}$ satisfying $\varphi(\overline{x})$ such that one of the entries $a_i$ of the tuple is not in $F$. Let $\sigma\in \text{Aut}(K)$ fix $F$ pointwise but move $a_i$. Then $\sigma$ induces an automorphism of the interpreted copy of $\mathbb{R}$ which moves $\overline{a}$. This contradicts the fact that $\mathbb{R}$ has no nontrivial automorphisms, QED.
-Note that this argument does not show that $\text{Th}(\mathbb{R})$ is not interpretable in $\text{Th}(\mathbb{C})$, just that no uncountable rigid model of this theory (e.g. $\mathbb{R}$ or any uncountable subfield) is in the image of any such interpretation. Hence it also shows that $\text{Th}(\mathbb{R})$ and $\text{Th}(\mathbb{C})$ are not bi-interpretable.
-As explained in the question, $\text{Th}(\mathbb{R})$ is not in fact interpretable in $\text{Th}(\mathbb{C})$ - it would be interesting to see a proof of this which avoids stability theory.<|endoftext|>
-TITLE: Discriminant of Elliptic Curves
-QUESTION [8 upvotes]: In the study of elliptic curves, specifically in Weierstrass form, you have the equation
-$E : y^2 = x^3 +ax +b$.
-However, I have found that the discriminant comes in two different forms:
-$\Delta = -16(4a^3 + 27b^2) $ or $\Delta = 4a^3 + 27b^2$
-I understand how to get the second equation, but where does the $-16$ come from?
-From the Wiki page: "Although the factor −16 is irrelevant to whether or not the curve is non-singular, this definition of the discriminant is useful in a more advanced study of elliptic curves."
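-A quick symbolic check of where the $-16$ shows up is easy to run; the following is a minimal sympy sketch (the $b_i$ names are the standard Weierstrass invariants, which the answer below also defines) verifying that the general discriminant specializes to $-16(4a^3+27b^2)$ for $y^2=x^3+ax+b$:
-```python
-import sympy as sp
-
-a, b = sp.symbols('a b')
-
-# Weierstrass coefficients for y^2 = x^3 + a*x + b:
-a1, a2, a3, a4, a6 = 0, 0, 0, a, b
-
-# standard b-invariants
-b2 = a1**2 + 4*a2
-b4 = 2*a4 + a1*a3
-b6 = a3**2 + 4*a6
-b8 = a1**2*a6 + 4*a2*a6 - a1*a3*a4 + a2*a3**2 - a4**2
-
-# general Weierstrass discriminant
-Delta = -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6
-
-print(sp.factor(Delta))  # -16*(4*a**3 + 27*b**2)
-```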
-
-REPLY [9 votes]: A cubic over $k$ in Weierstrass form (affine form) is given by $$y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6.$$ The discriminant is defined by $$\Delta = -b_2^2b_8-8b_4^3-27b_6^2+9b_2b_4b_6,$$ where $ b_2=a_1^2+4a_2$, $ b_4=2a_4+a_1a_3 $, $b_6=a_3^2+4a_6$ and $b_8 = a_{1}^{2} a_{6}+4 a_{2} a_{6}-a_{1} a_{3} a_{4}+a_{2} a_{3}^{2}-a_{4}^{2}$.
-Finally, an elliptic curve over $k$ is a cubic in Weierstrass form where $\Delta \neq 0$ (i.e., a non-singular cubic in Weierstrass form).
-We can make some substitutions to simplify the equation of a cubic in Weierstrass form; the first assumes $char(k)$ is not $2$. Replacing $y$ by $ \frac{1}{2} \left(y-a_1x-a_3\right)$, the result is $$y^2=4x^3+b_2x^2+2b_4x+b_6.$$
-The second assumes in addition that $char(k) \neq 3$. Replace $(x,y)$ by $\left( \frac{x-3b_2}{36}, \frac{y}{108} \right) $, and the result is $$y^2=x^3-27c_4x-54c_6,$$ where $c_4=b_2^2-24b_4$ and $c_6=-b_2^3+36b_2b_4-216b_6.$
-Moreover, when $char(k)$ is not $2$ or $3$, we have $$ 1728\Delta=c_4^3 - c_6^2. $$
-Now, consider the cubic $y^2=x^3+ax+b$ over $k$. If $char(k)$ is not $2$ or $3$, we have $c_4=-48a $ and $c_6=-864b$, so $$\Delta = \frac{(-48a)^3-(-864b)^2}{1728} = -16(4a^3+27b^2).$$
-Thus, assuming that $char(k)\neq2$ and $char(k) \neq 3$, an elliptic curve over $k$ is given by $$y^2=x^3+ax+b,$$ where $\Delta=-16(4a^3+27b^2) \neq 0$.
-Note that $\Delta=-16(4a^3+27b^2) = 0$ if, and only if, $4a^3+27b^2 = 0$, because $16=2^4 \neq 0$ in $k$ with $char(k) \neq 2$. Thus, the factor $−16$ is irrelevant in this case.
-See Chapter III of the book 'Elliptic Curves' by Anthony Knapp for more information.<|endoftext|>
-TITLE: Chinese New Year Equation 2016
-QUESTION [7 upvotes]: In the spirit of Chinese New Year, here's a problem to commemorate the year.
-
-$\color{black}{\text{Solve the following equation for positive integers $a$ and $b$:}}$
- $$\color{red}{a^2+b^2+(a+8)^2+(b+8)^2=100a+b}$$
-
-
-Edit 1
-As this question has been put on hold and the OP asked to improve the question by "providing additional context", please find below some additional details.
-Having read about Lagrange's four-square theorem and after some work, I noticed an interesting four-square combination for the year. Hence I thought I would formulate it as a problem - which turns out to be a diophantine equation - to see what different approaches there might be. A straightforward approach would be to rearrange the terms into the standard circle form
-$(a-21)^2+\left(b+\frac {15}4\right)^2=\left(\frac{\sqrt{6257}}4\right)^2\approx19.77^2$ and then testing integer values of $a$ within $21\pm 19.77$, but there should be other more interesting approaches.
-Hopefully the explanation above will be sufficient for the question to be reopened. Moderators - please advise if there is additional context required.
-Thanks.
-
-Edit 2
-Thanks for the nice answers by Hagen and Daniel. The interesting point to note here is that:
-
-$$\color{red}{20}^2+\color{red}{16}^2+(\color{red}{20}+8)^2+(\color{red}{16}+8)^2=\color{red}{2016}$$
-
-$\color{black}{\text{To all readers who celebrate, a very Happy Chinese New Year!!}}$
-
-Note: This question has been put on hold. If you find it interesting or useful, please vote to reopen it (by clicking on "reopen" at the bottom of this post), so that others can post their solutions. Thanks!
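-For orientation, a brute-force search takes only a few lines of Python (a sketch; the bounds $a\le 41$ and $b\le 16$ follow from the circle form in Edit 1):
-```python
-# Search a^2 + b^2 + (a+8)^2 + (b+8)^2 = 100a + b over positive integers.
-solutions = [(a, b)
-             for a in range(1, 42)
-             for b in range(1, 17)
-             if a*a + b*b + (a + 8)**2 + (b + 8)**2 == 100*a + b]
-print(solutions)  # [(20, 16), (22, 16)]
-```
-Both pairs satisfy the equation; the answer below shows they are the only ones.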
-
-REPLY [2 votes]: First write it as $$(a-21)^2 +(b+\frac{15}{4})^2= \frac{6257}{16} = 1+\frac{79^2}{4^2}$$
-$$\Rightarrow (a-21)^2-1 = \frac{79^2}{4^2}-(b+\frac{15}{4})^2$$
-Now use difference of squares to get
-$$(a-22)(a-20) = \bigg( \frac{4b+94}{4}\bigg)\bigg( \frac{64-4b}{4}\bigg)$$
-We see we get corresponding positive integer solutions in $b$ for $a = 22$ and $a = 20$ (since we only need one of the factors on the RHS to be $0$), namely $b = 16$.
-Else we need the RHS to be an integer (since the LHS clearly is), which will happen only when $$(4b+94)(64-4b) \equiv 0 \pmod{16} $$ $$\Rightarrow b \equiv 0 \pmod 2$$
-However it is easy to check that we get no new solutions from this. To see this, write $b=2n$ and check that $$(a-21)^2 = \frac{6257}{16}-(2n+\frac{15}{4})^2$$ has no further solutions in positive integers.
-$$\color{red}{\text{Happy 2016!}}$$<|endoftext|>
-TITLE: All $\mathsf{Set}$-valued presheaves on a cocomplete category are representable
-QUESTION [6 upvotes]: We often hear that, for a category $\mathcal{C}$, the presheaf category $\mathsf{PSh}(\mathcal{C})=[\mathcal{C}^{\text{op}},\mathsf{Set}]$ is (up to equivalence of categories) the free colimit completion of $\mathcal{C}$.
-See, for example, the last point in this Wikipedia article.
-If $\mathcal{C}$ is cocomplete then is it true that every presheaf is representable?
-We can assume that $\mathcal{C}$ is locally small, or even small, if needs be.
-This sounds very farfetched to me, and so in the likely event that it's wrong, what conditions can we place on $\mathcal{C}$ to ensure that every presheaf is representable?
-Or is it the case that all such conditions are really so strong that we lose 'a lot' (whatever that might mean to you) of generality?
-
-As a sketch inspiration (which I am not overly convinced is without mistakes), assume that $\mathcal{C}$ is bicomplete, total, and small.
-We use freely some of the results from here.
-Then it can be shown that the continuous presheaves $\mathcal{C}^\text{op}\to\mathsf{Set}$ are exactly the representable presheaves, and the fact that the Yoneda embedding $Y\colon\mathcal{C}\hookrightarrow\mathsf{Cont}(\mathcal{C}^\text{op},\mathsf{Set})$ has a left adjoint (by totality) tells us that it also has a right adjoint, and so $Y$ preserves colimits.
-Any presheaf $F$ is the colimit of representable presheaves.
-Say (for ease of notation) that $F=\mathrm{colim}_iF_i$ where $F_i=\mathrm{Hom}(-,c_i)$ for $c_i\in\mathcal{C}$.
-Define (since $\mathcal{C}$ is cocomplete) $c=\mathrm{colim}_ic_i$.
-Then
-\begin{align*}
-F &= \mathrm{colim}_iF_i\\
-&= \mathrm{colim}_iY(c_i)\\
-&= Y(\mathrm{colim}_ic_i) = Y(c),
-\end{align*}
-and so $F$ is representable.
-
-REPLY [8 votes]: No, it is not true. For example, take the presheaf $F$ on $\mathsf{Set}$ itself (which is bicomplete, locally small, everything we could want) given by $F(X) = \{0,1\}$ for all sets $X$ (and all maps are mapped to the identity). Then $F$ is not representable, because there is no set $Z$ such that $\operatorname{Map}(X,Z)$ has two elements for all sets $X$.
-The flaw in your argument can be made evident here: $F$ itself is the coproduct (as a functor!) of the representable presheaf $\operatorname{Map}(-, *)$ with itself, where $* \in \mathsf{Set}$ is some singleton, i.e. $F \cong Y(*) \sqcup Y(*)$. But $F$ is not isomorphic to $Y(* \sqcup *)$. The fact is, $Y$ does not preserve colimits: it preserves limits.
It's not true that $\operatorname{Hom}(-, \operatorname{colim}_i X_i) \cong \operatorname{colim}_i \operatorname{Hom}(-, X_i)$, but it's true that $\operatorname{Hom}(-, \lim_i X_i) \cong \lim_i \operatorname{Hom}(-, X_i)$.
-
-REPLY [3 votes]: No. The Yoneda embedding preserves almost no colimits: the colimits you get are "free" and so have nothing a priori to do with any colimits that already exist. Work out explicitly what a coproduct of representable presheaves looks like: it is almost never representable.
-Your argument cannot work for the simple reason that the category of presheaves is never (essentially) small. Also, Freyd showed that any cocomplete small category is a preorder, so even if your argument worked it would only apply, essentially, to suplattices.<|endoftext|>
-TITLE: Is there an explicit left invariant metric on the general linear group?
-QUESTION [18 upvotes]: Let $\operatorname{GL}_n^+$ be the group of real invertible matrices with positive determinant.
-
-Can we construct an explicit formula for a metric on $\operatorname{GL}_n^+$ which is left-invariant, i.e.
-
-$$d(A,B)=d(gA,gB) \, \,\forall A,B,g \in\operatorname{GL}_n^+$$
-and which induces the standard topology on $\operatorname{GL}_n^+$.
-(Without the last requirement the discrete metric will do).
-
-Even finding a concrete example of a metric which is only "scale invariant" (i.e. $d(A,B)=d(rA,rB) \, \,\forall r \in \mathbb{R}$) would be nice; actually, even finding a metric which is invariant under multiplication by $r=2$ seems non-trivial.
-A Riemannian approach:
-It's easy to prove existence of left-invariant metrics: Just left-translate any metric on the tangent space at the identity. The problem is that this usually does not induce an explicit formula for the distance.
-One can take the left translation of the standard Frobenius metric on $T_I\operatorname{GL}_n^+ \simeq M_n$, and use its induced distance. I don't know how to compute this distance explicitly:
-For any symmetric positive-definite matrix $P$, $$d(I,P)=\|\log P\|_F \tag{*},$$
-where $\|\cdot \|_F$ is the Frobenius norm, and $\log P$ is the unique symmetric logarithm of the matrix $P$.
-This is proved here in section 3.3.
-The point is that it's easier to compute $d(A,\operatorname{SO}(n))$ than $d(A,B)$: A minimizing geodesic from a point to a submanifold must intersect that submanifold orthogonally, hence we have more constraints on its velocity; this simplifies the analysis, and makes it tractable.
-It can be shown that
-$$ d(A,\operatorname{SO}(n))=d(A,Q(A))=\|\log \sqrt{A^TA}\|_F,$$ where $Q(A)$ is the orthogonal polar factor of $A$. In particular for positive matrices $P$, $Q(P)=I$, so we obtain $(*)$.
-As far as I know, computing the distance $d(I,X)$ for an arbitrary $X \in \operatorname{GL}_n^+$ is open.
-Additional partial results:
-Any such left invariant metric is determined by $f(X)=d(X,I)$, since
-$$ d(A,B)=d(I,A^{-1}B)=f(A^{-1}B) \tag{1}$$
-Rephrasing the requirements from a metric in terms of $f$, we see that if $d$ is given in terms of $f$ as in $(1)$, then $d$ is a metric if and only if
-Positivity: $f(X)=0 \iff X=I \tag{2}$
-Symmetry: $f(X)=f(X^{-1}) \tag{3}$
-Triangle inequality: $f(XY) \le f(X) + f(Y) \tag{4}$
-Thus, we obtained an equivalent formulation of the problem:
-
-Find a non-negative function $f:\operatorname{GL}_n^+ \to \mathbb{R}$ satisfying requirements $(2)-(4)$.
-
-Reduction of the problem to $\operatorname{SL}_n^+$:
-Consider $f(X)=|\ln (\det X)|$.
$\, \,f$ satisfies $(3),(4)$, and $f(X)=0 \iff X \in \operatorname{SL}_n^+$.
-Now, suppose we constructed a function $\tilde f:\operatorname{SL}_n^+ \to \mathbb{R}^+$ satisfying $(2)-(4)$ above.
-Then, by defining
-$$ \hat f(X)=f(X)+\tilde f(\frac{X}{\det(X)^{\frac{1}{n}}})$$
-it is easy to see that $\hat f:\operatorname{GL}_n^+ \to \mathbb{R}$ also satisfies $(2)-(4)$ as required.
-
-Discussion:
-Is it true that in some sense the space of left-invariant metrics is "finite-dimensional"? (I refer to arbitrary metrics, not just those which are induced by Riemannian metrics).
-To make this notion more precise, some care should be taken. For instance, there are ways to generate new invariant metrics from old ones (e.g. by applying the map $d \to \sqrt{d}$), but for this discussion we can identify two metrics if one is a function of the other.
-Edit:
-I now think that this space is always infinite-dimensional. Since any left translation of a smooth norm will induce a Finsler norm, and the space of smooth norms is not 'finite-dimensional' in any reasonable way, the space of metrics is also infinite-dimensional. (Since different Finsler norms give rise to different induced distances).
-
-For $n=1$, $\operatorname{GL}_n^+=\mathbb{R}^{>0}$, and the formula
-$d(x,y)=|\ln(\frac{y}{x})|$ does the job. (It is in fact induced by the Riemannian metric obtained from left translation of the standard metric on $T_1\mathbb{R}$). The obvious problem with generalizing this to higher dimensions is that there is no global matrix logarithm on $\operatorname{GL}_n^+$; one has to choose a branch.
-
-REPLY [8 votes]: Let $P$ be a convex polytope in $R^n$ whose interior contains $0$ and which has no nontrivial linear symmetries (i.e. if $A$ is an invertible linear map, $AP=P$ implies that $A$ is the identity). In particular, $P\ne -P$. You can easily construct such $P$ by taking a suitable simplex or a cube. Then $P$ defines a nonsymmetric norm $||\cdot||$ on
-$R^n$ for which $P$ is the unit ball, by the usual procedure: $||v||=t$, where $t\in R_+$ is such that $t^{-1}v$ is on the boundary of $P$, and setting $||0||=0$. Using this norm we define the standard operator norm on linear endomorphisms of $R^n$:
-$$
-||A||= \max \{||Av||: v\in P\}.
-$$
- This norm satisfies $||AB||\le ||A||\cdot ||B||$ and for every invertible matrix $A$,
-$$
-g(A)=\max(||A||, ||A^{-1}||)\ge 1$$
-with equality if and only if $A=I$, the identity matrix.
- The function $g$ is "explicit" in the sense that $||A||$ is easily computable:
- It equals the maximum of the norms $||Av_i||$, where $v_i$'s are the vertices of $P$.
- Now, set $f(A):= \log(g(A))$. This is your function. (All the required properties are clear.) If it is of any use, I do not know.<|endoftext|>
-TITLE: Where is the error in this proof of the Hodge theorem?
-QUESTION [6 upvotes]: Let $(M,g)$ be a closed smooth Riemannian manifold. The following is the decomposition part of the Hodge theorem:
-
-Theorem
-The canonical map $\mathscr{H}^k(M)\to H^k(M)$ from harmonic $k$-forms into the de Rham cohomology is an isomorphism.
-
-Let us consider $\Omega^*(M)\otimes \mathbb C$ with the following scalar product:
-$$\langle\omega,\eta\rangle := \int_M \omega \wedge *\eta$$
-This is a pre-Hilbert space and from the definition $d^*$ is the adjoint of $d$.
Since $\Delta = d^\mathstrut d^* + d^*d^\mathstrut=(d^\mathstrut+d^*)^2$, if $(d^\mathstrut + d^*)\omega \neq 0$ then from $$\langle \omega, \Delta \omega \rangle =\langle (d^\mathstrut + d^*) \omega, (d^\mathstrut + d^*)\omega\rangle =\|(d^\mathstrut + d^*)\omega\|^2$$ -you get that $\Delta \omega$ also is not zero and $\ker(\Delta)=\ker(d^\mathstrut+d^*)$. Now suppose $\omega \notin \ker(d)$: -$$\langle d\omega , (d^\mathstrut + d^*) \omega \rangle = \langle d \omega, d\omega \rangle + \langle d^2 \omega, \omega \rangle = \|d\omega\|^2$$ -And we get $\omega \not\in \ker(\Delta)$. Doing the same with $\ker(d^*)$ you get $\ker(\Delta)\subset\ker(d)\cap\ker(d^*)$. This means that the inclusion defined above is well defined. -On the other hand if $\omega \in \ker(d)\cap\ker(d^*)$ cleary $(d^\mathstrut + d^*)\omega=0$. So a form is harmonic if and only if it lies in the joint kernel of $d$ and $d^*$. -Write $\ker(d)=\text{im}(d)\oplus\text{im}(d)^\perp$, where $\text{im}(d)^\perp$ is the subspace of $\ker(d)$ (not $\Omega^*$) that is orthogonal to $\text{im}(d)$. -If $\omega$ is harmonic then $\omega \in \ker(d^*)$ and $\langle d \eta, \omega \rangle=\langle \eta, d^*\omega \rangle=0$ and $\omega$ lies in $\text{im}(d)^\perp$. On the other hand if $\omega \in \text{im}(d)^\perp$ then $0=\langle d^\mathstrut d^*\omega, \omega \rangle = \langle d^* \omega, d^* \omega\rangle$ and $\omega$ lies in $\ker(d^*)$. -This implies that the space of harmonic forms is equal to $\text{im}(d)^\perp$, which implies the theorem above. - -My problem is that this is a very elementary proof, it uses only that $\langle,\rangle$ is positive definite, completeness of the vector space is not needed. Yet in most discussions one sees constant reference to the fact that Hodge theorem is non-trivial and uses involved results about the theory of elliptic operators, but this was not used in this derivation. -Is there an error in the derivation or a missed subtlety? Or does this part of the Hodge theorem (which to me seems the more interesting part) indeed just follow from elementary considerations? - -REPLY [5 votes]: As Daniel Fischer pointed out in the comments, the gap in your proof is the claim that $\operatorname{ker}(d) = \operatorname{im}(d) \oplus \operatorname{im}(d)^\perp$. But it should be noted that simply working in a complete space is not enough to justify this claim; it's necessary to prove that $\operatorname{im}(d)$ is closed. All of the operators $\Delta$, $d$, and $d^*$ are unbounded with respect to the $L^2$ norm, and it's perfectly possible for such an operator to have non-closed image. -The real work in proving the Hodge theorem is proving that $\Delta$ is a Fredholm operator on the $L^2$ completion of $\Omega^*(M)$ (i.e., it has finite-dimensional kernel and closed image). From this it follows rather easily that both $d$ and $d^*$ have closed image, from which you can conclude that $\operatorname{ker}(d) = \operatorname{im}(d) \oplus \operatorname{im}(d)^\perp$. Then you can use elliptic regularity (i.e., if $\Delta \omega$ is smooth, then so is $\omega$) to transfer these results back to $\Omega^*(M)$.<|endoftext|> -TITLE: Eigenvalues of $MA$ versus eigenvalues of $A$ for orthogonal projection $M$ -QUESTION [10 upvotes]: Suppose that $M$ is symmetric idempotent $n\times n$ and has rank $n-k$. Suppose that $A$ is $n\times n$ and positive definite. 
Let $0<\nu_1\leq\nu_2\leq\cdots\leq\nu_{n-k}$ be the nonzero eigenvalues of $MA$ and $0<\lambda_1\leq\lambda_2\leq\cdots\leq\lambda_n$ be the eigenvalues of $A$. I'm trying to show that
- $$
-\forall i=1,\ldots,n-k:\quad 0<\lambda_i\leq\nu_i\leq\lambda_{i+k}\tag{$*$}
-$$
- There will be a 300-point bounty for the accepted answer. Can someone also please mark all the (attempted) proofs below as spoilers? I can only do that for the first proof.
-
-Attempt: I have an attempt here using Durbin and Watson (1950) but I don't fully understand the authors' argument so the attempt is incomplete. Nonetheless, I'll present the attempt here. Step 3 below is where I am stuck.
-
-Step 1: One can write $M$ as $M_kM_{k-1}\cdots M_1$ where $M_i=I_n-p_ip_i'$ and $\{p_1,\ldots,p_k\}$ is a set of $n\times 1$ mutually orthogonal vectors s.t. $||p_i||=1$.
-
- Proof. $M$, by assumption, can be written as $M=I_n-X(X'X)^{-1}X'$ where $X$ is $n\times k$ with full column rank. Let $P=(p_1,\ldots,p_k)$ (dimension $n\times k$) be the $Q$ factor of the QR decomposition of $X$.
-
-Step 2: Let $T=(T_1,\ldots,T_n)$ be an $n\times n$ matrix of orthonormal eigenvectors of $A$ corresponding to the eigenvalues $\lambda_1,\ldots,\lambda_n$. Let $l_{1i}=T_i'p_1$. Then any nonzero eigenvalue $\theta$ of $M_1A$ satisfies
-$$
-\sum_{i=1}^nl_{1i}^2\prod_{j\neq i}(\theta-\lambda_j)=0.\tag{$**$}
-$$
-
-Proof. For any eigenvalue (possibly $0$) $\theta$ of $M_1A$, we have
- $$
-0=|I_n\theta-M_1A|=|I_n\theta-(I_n-p_1p_1')A|=|I_n\theta-(I_n-l_1l_1')\Lambda|
-$$
- Here, $l_1$ is the $n\times 1$ column vector with entries $l_{1i}$ and $\Lambda=\text{diag}(\lambda_1,\ldots,\lambda_n)$. Write out $I_n\theta-(I_n-l_1l_1')\Lambda$ in full. Subtract $l_{12}/l_{11}$ times the first row from the second row, $l_{13}/l_{11}$ times the first row from the third row, and so on, and then execute the Laplace expansion along the first row. The result is
- $$
-0=|I_n\theta-(I_n-l_1l_1')\Lambda|=\prod_{j=1}^n(\theta-\lambda_j)+\sum_{i=1}^nl_{1i}^2\lambda_{i}\prod_{j\neq i}(\theta-\lambda_j).
-$$
- Plugging $\theta=0$ in the rightmost expression above gives $\sum_{i=1}^nl_{1i}^2=1$. Thus,
- \begin{align*}
-0&=\sum_{i=1}^nl_{1i}^2\prod_{j=1}^n(\theta-\lambda_j)+\sum_{i=1}^nl_{1i}^2\lambda_{i}\prod_{j\neq i}(\theta-\lambda_j)\\
-&=\sum_{i=1}^nl_{1i}^2(\theta-\lambda_i)\prod_{j\neq i}(\theta-\lambda_j)+\sum_{i=1}^nl_{1i}^2\lambda_{i}\prod_{j\neq i}(\theta-\lambda_j)
-\end{align*}
- which can be simplified and, for $\theta\neq 0$, divided by $\theta$ to get ($**$).
-
-Step 3: Let $0=\cdots =0<\theta_1^{(s)}\leq \theta_2^{(s)}\leq\cdots\leq \theta_{n-s}^{(s)}$ be the eigenvalues of $M_sM_{s-1}\cdots M_1A$. Then,
-$$
-\forall s=1,\ldots,k:\quad \theta_i^{(s-1)}\leq\theta_i^{(s)}\leq \theta_{i+1}^{(s-1)},\quad i=1,\ldots,n-s.\tag{$***$}
-$$
-Here the $\lambda_i$'s are taken to be the $\theta_i^{(0)}$'s.
-
-Proof. Let's build the first step for the case $s=1$. Consider
- $$
-f(\theta)=\sum_{i=1}^nl_{1i}^2\prod_{j\neq i}(\theta-\lambda_j)
-$$
- and consider $[\lambda_r,\lambda_{r+1}]$ for $r=1,\ldots,n-1$. Either $f(\lambda_r)=0$ or $f(\lambda_{r+1})=0$ or $f(\lambda_r)f(\lambda_{r+1})\neq 0$. It's easy to show that in general $f(\lambda_r)f(\lambda_{r+1})\leq 0$ and so if $f(\lambda_r)f(\lambda_{r+1})\neq 0$ then $f(\lambda_r)f(\lambda_{r+1})< 0$ and so by the Intermediate Value Theorem, there is a zero of $f$ in $(\lambda_r,\lambda_{r+1})$. In sum, there is a zero of $f$ in each $[\lambda_r,\lambda_{r+1}]$ for each $r=1,\ldots,n-1$.
It follows that
- $$
-0<\lambda_1\leq\theta_1^{(1)}\leq\lambda_2\leq \theta_2^{(1)}\leq\cdots\leq \theta_{n-1}^{(1)}\leq\lambda_n.
-$$
- This proves ($***$) for $s=1$. Proceed with $M_2M_1A$ as $M_2(M_1A)$ to get ($***$) for $s=2$. And so on.
-
-Step 4:
-($*$) holds.
-
-Proof. By Step 3, for $i=1,\ldots,n-k$,
- $$
-\nu_i=\theta_i^{(k)}\geq \theta_i^{(k-1)}\geq \cdots \geq \theta_i^{(1)} \geq \theta_i^{(0)}=\lambda_i.
-$$
- Similarly,
- $$
-\nu_i=\theta_i^{(k)}\leq \theta_{i+1}^{(k-1)}\leq \cdots \leq \theta_{i+k-1}^{(1)} \leq \theta_{i+k}^{(0)}=\lambda_{i+k}.
-$$
-
-Problem with Step 3. The case for $s=1$ and $M_1A$ relies on $A$ being diagonalizable. I don't think the same argument works for $M_2(M_1A)$ because we don't know the diagonalizability of $M_1A$. So I don't think the induction step in Step 3 works (Durbin and Watson (1950) claim it does). Moreover, while I'm confident in the Intermediate Value Theorem argument, I'm not confident about the subsequent claim:
-It follows that
-$$
-0<\lambda_1\leq\theta_1^{(1)}\leq\lambda_2\leq \theta_2^{(1)}\leq\cdots\leq \theta_{n-1}^{(1)}\leq\lambda_n.
-$$
-
-REPLY [3 votes]: As $M$ is symmetric idempotent, with respect to its orthonormal eigenbasis, you may assume that $M=I_{n-k}\oplus0$. Then the eigenvalues of $MA$ are just the eigenvalues of the leading principal $(n-k)\times(n-k)$ submatrix of $A$. So, essentially, the inequality in question relates the eigenvalues of a positive definite matrix $A$ to the eigenvalues of its principal submatrix. This actually is a well-known result that is not only true for positive definite matrices, but for all Hermitian matrices. See, e.g. Theorem 4.3.15 (p. 189) of Horn and Johnson, Matrix Analysis, 1/e, Cambridge University Press, 1985.<|endoftext|>
-TITLE: Mentally calculating trigonometric function values such as $\sin(47^\circ)$
-QUESTION [7 upvotes]: This may sound dumb, but is there a way to mentally (and quickly) determine the values of trigonometric functions such as $\sin(47^\circ)$ and so forth--quickly being a mere matter of seconds? My physics teacher suggested to our class that it is in fact possible, though I see no other ways apart from memorization and the standard methods involving triangles or trig identities (which are decidedly not quick).
-I'm on my high school's 'mathletes' team, and though it's unnecessary minutiae I thought that it would be a fun thing to share with my teammates. That, and calculating logarithms mentally, though it appears that the means for doing so has already been answered on this site. (Feel free to comment on the logarithm issue as well, however, if you do have a particularly clever method that you'd be willing to share.)
-Parameters: to 2-3 accurate decimal places is ideal. I'm not quite sure how that corresponds %-wise.
-
-REPLY [5 votes]: For sine in degrees, you can get an estimation using this formula (where x is a number of degrees from 0 to 90):
-$y=\frac{10,000-((100-x)^2)}{10,000}$
-Simplified, this is just the following steps:
-
-Subtract the number of degrees from 100
-Square that number
-Subtract the result from 10,000.
-Divide step 3's result by 10,000.
-
-Note that, for step 2, you should be familiar with methods for squaring 2-digit numbers quickly.
-Let's try sin(47°) with this method:
-
-$100 - 47 = 53$
-$53^2 = 2,809$
-$10,000 - 2,809 = 7,191$
-$\frac{7,191}{10,000} = 0.7191$
-
-sin(47°), in actuality, is roughly 0.7314, so the above method isn't perfect.
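-A few lines of Python reproduce the shortcut and its error (a minimal sketch of the recipe above, nothing more):
-```python
-import math
-
-def sin_estimate(deg):
-    """Mental-math estimate of sin(x) for x in degrees, 0 <= x <= 90."""
-    return (10_000 - (100 - deg)**2) / 10_000
-
-for deg in (30, 47, 60, 90):
-    print(f"sin({deg} deg): estimate {sin_estimate(deg):.4f}, "
-          f"actual {math.sin(math.radians(deg)):.4f}")
-```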
Here's the error margin (red=sin(x), blue=approximation, green=error) on this calculation, ranging from -0.019 to +0.03.
-
-Similarly, for cos in degrees from 0 to 90, you can use a similar formula:
-$y=\frac{10,000-((x+10)^2)}{10,000}$
-This breaks down into:
-
-Add 10 to the number of degrees
-Square that number
-Subtract the result from 10,000.
-Divide step 3's result by 10,000.
-
-Let's estimate cos(47°):
-
-$47 + 10 = 57$
-$57^2 = 3,249$
-$10,000 - 3,249 = 6,751$
-$\frac{6,751}{10,000} = 0.6751$
-
-cos(47°) is roughly 0.682, so an estimate of 0.6751 isn't that bad. This shortcut has the same margin of error, from -0.019 to +0.03.
-
-Obviously, these approaches put more emphasis on speed than accuracy, but you'll find that most shortcut methods have the same challenge.<|endoftext|>
-TITLE: Divergence in Riemannian Geometry (General Relativity)
-QUESTION [5 upvotes]: I'm taking a course in General Relativity and I'm having some problems with the notation.
-I know that Einstein's tensor satisfies $\nabla_aG^{ab}=0$. In physics textbooks this consequence of the Bianchi identity is phrased as "the tensor has 0 divergence". I don't understand this because for me that identity means:
-If you take Einstein's tensor $G$ and take the covariant derivative $\nabla_a G$ then $(\nabla_a G)^{ab}=G^{ab}_{\quad;a}=0$. I cannot see the divergence in there.
-I studied that the divergence and all of those classical differential operators could be understood through the exterior derivative, which makes sense.
-So my question: Is $\nabla_a u^{a}$ called divergence because if you take the summation conventions it looks like a divergence? I come from a mathematical background and I think index notation is powerful but sometimes I feel like I'm missing the point.
-
-REPLY [3 votes]: Since you said you have a mathematical background and know about the unification in terms of the exterior derivative, I thought I should add some information about the connection to the exterior derivative here.
-As is often the case, the generalisation of concepts can be performed in various ways, because the aspects they rely on fall together in special cases. For the divergence of vector fields $V$ we have two coinciding definitions, namely as the trace of $\nabla V$, which is locally $\nabla_a V^a$ and motivated by the divergence on $\mathbb{R}^n$, and via the canonical identification with 1-forms, namely $\operatorname{div} V=*d*V^\flat$. Here $V^\flat=g(V,\cdot)$ and $*$ is the Hodge dual.
-Differential forms are only a subset of all tensor fields, so it is natural to use the first definition to define the divergence of a $(p,q)$-tensor $T$ by taking a trace of $\nabla T$. How this is precisely done depends on the author. In Riemannian geometry you sometimes encounter the definition
-$$
-\operatorname{div}T(Y_1,\dots,Y_{q-1}) = \operatorname{tr}(X\mapsto (\nabla_XT(\cdot,Y_1,\dots,Y_{q-1}))^\sharp)=\sum_i\nabla_{E_i}T(E_i,Y_1,\dots,Y_{q-1}).
-$$
-Here the dependence on forms is implicit. The last equality holds locally after choosing a local frame $\{E_i\}$. With this you can show that $\operatorname{div}(fg)=df$ for any function $f$ and that $2\operatorname{div} \operatorname{Ric}=dS$, so that
-$$
-\operatorname{div}\left(\operatorname{Ric}-\frac{1}{2}Sg\right)=0,
-$$
-which is the mentioned relation for the Einstein tensor (here $\operatorname{Ric}$ is the Ricci tensor and $S$ the Ricci scalar).
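-To see the two vector-field definitions agree in a concrete chart, here is a small sympy sketch in flat $\mathbb{R}^3$ with spherical coordinates (all helper names are mine); it checks the Christoffel-trace formula $\nabla_aV^a$ against the density formula $\frac{1}{\sqrt{g}}\partial_a(\sqrt{g}\,V^a)$, a known identity rather than anything asserted above:
-```python
-import sympy as sp
-
-r, th, ph = sp.symbols('r theta phi', positive=True)
-x = (r, th, ph)
-g = sp.diag(1, r**2, r**2 * sp.sin(th)**2)  # flat metric, spherical coords
-ginv = g.inv()
-detg = g.det()
-
-def christoffel(k, i, j):
-    # Gamma^k_{ij} = (1/2) g^{kl} (d_j g_{li} + d_i g_{lj} - d_l g_{ij})
-    return sp.Rational(1, 2) * sum(
-        ginv[k, l] * (sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
-                      - sp.diff(g[i, j], x[l]))
-        for l in range(3))
-
-f = sp.Function('f')
-V = [f(r), 0, 0]  # radial vector field V = f(r) d/dr
-
-# divergence as the trace of nabla V
-div1 = sum(sp.diff(V[a], x[a]) for a in range(3)) + sum(
-    christoffel(a, a, b) * V[b] for a in range(3) for b in range(3))
-
-# divergence via the volume density
-div2 = sum(sp.diff(sp.sqrt(detg) * V[a], x[a]) for a in range(3)) / sp.sqrt(detg)
-
-print(sp.simplify(div1 - div2))  # 0
-print(sp.simplify(div1))         # Derivative(f(r), r) + 2*f(r)/r
-```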
-This also sometimes coincides with the GR convention, noting that the Einstein, energy-momentum and field strength tensors are naturally covariant tensors and you first have to raise indices to compute the divergence.
-For $p$-forms, this tensor definition again coincides with the definition using the exterior derivative, so that we find, e.g. for the field strength tensor $F$:
-$$
-\operatorname{div} F = *d*F,
-$$
-which plays a role in the equations of motion of Einstein-Maxwell theory.<|endoftext|>
-TITLE: Is the irreducibility of a ring preserved by localization at a prime ideal?
-QUESTION [6 upvotes]: Let $R$ be a commutative ring and $\mathfrak p $ a prime ideal of $R$.
-Suppose that $R$ satisfies the following property: the intersection of two nonzero ideals is always nonzero. Is the property also true for the ring $R_{\mathfrak p}$, the ring $R$ localized at $\mathfrak p$?
-Can someone give a proof of this fact or show a counterexample? Thanks a lot in advance!
-
-REPLY [5 votes]: First I want to introduce some (standard) terminology: A ring $A$ for which $I \cap J = 0$ implies $I=0$ or $J=0$ for two ideals $I,J$ of $A$ is called "irreducible". So the question is whether for $A$ irreducible every localization $A_{\mathfrak p}$ is irreducible.
-$\newcommand{\Ann}{\mathrm{Ann}}$
-$\newcommand{\Ass}{\mathrm{Ass}}$
-$\newcommand{\ideal}[1]{{\mathfrak{#1}}}$
-It can be shown that for every noetherian irreducible ring $A$, also $A_p$ is irreducible.
-First let $A$ be irreducible (but not necessarily noetherian). Then the set of associated primes $\Ass(A)$ is either empty or consists of exactly one prime $\mathfrak{p} \subseteq A$. Namely, let $\mathfrak{p} = \Ann (x)$ and $\mathfrak{p}' = \Ann( y)$ with $x, y \in A$ and $\ideal{p}, \ideal{p}'$ prime. Then, as $A$ is irreducible, we have $a, b \in A$ with $ax = b y \neq 0$.
-Now $\Ann(x) = \Ann(a x) = \ideal{p}$ and $\Ann(y) = \Ann(by) = \ideal{p}'$, so $\ideal{p} = \ideal{p}'$.
-Now let $A$ be irreducible and $\Ass(A) = \{\ideal{p}\}$. Furthermore let $S \subseteq A$ multiplicatively closed and $S \cap \ideal{p} = \emptyset$.
-Then $S^{-1}A$ is irreducible too.
-The proof goes as follows:
-Let $I = (f/1)$ and $J = (g/1)$ be two non-zero principal ideals in $S^{-1}A$ and let $x \in A$ with $\Ann(x) = \ideal{p}$. Then, as $A$ is irreducible, we have
-$a f = q x \neq 0$ and $b g = q' x \neq 0$ with $q, q' \notin \ideal{p}$. So
-$a' f = a q' f = q'' x \neq 0$ and $b' g = b q g = q'' x \neq 0$ with $q'' = q q' \notin \ideal{p}$. So $a' f = b' g \neq 0$ in $A$. But it is even $a'f/1 = b' g/1 \neq 0$ in $S^{-1}A$. Otherwise $s a' f = 0$ with $s \in S$. So we had $s a' f = s q'' x = 0$ and $s q'' \in \ideal{p}$. But it is $s, q'' \notin \ideal{p}$.
-Now if $A$ is noetherian and irreducible then $\Ass(A) =\{\ideal{p}\}$ where $\ideal{p}$ is the unique minimal prime of $A$. So for every other prime $\ideal{q} \subseteq A$ we have $S \cap \ideal{p} = (A - \ideal{q}) \cap \ideal{p} = \emptyset$ and $S^{-1}A = A_\ideal{q}$ is irreducible too.
-So a counterexample must consist of a non-noetherian ring $A$ which has either $\Ass(A) = \emptyset$ or has $\Ass(A) = \{\ideal{p}\}$ and $\ideal{p} \cap (A-\ideal{q}) \neq \emptyset$ for a certain prime $\ideal{q} \subseteq A$. Furthermore it must not be integral (trivial) and not reduced: Let $A$ be reduced and irreducible: Let $I=(f)$ and $J=(g)$. If $f g = 0$ then $\sqrt{I \cap J} = \sqrt{I J} = \sqrt{0} = N_A$ with $N_A = 0$ the nilradical. So $I \cap J = 0$ and $I=0$ or $J = 0$, that is $f = 0$ or $g = 0$.
So a reduced irreducible ring is an integral domain and has the permanence property sought for.
-ADDENDUM: For the following I will refer to the two links:
-[1] https://www.math.purdue.edu/~heinzer/preprints/irr15.pdf
-[2] http://www.math.lsa.umich.edu/~hochster/615W11/loc.pdf
-A counterexample can be constructed as follows: Let $(A,\ideal{m})$ be a noetherian local ring with minimal prime ideals $\ideal{p}, \ideal{q}$ and prime ideal $\ideal{r} \subseteq \ideal{m}$, such that
-$$\ideal{p}, \ideal{q} \subsetneq \ideal{r} \subsetneq \ideal{m}$$
-and $\ideal{p} \cap \ideal{q} = (0)$.
-Furthermore let $E=E(A/\ideal{m})$ be the injective hull of $A/\ideal{m}$, an $A$--module.
-Construct the ring $A + E = B$ with componentwise addition and
-$$(a,e) \cdot (a', e') = (a a', a e' + a' e)$$
-It is called in [1, Example 2.4] "idealization" and in modern terminology (Hartshorne, Algebraic Geometry, II, Ex. 8.7) would be called "trivial infinitesimal extension of $A$ by $E$".
-In [1, Example 2.4] it is contended without proof that $B$ is an irreducible ring - I omit the proof here too, as it is easy to find.
-From the exact sequence of $B$--modules $0 \to E \to B \to A \to 0$ we see that $\ideal{P} = (\ideal{p}, E)$, $\ideal{Q} = (\ideal{q}, E)$, $\ideal{R} = (\ideal{r},E)$ are prime ideals of $B$ with $\ideal{P}, \ideal{Q} \subseteq \ideal{R}$. Furthermore, we have $\ideal{P} \cap \ideal{Q} = (\ideal{p} \cap \ideal{q}, E) = (0, E) \subseteq B$.
-I contend that the ring $B_\ideal{R}$ is not irreducible, albeit it is the localization of the irreducible ring $B$: We have $\ideal{P}_\ideal{R}, \ideal{Q}_\ideal{R} \neq (0)$ as ideals of $B_\ideal{R}$. Now it is
-$$\ideal{P}_\ideal{R} \cap \ideal{Q}_\ideal{R} = (\ideal{P} \cap \ideal{Q})_\ideal{R} = (0, E)_\ideal{R}$$
-Now from [2, Theorem 2.4], I draw the fact that (in $A$ and for $A$--modules) for every $e \in E$ there is a power $\ideal{m}^t$ that annihilates $e$. As $\ideal{m}^t \not\subseteq \ideal{r}$, we can find an $s \notin \ideal{r}$, $s \in \ideal{m}^t$, $s \in A$, such that $s e = 0$. So $(s, 0) \in B - \ideal{R}$ and $(0,e) = 0$ in $B_\ideal{R}$. So $(0,E)_\ideal{R} = 0$ and we have found in $\ideal{P}_\ideal{R}$ and $\ideal{Q}_\ideal{R}$ two ideals of $B_\ideal{R}$ which are nonzero, but have intersection zero. Q.E.D.<|endoftext|>
-TITLE: Almost sure bounded imply finite expectation?
-QUESTION [7 upvotes]: Suppose that the random variable $X$ satisfies $|X| < M$ almost surely for some constant $M>0$. Does this imply that $\mathbb{E}[X]$ is finite?
-
-REPLY: A random variable $X$ is essentially bounded when there is an $M>0$ with $\mathrm{P}[|X|>M]=0$, i.e., the set $\{\omega: |X(\omega)|>M\}$ has measure zero.
-Remark An essentially bounded function is not necessarily bounded. You can find relevant examples here.
-A very easy way to give an affirmative answer to your question is the following: Let $X:(\Omega, \mathcal{F}, \mathrm{P})\to [0,+\infty)$ be a finite-valued non-negative continuous random variable. In this case we may use the following representation of the expectation (see this Wikipedia article):
-$$\mathbb{E}[X]=\int_{\Omega}X(\omega)\mathrm{P}(\mathrm{d}\omega)=\int_{0}^{\infty}\mathrm{P}[X\geq x]\mathrm{d}x,$$
-where $f(x) = \mathrm{P}[X\geq x]$ has bounded support: there is $M>0$ so that $\mathrm{P}[X\geq x]=0$ for all $x>M$, because of our hypothesis that $X$ is non-negative and essentially bounded (almost-surely bounded). As a result, $\mathbb{E}[X]$ will be finite.
-We also have the following result (without assuming continuity):
-Let $X\in \mathcal{L}^{\infty}(\Omega, \mathcal{F}, \mathrm{P})$ and $Z\in \mathcal{L}^{1}(\Omega, \mathcal{F}, \mathrm{P})$.
Then, $XZ$ is measurable and $XZ$ is integrable and
-$$
-\|XZ\|_1 \leq \|X\|_{\infty}\|Z\|_{1};
-$$
-take $Z=1$ and we get
-$$
-\|X\|_1 \equiv \mathbb{E}[|X|] \leq \|X\|_{\infty} < \infty.
-$$
-Therefore $\mathbb{E}[|X|]$ is finite, and by Jensen's inequality
-$$
-|\mathbb{E}[X]| \leq \mathbb{E}[|X|],
-$$
-so $\mathbb{E}[X]$ is also finite. As a result, essentially bounded random variables have finite expectation.
-With a little abuse of notation we can write
-$$
-\mathcal{L}^{\infty}(\Omega, \mathcal{F}, \mathrm{P}) \subseteq \mathcal{L}^{1}(\Omega, \mathcal{F}, \mathrm{P})
-$$
-In general, we can also show that
-$$
-\mathcal{L}^{p}(\Omega, \mathcal{F}, \mathrm{P}) \subseteq \mathcal{L}^{p'}(\Omega, \mathcal{F}, \mathrm{P}), \text{ whenever } p' \leq p,
-$$
-but the converse is not true. A random variable with finite expectation need not have a finite second moment. A random variable with all moments up to $p>1$ may not have finite moments of order $p'>p$ and, further, may not be essentially bounded.
-A necessary clarification:
-A typical example of an essentially bounded function that is not bounded is the following:
-
-This is given by (here $\Omega=\mathbb{R}$)
-$$
-X(\omega)=\begin{cases}
-1, &\text{if } \omega\in \mathbb{R}\setminus \mathbb{N},\\
-\omega, &\text{otherwise}
-\end{cases}
-$$
-Indeed, notice that $\mathrm{P}[X>1]=0$.
-The notion you mentioned in your comment below is not essential boundedness: for $X:\Omega\to \mathbb{R}\cup\{+\infty\}$ the property $\mathrm{P}[X=\infty]=0$ (equivalently $\mathrm{P}[X<\infty]=1$) is that $X$ is almost-everywhere finite or almost-surely finite-valued. In no way should this be confused with essential boundedness. Take for instance the following very simple function
-$$
-X(\omega) = \omega,
-$$
-where $\Omega = \mathbb{R}$. Notice that $\{\omega: X(\omega) = +\infty\}=\varnothing$, but $X$ is not essentially bounded and is not bounded.
-If a random variable is almost-everywhere finite, it does not necessarily have finite expectation. Take for instance the counterexample provided in this answer. Another example, if you prefer an example not on a discrete space, is the Pareto distribution with parameter $\alpha=1$. There are numerous other examples.<|endoftext|>
-TITLE: Is there a way to prove this exponential inequality: if $a>b$ then $a^a>b^b$ for $a,b>1$?
-QUESTION [10 upvotes]: I came across this proposition while trying to prove that a function was injective: if $a>b$ then $a^a>b^b$, where $a$ and $b$ are real numbers bigger than $1$. Intuitively it (somehow) makes sense but I wonder if a rigorous proof can be made.
-But, the initial problem I was trying to solve was to show that $f(x)=x^x$, where $x$ is just a positive real number, is injective. As the "contrapositive method" from the definition of an injective function didn't work out, I figured I could just show that my function was strictly increasing or decreasing, therefore the function would be injective. I looked at the graph of this function and I noticed I have a turning point at $x=1/e$ (as the user MXYMXY pointed out). Thus I had two cases for my function.
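-The turning point at $x=1/e$ is easy to check numerically; here is a minimal Python sketch (it uses the standard derivative $\frac{d}{dx}x^x = x^x(\ln x+1)$, a known fact not derived in the question):
-```python
-import math
-
-def f_prime(x):
-    # d/dx x^x = x^x (ln x + 1), valid for x > 0
-    return x**x * (math.log(x) + 1)
-
-for x in (0.2, 1/math.e, 0.5, 1.0):
-    print(f"x = {x:.4f}, f'(x) = {f_prime(x):+.5f}")
-# negative on (0, 1/e), approximately zero at 1/e, positive afterwards
-```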
-
-REPLY [3 votes]: Since $a > b > 1$, and since $\log(x)$ is an increasing function of $x$, we have
-$$\log(a) > \log(b) > 0.$$
-Multiplying these two inequalities together:
-$$a > b > 1$$
-and
-$$\log(a) > \log(b) > 0$$
-gives you
-$$a\log(a) > b\log(b).$$
-By a property of logarithms ($x\log(x) = \log(x^x)$), this implies
-$$\log(a^a) > \log(b^b).$$
-Again, since $\log(x)$ is an increasing function of $x$, we obtain
-$$a^a > b^b.$$
-QED<|endoftext|>
-TITLE: Finite dimensional division algebra over $\Bbb{C}$
-QUESTION [5 upvotes]: Another abstract algebra question from my university days that has me stumped at where to start!
-I know what a division ring is and I think I understand what a division algebra over $\mathbb C$ is. (A division ring $D$ where there is an additional operation of scalar multiplication of elements of $D$ with elements of $\mathbb C$.) I have come across this question and don't really know where to start. I'm not asking for a full proof, but a good starting point would be greatly appreciated!
-Let $D$ be a finite-dimensional division algebra over $\mathbb C$. Show that $D=\mathbb C$.
-Am I right in thinking that a finite-dimensional division algebra is one where $D$ is spanned by a certain (finite) number of (linearly independent?) elements of $D$?
-Thanks in advance,
-Andy.
-
-REPLY [5 votes]: Another way to start is to choose a basis $\{d_1,\ldots,d_n\}$ for $D$. Left multiplication by $d\in D$ is a linear transformation $D\to D$. Write
-$$d.d_i=\sum_j\lambda_{ji}d_j,\;\;\;\lambda_{ji}\in \mathbb{C}$$
-This yields a matrix $(\lambda_{ji})$ whose characteristic polynomial satisfies $\chi(d)=0$. The polynomial splits into linear factors over $\mathbb{C}$ so $$0=\chi(d)=\prod_\mu(d-\mu)^{n_\mu}.$$ Since $D$ is a domain, we deduce that $d-\mu=0$ for some eigenvalue $\mu$. Hence every $d$ lies in $\mathbb{C}\cdot 1$, so $n=1$ and $D\cong \mathbb{C}$.
-To connect this with Wedderburn-Artin, observe that $D$ is a simple $\mathbb{C}$-algebra since it has no nontrivial proper ideals. Now, semisimple $\mathbb{C}$-algebras are isomorphic to
-$$M_{n_1}(\mathbb{C})\oplus\cdots\oplus M_{n_k}(\mathbb{C}).$$
-Of these, the simple ones satisfy $k=1$ (i.e. are isomorphic to $M_n(\mathbb{C})$). Of those, the division algebras satisfy $n=1$.<|endoftext|>
-TITLE: Rotating a sphere
-QUESTION [5 upvotes]: I'm trying to rotate a sphere, and I'm having a bit of a problem calculating the angle to rotate it by. I wonder if anyone can help me?
-On my sphere I've marked three points. If the centre of the sphere is (0,0,0), then the points are where the x, y and z axes exit the sphere.
-For example:
-
-What I would like to do is rotate the sphere so that an axis (let's say the z axis) exits the sphere such that these three points are all exactly the same distance from the z axis.
-For example, the z axis would exit the sphere approximately here:
-
-This is what I've got so far.
-First I rotate the sphere by 45 degrees around the x axis:
-
-So far, no problem.
-I then rotate the sphere by -45 degrees around the y axis.
-At first glance, it appears to have worked:
-
-But if I enlarge the circles marked on the sphere, it's obvious that the z axis is not exiting the sphere at the right point:
-
-Now I've done a bit of experimenting, and if I rotate the sphere by -35.1, not -45 degrees around the y axis, then it is roughly in the right position.
-I've spent the afternoon with pen and paper trying to figure out what I should be rotating by, but I just can't figure the exact angle to rotate by.
-Note: the application for this is that I'm trying to design a small stand to be 3d printed. I would like the stand to be exactly level.
-If anyone can help, it'd be much appreciated!
-Thanks in advance!
-David.
-
-REPLY [4 votes]: You can use the well-known Rodrigues rotation formula (https://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula), which states
-$
-\textbf{v}' = \textbf{v}\cos\theta + (\textbf{r}\times\textbf{v})\sin\theta + \textbf{r}(\textbf{v}\cdot\textbf{r})(1-\cos\theta)
-$
-where $\textbf{v}$ is the original vector, $\textbf{r}$ is the unit vector about which the rotation by angle $\theta$ is performed, and $\textbf{v}'$ is the vector after rotation.
-Now, by applying this formula twice (first, for example, for the rotation by angle $\alpha$ about the $x$-axis and then by angle $\beta$ around the $y$-axis), we get
-$\textbf{k}'' = \textbf{i}\cos\alpha\sin\beta -\textbf{j}\sin\alpha + \textbf{k}\cos\alpha\cos\beta$.
-Now, to find the angle between $\textbf{k}$ and $\textbf{k}''$, you can take the dot-product of the two:
-$\cos \angle(\textbf{k},\textbf{k}'') = \textbf{k}\cdot\textbf{k}'' = \cos\alpha\cos\beta$.
-Alternatively, the same can be found as (suggested by Jonas above)
-$\cos \angle(\textbf{k},\textbf{k}'') = \langle (0,0,1),(1,1,1)\rangle/\sqrt{3} = \frac{1}{\sqrt{3}}$.
-Now, let us consider the specific example of $\alpha = 45^\circ$ and $\beta = X^\circ$. So, we have
-$\frac{\sqrt{2}}{2}\cos X = \frac{1}{\sqrt{3}}$,
-from which $X =\cos^{-1}\sqrt{\frac{2}{3}} \approx 35.2644^\circ$.
-You were close when you experimented with angles.
-Hope this helps.<|endoftext|>
-TITLE: Is there any integral for the Golden Ratio?
-QUESTION [163 upvotes]: I was wondering about important/famous mathematical constants, like $e$, $\pi$, $\gamma$, and obviously the golden ratio $\phi$.
-The first three ones are really well known, and there are lots of integrals and series whose results are simply those constants. For example:
-$$ \pi = 2 e \int\limits_0^{+\infty} \frac{\cos(x)}{x^2+1}\ \text{d}x$$
-$$ e = \sum_{k = 0}^{+\infty} \frac{1}{k!}$$
-$$ \gamma = -\int\limits_{-\infty}^{+\infty} x\ e^{x - e^{x}}\ \text{d}x$$
-Is there an interesting integral* (or some series) whose result is simply $\phi$?
-* Interesting integral means that things like
-$$\int\limits_0^{+\infty} e^{-\frac{x}{\phi}}\ \text{d}x$$
-are not a good answer to my question.
-
-REPLY [3 votes]: By Euler's reflection formula, it follows that for $0<s<1$
-$$
-\int_0^\infty{x^{s-1}\over1+x}\mathrm dx={\pi\over\sin(\pi s)}\tag1
-$$
-Accordingly, we can find an $s$ such that $\sin(\pi s)$ can be associated with $\phi$. As it turns out, we do have some special angle that allows us to do so.
-
-By observing the geometric properties of this triangle, we can deduce the following relationship
-$$
-\triangle ABC\sim\triangle BDA\cong\triangle DBA
-$$
-which implies
-$$
-{BC\over AB}={AB\over BD}
-$$
-Now, due to the properties of isosceles triangles, we get
-$$
-AB=AD=CD\Rightarrow BC=BD+CD=BD+AB
-$$
-Thus, we obtain
-$$
-1+{BD\over AB}={AB\over BD}
-$$
-For convenience, set $AB=1,BD=y$ so that the above identity becomes
-$$
-1+y=\frac1y\Rightarrow y^2+y-1=0\Rightarrow y={-1+\sqrt5\over2}=\frac1\phi
-$$
-Again, by the properties of isosceles triangles, we deduce
-$$
-CE={1+y\over2}
-$$
-As a result, we obtain $\cos36^\circ$ from its definition:
-$$
-\cos36^\circ={CE\over CD}={CE\over AB}={1+y\over2}={1+\sqrt5\over4}=\frac\phi2
-$$
-Now, since
-$$
-90^\circ-36^\circ=54^\circ={3\pi\over10}
-$$
-we obtain
-$$
-\sin\left(3\pi\over10\right)=\frac\phi2
-$$
-Therefore, setting $s=3/10$ in (1), we obtain
-$$
-\fbox{$\Large\int_0^\infty{x^{-7/10}\over1+x}\mathrm dx={2\pi\over\phi}$}
-$$<|endoftext|>
-TITLE: Possible method to prove infinite twin prime conjecture
-QUESTION [9 upvotes]: I have an idea that looks more and more promising and may lead to proving the infinite twin prime conjecture. My idea would set up a correspondence between primes and twin prime pairs. Since primes have been proven infinite, twin primes would be shown infinite as well. Here it is:
-For every prime $p>7$ there exists at least one unique twin prime pair $(p_t,p_t+2)$ created using only primes less than $p$ as follows:
-$$(p_t,p_t+2)=(3\times5\times P_p\times p-4,\ \ 3\times 5\times p\times P_p-2)$$
-or
-$$(p_t,p_t+2)=(3\times5\times P_p\times p+2,\ \ 3\times5\times p\times P_p+4)$$
-where $P_p$ is some product of individual primes ($p_n$) and their powers (although recent developments indicate powers may be unnecessary!) such that each fits the following condition:
-$$5
-However, it is clear there are infinitely many $n$ for which it is simple to verify that $n!+1$ is not a square.
-For example, take $n=81$. Assume that $81!+1$ is a square.
-This implies that $81!-1=a^2-2$ for some integer $a$.
-Note that by Wilson's Theorem, $82!\equiv -1 \pmod {83}$, implying $81! \equiv 1 \pmod {83}$.
-Thus, $a^2-2 \equiv 0 \pmod {83}$. A contradiction, since $2$ is not a quadratic residue modulo $83$.
-In a similar way, we can claim that for all $n>5$ such that $n+2$ is a prime $p$ with $ p \equiv 3,5 \pmod 8$, the number $n!+1$ is not a square.
-My question is, are there any other integers $n$ for which it is simple to verify that $n!+1$ is not a square? Any help would be appreciated.

REPLY [5 votes]: Your nice argument can be generalized.
For example, take $n=80$. You deduced from Wilson's Theorem that $81!\equiv 1\pmod {83}$. Now $$80!\equiv \frac{1}{81}\equiv -\frac{1}{2}\pmod{83}.$$ Here $\frac{1}{2}$ means the inverse of $2$ modulo $83$. It follows that $80!+1\equiv -\frac{1}{2}+1=\frac{1}{2}\pmod{ 83}$. Because $2$ is not a quadratic residue modulo $83$, neither is its inverse, so $80!+1$ cannot be a square. Of course, this works for any $n$ for which $n+3$ is a prime congruent to $3$ or $5$ modulo $8$.
-More generally, for any prime $p$ and positive integer $k\leq p-1$,
-$$
-(p-k)!+1\equiv \frac{(-1)^k}{(k-1)!}+1=\frac{(k-1)! \big[(k-1)!+(-1)^k\big]}{((k-1)!)^2}\mod p.
-$$
-So $(p-k)!+1$ is a quadratic residue modulo $p$ if and only if $(k-1)! \big[(k-1)!+(-1)^k\big]$ is a quadratic residue. Via quadratic reciprocity this can be translated to a condition on $p$.
-For each $k\geq 1$, there is a condition similar to yours. I'll list the first three (they start looking complicated very fast). The quantity $n!+1$ cannot be a square if any of the following conditions hold:

-$n+2$ is a prime congruent to $3$ or $5$ mod $8$
-$n+3$ is a prime congruent to $3$ or $5$ mod $8$
-$n+4$ is a prime congruent to $5$, $9$, $21$, $23$, $31$, $37$, $43$, $45$, $49$, $51$, $55$, $57$, $59$, $65$, $67$, $69$, $71$, $73$,
-$77$, $81$, $83$, $85$, $87$, $91$, $95$, $97$, $99$, $101$, $103$, $109$, $111$, $113$, $117$, $119$, $123$, $125$, $131$, $137$, $145$, $147$, $159$, or $163$ mod $168$.

-The primes in the third case are exactly those for which $42$ is a quadratic non-residue.<|endoftext|>
-TITLE: Prove that $2^{n(n+1)}>(n+1)^{n+1}\left(\frac{n}{1}\right)^n\left(\frac{n-1}{2}\right)^{n-1}\cdots \left(\frac{2}{n-1}\right)^{2}\frac{1}{n}$
-QUESTION [9 upvotes]: If $n$ is a positive integer $>1$, prove that
-$$2^{n(n+1)}>(n+1)^{n+1}\left(\frac{n}{1}\right)^n\left(\frac{n-1}{2}\right)^{n-1}\left(\frac{n-2}{3}\right)^{n-2}\cdots \left(\frac{2}{n-1}\right)^{2}\frac{1}{n}$$
-Please help me to prove the above. I have to use laws of inequality like AM-GM. But how do I use them for this particular problem?
-Edit:
-Only use laws of inequality.

Edit 2
I want to solve this by using laws of inequality like weighted AM-GM.
My attempt is the following.
Consider the positive numbers $\left(\frac{n}{1}\right), \left(\frac{n-1}{2}\right), \left(\frac{n-2}{3}\right), \cdots \left(\frac{2}{n-1}\right), \frac{1}{n}$ with corresponding weights $n, n-1, n-2, \cdots 2,1$, respectively; applying the weighted AM-GM inequality, we get
$$\frac{\frac{n^2}{1}+\frac{(n-1)^2}{2}+\frac{(n-2)^2}{3}+\cdots +\frac{2^2}{n-1}+\frac{1^2}{n}}{n+(n-1)+\cdots +2+1}>\left[\left(\frac{n}{1}\right)^n\left(\frac{n-1}{2}\right)^{n-1}\left(\frac{n-2}{3}\right)^{n-2}\cdots \left(\frac{2}{n-1}\right)^{2}\frac{1}{n}\right]^{\frac{2}{n(n+1)}}$$
I am unable to get the result because I am unable to evaluate the sum $\frac{n^2}{1}+\frac{(n-1)^2}{2}+\frac{(n-2)^2}{3}+\cdots +\frac{2^2}{n-1}+\frac{1^2}{n}$.
Please suggest a possible approach.

REPLY [6 votes]: Notice that
$$
RHS = (n+1)^{n+1} \cdot
\left(\frac{n}{1}\right) \cdot
\left(\frac{n}{1}\cdot \frac{n-1}{2}\right) \cdots
\left(\frac{n}{1}\cdot \frac{n-1}{2} \cdots \frac{1}{n}\right) =
(n+1)^{n+1} \cdot \prod_{k=1}^n \binom{n}{k} =
(n+1)^{n+1} \cdot \prod_{k=0}^n \binom{n}{k}.
$$
Then, by AM-GM,
$$
RHS = (n+1)^{n+1} \cdot \prod_{k=0}^n \binom{n}{k} \le
(n+1)^{n+1} \cdot \left( \frac{\binom{n}{0}+\binom{n}{1}+\ldots+\binom{n}{n}}{n+1}\right)^{n+1} = LHS.
$$
Equality holds only if $\binom{n}0=\ldots=\binom{n}{n}$; i.e. for $n=1$.<|endoftext|>
-TITLE: Finding coefficient of polynomial?
-QUESTION [11 upvotes]: The coefficient of $x^{12}$ in $(x^3 + x^4 + x^5 + x^6 + …)^3$ is_______?

-My Try:
-Somewhere it is explained as:
-The expression can be re-written as: $(x^3 (1+ x + x^2 + x^3 + …))^3=x^9(1+(x+x^2+x^3+\cdots))^3$; since powers beyond $x^3$ inside the bracket cannot contribute to the coefficient of $x^3$, it suffices to expand $x^9(1+(x+x^2+x^3))^3$.
-Expanding $(1+(x+x^2+x^3))^3$ using binomial expansion:
-$(1+(x+x^2+x^3))^3 $
-$= 1+3(x+x^2+x^3)+\frac{3\cdot 2}{2}(x+x^2+x^3)^2+\frac{3\cdot 2\cdot 1}{6}(x+x^2+x^3)^3+\cdots$
-The coefficient of $x^3$ will be $10$; it is multiplied by $x^9$ outside, so the coefficient of $x^{12}$ is $10$.

-Can you please explain?

REPLY [3 votes]: Another way: For $|x|<1$, we have:
$$(x^3+x^4+x^5+...)^3=x^9(1+x+x^2+...)^3=x^9(1-x)^{-3}.$$
Now $(1-x)^{-3}$ is half of the second derivative of $(1-x)^{-1}.$ The second derivative of $(1-x)^{-1}=(1+x+x^2+...)$ is $(1\cdot 2+2\cdot 3\, x+3\cdot 4\, x^2+4\cdot 5\, x^3+...). $ Half the co-efficient of $x^3$ in this, which is $(1/2)\cdot 4\cdot 5=10,$ is therefore the co-efficient of $x^{12}$ in $x^9(1-x)^{-3}. $<|endoftext|>
-TITLE: Integration involving greatest integer function : $\int_0^{\pi} [\cot(x)] \, dx$
-QUESTION [6 upvotes]: What is the value of $$\int_0^{\pi} [\cot(x)]dx$$ where $[\cdot]$ represents the greatest integer function? I know the integral of $\cot$ is $\log|\sin(x)|$, but $\log$ is not defined at $0$, or is there something else I'm forgetting?

REPLY [4 votes]: Outline
Observe the symmetry (up to sign) of $\cot x$ about $\frac{\pi}{2}$.
So the fundamental idea we use here is $[\cot x] = -[\cot (\pi - x)] - 1$ when $ x \in (0,\frac{\pi}{2})$ and $\cot x$ is not an integer (the countably many exceptional points do not affect the integral).
So $$\int_0^{\frac{\pi}{2}} [\cot(x)]\,dx + \int_0^{\frac{\pi}{2}} [\cot(\pi - x)]\,dx = -\int_0^{\frac{\pi}{2}} 1 \,dx,$$ and since the substitution $x \mapsto \pi - x$ turns the second integral into $\int_{\frac{\pi}{2}}^{\pi} [\cot(x)]\,dx$, this gives
$$\bbox[5px, border:2px solid #C0A000]{\int_0^{\pi} [\cot(x)]\,dx = \color{blue}{-\frac{\pi}{2}}}$$<|endoftext|>
-TITLE: If $\tan x$ is not a differentiable function then why does its derivative $\sec^2(x)$ exist?
-QUESTION [5 upvotes]: $\tan x$ is not differentiable at the points $(2n + 1)\cdot 90^\circ$, which means the function itself is not differentiable. So, why does its derivative $\sec^2(x)$ exist?
-

REPLY [6 votes]: You differentiate $\tan(x)$ on its domain, not at every point of $\mathbb{R}$!
Also, note that $\sec^2(x) = \frac{1}{\cos^2 x}$ is not defined at $x= \frac{\pi}{2} + k\pi$, which is consistent with the fact that $\tan(x)$ isn't either.<|endoftext|>
-TITLE: If $(x_1-a)(x_2-a)\cdots(x_n-a)=k^n$ prove by using the laws of inequality that $x_1x_2 \cdots x_n\geq (a+k)^n$
-QUESTION [9 upvotes]: If $x_i>a>0$ for $i=1,2\cdots n$ and $(x_1-a)(x_2-a)\cdots(x_n-a)=k^n$, $k>0$, prove by using the laws of inequality that $$x_1x_2 \cdots x_n\geq (a+k)^n$$.

-Attempt:
-If we expand $(x_1-a)(x_2-a)\cdots(x_n-a)=k^n$ in the LHS, we get
-$x_1x_2 \cdots x_n -a\sum x_1x_2\cdots x_{n-1} +a^2\sum x_1x_2\cdots x_{n-2} - \cdots +(-1)^na^n=k^n$. But it becomes cumbersome to go further. Please help me.

REPLY [3 votes]: From Huygens' inequality, which states that for $x_i\geq0$
$$(1+x_1)(1+x_2)...(1+x_n)\geq\left(1+\left(x_1x_2...x_n\right)^{\frac{1}{n}}\right)^n \tag{1}$$
and (as it was suggested in comments) noting $x_i-a=y_i>0$ we have
$$y_1y_2...y_n=k^n$$
as a result
$$\color{red}{x_1x_2...x_n}=(y_1+a)(y_2+a)...(y_n+a)=\\
a^n\left(1+\frac{y_1}{a}\right)\left(1+\frac{y_2}{a}\right)...\left(1+\frac{y_n}{a}\right)\color{red}{\geq}\\
a^n\left(1+\left(\frac{y_1y_2...y_n}{a^n}\right)^{\frac{1}{n}}\right)^n=
a^n\left(1+\left(\frac{k^n}{a^n}\right)^{\frac{1}{n}}\right)^n=\\
a^n\left(1+\frac{k}{a}\right)^n=\color{red}{(a+k)^n}$$<|endoftext|>
-TITLE: Let $P$ be a 4-th degree real polynomial with 5 conditions given. How to compute $P(4)$?
-QUESTION [5 upvotes]: Yesterday I was math tutoring an 18-year-old girl. And she asked me for the following problem: given $P\in\Bbb R[X]_4$, i.e. $P$ a real polynomial of degree exactly $4$, such that:

-$P(1)=0$
-It has relative extrema at the points $x=2,3$, with value $3$.

compute $P(4)$.
Now the second condition tells us that $P'(2)=P'(3)=0$ and $P(2)=P(3)=3$. Thus in total I have $5$ linear conditions on the $5$ real coefficients which define $P$, once we write it as
$$
P(x)=ax^4+bx^3+cx^2+dx+e.
$$
I.e. I have a linear system of $5$ equations in $5$ variables, which has (provided the conditions are all independent of one another) one solution: thus I'd have identified my polynomial uniquely, hence I could easily compute $P(4)$ and conclude my exercise.
My problem is: this girl doesn't know matrices, Gauss elimination and all the linear algebra tools which help to solve this kind of problem quickly, thus in order to solve such a system she would have to do it by substitutions and so on, which is really tedious and not instructive (to me, at least), and it seems weird that her teacher gave her such an exercise to solve.
Moreover, what is asked is to compute $P(4)$, NOT to determine the polynomial $P$.
So I am asking myself: is there another way to do it? A way which avoids all that calculation?
I tried to write $P$ as
$$
P(x)=a(x-x_0)(x-x_1)(x-x_2)(x-x_3)
$$
but nothing good came out. Any idea?

REPLY [2 votes]: Consider the polynomial $q(x)=p(x)-3$. Then, $q(2)=q(3)=q'(2)=q'(3)=0$ and $q(1)=-3$. So, $$q(x)=a(x-2)^2(x-3)^2$$ And $q(1)=4a=-3$ implies that $a=\frac{-3}{4}$, so $q(4)=4a=-3$ and from here: $p(4)=q(4)+3=0$.<|endoftext|>
-TITLE: Why did the author warn 'Don't do it!' on evaluating the limit of $\lim_{x\to 0} \frac{1-\cos(1-\cos x)}{\sin ^4 x}$ this way?
-QUESTION [6 upvotes]: This is taken from Differential Calculus by Amit M Agarwal:

Evaluate $$\lim_{x\to 0} \frac{1-\cos(1-\cos x)}{\sin ^4 x}$$

The question is quite easy using the trigonometric identity $1-\cos x = 2\sin^2\frac{x}{2}$ and then using $\lim_{x\to 0} \frac{\sin x}{x}= 1\,.$ The answer is $\frac{1}{8}\,.$
However, after evaluating the limit, the author cautioned:

Don't do it!
\begin{align}\lim_{x\to 0} \frac{1-\cos(1-\cos x)}{\sin ^4 x} & =\lim_{x\to 0} \frac{1-\cos\left(\frac{1-\cos x}{x^2}\cdot x^2\right)}{x^4}\\ &= \lim_{x\to 0}\frac{1-\cos\left(\frac{x^2}{2}\right)}{x^4}\qquad \left(\textrm{As}\, \lim_{x\to 0}\frac{1-\cos x}{x^2}= \frac{1}{2} \right)\\&= \lim_{x\to 0}\frac{2\sin^2 \frac{x^2}{4}}{\frac{x^4}{16}\times 16}\\&= \frac{1}{8}\qquad \textrm{is wrong although the answer may be correct}\,.\end{align}

Where is the 'wrong' in the evaluation?
Edit:

[...] the limit as $x\to 0$ is taken for a subexpression. That's generally invalid.

We can't evaluate a limit inside a limit like that.

While evaluating the limit of a complicated expression one should not replace a sub-expression by its limit and continue with further calculations.

Now, consider these limits:
$$\bullet \lim_{x \to 4} \log(2x^{3/2}- 3x^{1/2}-1)$$
my book solves this as:
$$\log\; [\lim_{x\to 4} 2 x^{3/2}- \lim_{x\to 4} 3x^{1/2} - \lim_{x\to 4} 1]= 2\log 3$$
Another one:
$$\bullet \lim_{x\to 1} \sin(2x^2- x- 1)$$
This is solved as:
$$\sin\;[\lim_{x\to 1} 2x^2 - \lim_{x\to 1} x- \lim_{x\to 1} 1]= \sin 0= 0$$
These limits are evaluated by first evaluating the limits of sub-expressions. Do these contradict the statement _you can't take the limit of a sub-expression while evaluating the limit of the whole function_?

REPLY [2 votes]: While evaluating the limit of a complicated expression one should not replace a sub-expression by its limit and continue with further calculations. Thus the step where you replace $(1 - \cos x)/x^{2}$ by $1/2$ is not allowed.
However there are two situations where you can do such replacements:
1) If a sub-expression is connected in an additive manner to the rest of the expression then this sub-expression can be replaced by its limit (provided the limit exists). More formally if $\lim_{x \to a}g(x)$ exists and is equal to $L$ then $$\lim_{x \to a}\{f(x) \pm g(x)\} = \lim_{x \to a}f(x) \pm L$$ irrespective of the fact whether $\lim_{x \to a}f(x)$ exists or not.
2) If a sub-expression is connected in a multiplicative manner to the rest of the expression then this sub-expression can be replaced by its limit (provided the limit exists and is non-zero). More formally if $\lim_{x \to a}g(x)$ exists and is equal to $L \neq 0$ then $$\lim_{x \to a}f(x)g(x) = L\lim_{x \to a}f(x),\,\lim_{x \to a}\frac{f(x)}{g(x)} = \frac{1}{L}\lim_{x \to a}f(x)$$ irrespective of the fact whether $\lim_{x \to a}f(x)$ exists or not.
These are the only two situations where we can replace the sub-expression ($g(x)$ in the formal versions mentioned above) with its limit ($L$).
The above theorems help us a lot in simplifying the limit evaluation of a complicated expression because in each step we can replace a sub-expression by its limit without worrying about the limit of the remaining part of the expression ($f(x)$ in the formal version) and thereby effectively reducing complicated expressions to simpler ones.
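To make rule 2) concrete for the limit in question, here is one valid decomposition (my own illustration, not from the quoted book): writing
$$\lim_{x\to 0}\frac{1-\cos(1-\cos x)}{\sin ^4 x} = \lim_{x\to 0}\left[\frac{1-\cos(1-\cos x)}{(1-\cos x)^2}\cdot\left(\frac{1-\cos x}{x^2}\right)^2\cdot\left(\frac{x}{\sin x}\right)^4\right],$$
each factor is connected multiplicatively to the rest and has a finite non-zero limit ($\frac{1}{2}$, $\frac{1}{4}$ and $1$ respectively), so each may legitimately be replaced by its limit, giving $\frac{1}{2}\cdot\frac{1}{4}\cdot 1=\frac{1}{8}$.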
Also note that by the use of these rules we can infer the existence (or non-existence) of the limit of a complicated expression (consisting of both $f(x), g(x)$) from the existence (or non-existence) of the limit of a simpler expression ($f(x)$).
-Update: OP has raised a very interesting point (via comments) where a sub-expression might not be related to the rest of the expression via the arithmetical operations $+,-,\times,/$, but rather through a function symbol. In this case we have the rule that the order of a limit operation and the functional operation can be interchanged provided the inner limit exists and the function is continuous there. More formally if $f$ is continuous then $$\lim_{x \to a}f(g(x)) = f(\lim_{x \to a}g(x))$$ Note that in this case we don't replace a sub-expression by its limit; here the main operation is interchanging the order of applying the limit operation and the functional operation. The example mentioned in your comment uses the fact that the $\log$ function is continuous wherever it is defined and hence the interchange of the limit operation and the $\log$ operation is justified.<|endoftext|>
-TITLE: Geometric interpretation of different types of field extensions?
-QUESTION [13 upvotes]: In a first course on rings and fields we met the concept of field extensions, especially algebraic ones. The presentation of the material was very algebraic and felt a little lifeless. I was wondering whether there is some geometric way to think of (different types of) field extensions. I am familiar with the basic formalism of schemes and varieties, but I don't know algebraic geometry. In particular, I am curious how to think of splitting fields in geometric terms.

REPLY [9 votes]: Galois theory.
A more elaborate version of Zhen Lin's comment is the following: Galois theory studies certain types of finite field extensions (and you can also treat certain types of algebraic field extensions, as a limit of the finite case). The philosophy is that a finite field extension is the same thing as a finite morphism $\operatorname{Spec} L \to \operatorname{Spec} K$.
In algebraic geometry, finite morphisms of (say smooth projective) varieties correspond in the complex manifold world to proper maps $X \to Y$ with finite fibres. Such a map is close to being a covering space, but this is not always the case. For example, the map $\mathbb A^1 \to \mathbb A^1$ given by $x \mapsto x^2$ is not a covering space, because it is not a local homeomorphism near the origin.
It turns out that there is a super general algebraic notion of étale morphisms, and finite étale morphisms correspond to covering spaces. General étale morphisms include open immersions as well, which are neither finite nor covering spaces; this is a point that leads to a bit of confusion for the novice.
Then one could say that a finite Galois extension is a field extension such that the map $\operatorname{Spec} L \to \operatorname{Spec} K$ is finite étale. This is historically very inaccurate, and for most people this is also the wrong order to learn the material. Moreover, it requires a bit of work to show that a field extension is Galois if and only if it is étale; most courses on Galois theory do not touch on this. However, given that you indicated to know some algebraic geometry, this might be a useful way for you to think about it geometrically.
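As a concrete illustration (my own example, with the standing assumptions that $n$ is invertible in $k$ and that $k$ contains the $n$-th roots of unity): the map $\operatorname{Spec} k[s,s^{-1}] \to \operatorname{Spec} k[t,t^{-1}]$ given by $t \mapsto s^n$ is finite étale of degree $n$. Over $k = \mathbb C$ it is exactly the covering space $\mathbb C^\times \to \mathbb C^\times$, $z \mapsto z^n$, and on function fields it corresponds to the extension $k(t) \subset k(s)$ with $s^n = t$, which is Galois with group $\mathbb Z/n\mathbb Z$ acting by $s \mapsto \zeta_n s$, matching the deck transformations of the covering.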
-
In this language, the analogue of the fundamental group $\pi_1(X)$ is the absolute Galois group $\operatorname{Gal}(\bar K/K)$, in the very precise sense that $\operatorname{Gal}(\bar K/K) \cong \pi_1^{\operatorname{alg}}(\operatorname{Spec} K)$. Galois theory can then be viewed as the study of finite covering spaces of $\operatorname{Spec} K$, and their deck transformations.
However, I should point out that this is a very beautiful and unexpected analogy, which was only pointed out by Grothendieck. Galois theory takes place centuries earlier, and is a very rich and well-developed theory in itself. It is crucial to number theorists, and explicit knowledge of Galois cohomology is very important for the development of the much harder theory of étale cohomology.
Splitting fields.
Let me specifically address splitting fields because you ask about them.
Suppose $f \in k[x]$ is a separable polynomial (no repeated roots). Then we get a set $V(f) \subseteq \mathbb A^1_k$. The size of this set should equal $\deg f$: a polynomial of degree $n$ without multiple roots has $n$ roots.
However, if $k$ is not algebraically closed, it may happen that $f$ is irreducible, in which case set-theoretically $V(f)$ is just a point. However, over the splitting field $\ell$ of $f$, we know that $f$ factors as a product of linear factors, so $f$ really does have $n$ roots.
Geometrically: if $f$ is separable of degree $n$, then we get a finite morphism $$X = \operatorname{Spec} k[x]/(f) \to \operatorname{Spec} k$$ of degree $n$. Then $X$ might have fewer than $n$ points; however $X \times_{\operatorname{Spec} k} \operatorname{Spec} \ell$ always splits into $n$ distinct points. This is of course not what the name splitting field comes from, but it is another way to look at it.
Transcendence theory.
Of course, Galois theory does not nearly cover all the field extensions; however, it is a very nice case because there is so much you can say about it.
On the other hand, there is also a direct use of transcendental extensions in algebraic geometry. The field $k(x_1,\ldots,x_n)$ has transcendence degree $n$ over $k$, and the variety $\mathbb A^n_k$ is $n$-dimensional. This is no coincidence: one can prove that for an integral scheme of finite type over a field $k$, the dimension equals the transcendence degree of the function field.
Moreover, the category of algebraic varieties with dominant rational maps as morphisms turns out to be equivalent to the category of fields of finite type over $k$. A lot of questions in algebraic geometry (especially birational geometry) have been motivated by field theory.
For example, the only way I know how to prove that the field $\operatorname{Frac} k[x,y]/(y^2 - x^3 - x)$ is not isomorphic to $k(t)$ is to use the genus in algebraic geometry.
As for a much harder example, consider the following question: let $K/k$ be a finitely generated field extension, and suppose that $K(x_1,\ldots,x_n) \cong k(y_1,\ldots,y_m)$ for certain $m,n \in \mathbb Z_{\geq 0}$. Is it true that $K \cong k(z_1,\ldots,z_{m-n})$?
The answer is, somewhat surprisingly, no. What you can prove is that there exist varieties which are stably rational but not rational, and this settles the algebra problem by taking the function field.
These are just examples of the interplay between field theory and algebraic geometry; there are many more things one could say. I would certainly say that a good command of field theory is essential to a modern algebraic geometer.
-
Some references:
A popular introductory reference to the analogy between Galois groups and fundamental groups seems to be the lecture notes Galois theory of schemes by Hendrik Lenstra. These notes assume familiarity with Galois theory and algebraic geometry (it seems that for a large portion, one can get away with only knowing commutative algebra). On the other hand, no knowledge of étale morphisms or harder topics like étale cohomology is assumed. Another reference is the chapter Fundamental groups of schemes of the stacks project (online or pdf). This also gives further references, e.g. to books written on the subject. A great book is Szamuely's Galois groups and fundamental groups.
Transcendence theory belongs to the realm of (commutative) algebra. Three useful references are Chapter VIII of Lang's Algebra, the chapter on fields in the stacks project (online or pdf), and Appendix A1 of Eisenbud's Commutative algebra with a view towards algebraic geometry. However, the latter contains some mistakes (he gets confused between separable and separably generated extensions in the course of trying to prove the relation between them).
I am not aware myself of any reference other than the original research papers for the non-rationality of the cubic threefold (which is the example of a stably rational variety that is not rational); I would be very interested if someone else knows one. The original papers are:

C. A. Clemens, P. A. Griffiths, The intermediate Jacobian of the cubic threefold (1972). Available on jstor here.
J. P. Murre, Reduction of the proof of the non-rationality of a non-singular cubic threefold to a result of Mumford (1973). Available on EUDML here.

Both contain a proof of the non-rationality of the cubic threefold, but I'm not sure they prove stable rationality. Murre's proof is supposed to be a simplification of the proof by Clemens–Griffiths.<|endoftext|>
-TITLE: Evaluate the integral $\int^{\infty}_{0} e^{-x}x^{100}dx$
-QUESTION [7 upvotes]: $$\int^{\infty}_{0} e^{-x}x^{100}dx$$

I am sure there is something here I cannot see; otherwise it is integration by parts 100 times.

REPLY [2 votes]: This is the Gamma function of $101$. Indeed by definition:
$$\Gamma[x] = \int_0^{+\infty} t^{x-1}\ e^{-t}\ \text{d}t$$
And so
$$\int_0^{+\infty} t^{100}\ e^{-t}\ \text{d}t = \Gamma[101] = 100!$$
About the integration
This integration can be done in several ways. By parts is surely the most intuitive one, even if it may be tedious (although you need just two or three steps to guess the whole behavior of the integration).
Otherwise, we can do it with the Feynman trick as shown below/above.<|endoftext|>
-TITLE: How many sewings are there on a soccer ball?
-QUESTION [16 upvotes]: A soccer ball is obtained by sewing $20$ hexagonal pieces of leather and $12$ pieces of leather of pentagonal shape.
A sewing joins together the sides of two adjacent pieces. How many sewings are there?

My effort
I was able to solve this problem by realizing that if I count the number of sewings adjacent to the hexagons and the ones adjacent to the pentagons I will be counting each sewing twice.
So, the number of sewings is $$\cfrac{120 + 60}{2}=90.$$

Second Approach (this is the one I am asking about)

If we count the sewings adjacent to the pentagons we have $12 \cdot 5 =60$ sewings; now to count the rest of the sewings I just observe that any other sewing starts at the edge of some pentagon, so I have $60$ other sewings, for a total of $120$ sewings.
-However this doesn't quite work, but if I look at the picture I have posted above it seems to be correct, as I don't have any pentagon sharing a sewing with another pentagon.
-What am I missing?

REPLY [15 votes]: The issue is that the picture depicts not the conventional soccer ball (a truncated icosahedron) but rather something a little different, the chamfered dodecahedron, also known as a truncated rhombic triacontahedron. This actually does have 120 edges.

So, in a sense, you were right both times, but just thinking about different polyhedra!

It's interesting to note that these are both examples of Goldberg Polyhedra, polyhedra made from only pentagons and hexagons -- although the faces are not necessarily regular (and in the chamfered dodecahedron, they are not).<|endoftext|>
-TITLE: Hodge numbers of a cartesian product of copies of $\mathbb{C}P^1$
-QUESTION [8 upvotes]: I wonder if some work has been done in the context of cohomology spaces of projective complex manifolds. Specifically I want to study the Hodge diamonds of $\mathbb{C}P^1\times\mathbb{C}P^1$ and $\mathbb{C}P^1\times\mathbb{C}P^1\times\mathbb{C}P^1$. A reference would be very helpful to get started.

REPLY [12 votes]: There is an analogue of the Künneth Theorem for Dolbeault cohomology. It can be found on page $105$ of Principles of Algebraic Geometry by Griffiths and Harris.
If $M$ and $N$ are compact complex manifolds, then we have the following equality of Hodge numbers
$$h^{u, v}(M\times N) = \sum_{\substack{p\, +\, r\, =\, u\\ q\, +\, s\, =\, v}}h^{p,q}(M)h^{r,s}(N).$$
For $\mathbb{CP}^1\times\mathbb{CP}^1$, you should obtain the following Hodge diamond:
\begin{matrix}
 & & 1 & & \\
 & 0 & & 0 & \\
0 & & 2 & & 0\\
 & 0 & & 0 & \\
 & & 1 & &
\end{matrix}
For $\mathbb{CP}^1\times\mathbb{CP}^1\times\mathbb{CP}^1$, the Hodge diamond is
\begin{matrix}
 & & & 1 & & & \\
 & & 0 & & 0 & & \\
 & 0 & & 3 & & 0 & \\
0 & & 0 & & 0 & & 0\\
 & 0 & & 3 & & 0 & \\
 & & 0 & & 0 & & \\
 & & & 1 & & &
\end{matrix}
You can prove by induction that $h^{k,k}((\mathbb{CP}^1)^n) = \displaystyle\binom{n}{k}$ and for $p \neq q$, $h^{p,q}((\mathbb{CP}^1)^n) = 0$.<|endoftext|>
-TITLE: Limit of $\left|\sin(n)\right|^{1/n}$
-QUESTION [6 upvotes]: I'm having trouble showing rigorously what the limit of $x_n=|\sin(n)|^{1/n}$ is. What I have shown is that $x_n$ cannot converge to $0$ and is bounded by $1$, and that should suffice to show that $x_n$ effectively converges to $1$.
However, I can't figure out how to formalize this proof and show it in a rigorous manner. My guess would be to try and show that the limit of $|a_n|^{1/n}$ can be $1$ if $|a_n|$ is bounded by $1$ and does not converge to $0$. I don't know if this more general statement holds, and if it would simplify or complexify the problem.

REPLY [5 votes]: Hi,
the idea is to bound $\sin(n)$ for $n\in \mathbb{N}$ from below in such a way that you see that $\sin(n)$ is so far away from $0$ that $\left|\sin(n)\right|^\frac{1}{n}$ goes to $1$. Therefore we have to show that natural numbers have a certain distance to multiples of $\pi$.
For this, you can use the fact that $\pi$ is not a Liouville number (see http://mathworld.wolfram.com/LiouvilleNumber.html).
So, there is an $n_o\in\mathbb{N}$ such that $\left|\pi-\frac{p}{q}\right|\geq \frac{1}{q^{n_o}}$ for all $p,q\in \mathbb{N}$, or, equivalently $\left|q\pi-p\right|\geq \frac{1}{q^{n_o-1}}$.
Now choose $p=n$, and $q$ in a way that $q\pi$ is close to $n$, i.e.
$q\in[\frac{n-\frac{\pi}{2}}{\pi}, \frac{n+\frac{\pi}{2}}{\pi}]$.
As now $q\leq \frac{n+\frac{\pi}{2}}{\pi}$ and $\left|q\pi-p\right|\leq\frac{\pi}{2}$, and as for $x\in[0,\frac{\pi}{2}]$ there is the estimate $\sin(x)\geq \frac{x}{2}$, we get the following series of inequalities:
$$
|\sin(n)|=|\sin(q\pi-n)|=\sin|q\pi-n|\geq \frac{1}{2}|q\pi-n|\geq\frac{1}{2q^{n_o-1}}\geq \frac{1}{2}\cdot\left(\frac{\pi}{(n+\frac{\pi}{2})}\right)^{n_o-1}.
$$
Taking the $n$-th root, we obtain
$$
|\sin(n)|^{\frac{1}{n}}\geq \frac{1}{2^{\frac{1}{n}}}\cdot\left(\frac{\pi^{\frac{1}{n}}}{(n+\frac{\pi}{2})^{\frac{1}{n}}}\right)^{n_o-1}.
$$
As the limit of $n^{\frac{1}{n}}$ for $n\rightarrow\infty$ is $1$ and $n_o$ is fixed, the right hand side goes to $1$ for $n\rightarrow\infty$. As the left hand side is bounded from above by $1$ as well, it has to converge to $1$.<|endoftext|>
-TITLE: Problem with inequality: $ \left| \sqrt{2}-\frac{p}{q} \right| > \frac{1}{3q^2}$
-QUESTION [6 upvotes]: Prove that for all $p,q\in \mathbb{Z}$, $q>0$ we have:
$$
\left| \sqrt{2}-\frac{p}{q} \right| > \frac{1}{3q^2}.
$$
To be honest, I do not know where to start - any help would be appreciated.

REPLY [8 votes]: You can assume that $p>0$ and $q>1$, and $\sqrt 2 + p/q ≤ 3$, otherwise this is easy: if $\sqrt 2 + p/q > 3$ then $\sqrt 2-p/q < 2\sqrt 2 - 3<0$, so $$\left|\sqrt2 - \frac{p}{q}\right| > 3-2\sqrt 2 > 1/12 ≥ 1/(3q^2)$$
The highest power of $2$ dividing $2q^2$ is odd, while the highest power of $2$ dividing $p^2$ is even. Then, $p^2$ and $2q^2$ must be distinct integers, thus $|2 q^2 - p^2| \geq 1$. Then
$$\left|\sqrt2 - \frac{p}{q}\right| = \frac{|2q^2-p^2|}{q^2(\sqrt{2}+p/q)} \ge \frac{1}{q^2(\sqrt2 + p / q)} \ge \frac{1}{3q^2},$$
as desired.<|endoftext|>
-TITLE: Composition of a continuous function and a discontinuous function can be continuous.
-QUESTION [6 upvotes]: Okay, I think I found an example of a discontinuous function $f$ composed with a continuous function $g$ that makes a continuous function $h$. Okay let:

-$f:[0,1]\to [0,1)$ where $f(x)=\begin{cases}x \quad \textrm{if} \quad x\in[0,1)\\ 0 \quad \textrm{if} \quad x=1\end{cases}$
-$g:[0,1)\to \mathbb{R^2}$ where $g(x)= (\cos(2\pi x),\sin(2\pi x))$

I am thinking that $h(x)=g(f(x))$ is continuous because the only discontinuity that could occur is at $x=1$, and it doesn't, because $\lim\limits_{x\to 1}h(x)=(1,0)=h(1)$. But I am sort of confused as to whether my justification is correct or not.

REPLY [6 votes]: Your answer is correct: if $a≠1$, then $\lim\limits_{x \to a} g(f(x))=g(f(a))$ because $g$ is continuous at any point and $f$ is continuous at $a$.
You can create another example by taking $f$ to be any discontinuous function, and $g : x \mapsto c$ any constant function, so that $g \circ f$ is constant and, in particular, is continuous.

In general, if $y_0 = \lim\limits_{x \to x_0} f(x)$ and $l = \lim\limits_{y \to y_0}g(y)$ exist and if there exists some $\epsilon>0$ such that $f(x)≠y_0$ for all $x$ with $0<|x-x_0|<\epsilon$, then $l=\lim\limits_{x \to x_0} (g \circ f)(x)=:L$.
The second hypothesis is important.
For instance, if you consider the constant function $f \equiv 1$, $g = \mathbb 1_{\{1\}}$ and $x_0=0$, then $y_0=1,l=0$ but $L=1$.<|endoftext|>
-TITLE: Give an example of an equicontinuous sequence that does not converge uniformly
-QUESTION [5 upvotes]: Give an example of an equicontinuous sequence of functions ($f_n$) over a non-compact set $S\subset\Bbb R^n$ converging pointwise to a function $f$ at each $x\in S$, but such that $f_n$ does not converge uniformly to $f$ over $S$.
I'm really stuck on this problem; I thought about the cases of $f_n(x) = x^n$ with the domain ($0,1$) or $f_n(x) = \sin(nx)$ over a non-compact set, but I failed to derive an example. Could someone help me to find an example please? Thanks

REPLY [2 votes]: $$f_1(x) = 0$$
except in a continuous "hump" between 0 and 1.
$$f_n(x) = f_1(x-n).$$<|endoftext|>
-TITLE: Formula to find the first intersection of two arithmetic progressions
-QUESTION [7 upvotes]: I am not good at math, but I need to determine if two generic arithmetic progressions have an intersection point and, in that case, find the first intersection. I've searched the web and found some solutions, but I couldn't understand them.
Is it possible to have a simple formula or algorithm that finds the first intersection point of two arithmetic progressions?
Example 1:
$$
A_n = A_1 + (n - 1)d \\
AP1: A_1 = 1, d = 14 \Rightarrow \{1, 15, 29, 43, \dotsc \} \\
AP2: A_1 = 8, d = 21 \Rightarrow \{8, 29, 50, 71, \dotsc \} \\
$$
Result: First intersection point on $A_n = 29$
Example 2:
$$
A_n = A_1 + (n - 1)d \\
AP1: A_1 = 1, d = 14 \Rightarrow \{1, 15, 29, 43, \dotsc \} \\
AP2: A_1 = 8, d = 28 \Rightarrow \{8, 36, 64, 92, \dotsc \}
$$
Result: Does not have an intersection point
The reason I need this is because I am developing a calendar (like Google Calendar) but where it is not allowed to create two event series that intersect each other. I've posted a similar question here.

REPLY [6 votes]: Assume the two progressions are
$$
A_n = A_1 + (n-1) \, d \\
B_m = B_1 + (m-1) \, D
$$
You want to check
\begin{align}
A_n &= B_m \\
A_1 + (n-1) \, d &= B_1 + (m-1) \, D \iff \\
A_1 - B_1 + D - d &= -n \, d + m \, D \iff \\
-d \, n + D \, m &= A_1 - B_1 + D - d \quad (1)
\end{align}
Interpretation as Linear Diophantine Equation
Equation $(1)$ can be interpreted as a linear Diophantine equation
$$
a X + b Y = c \quad (X, Y \in \mathbb{Z}) \quad (2)
$$
with $a = -d$, $b = D$ and
$c = A_1 - B_1 + D - d \in \mathbb{Z}$
and variables $X = n$ and $Y = m$, where one is only interested in the positive solutions $X > 0, Y > 0$.
For this kind of equation there exists an algorithm to determine the solutions.
Criterion for Solutions:
For solutions to exist, one needs $g = \gcd(a, b)$ to divide $c$, $g \mid c$.
Here we have
$$
g = \gcd(-d, D) = \gcd(d, D)
$$
So the criterion for a solution is
$$
\gcd(d, D) \mid A_1 - B_1 + D - d \quad (3)
$$
Because $\gcd(d, D) \mid D - d$, it suffices to check
$$
\gcd(d, D) \mid A_1 - B_1 \quad (4)
$$
Solution of the Homogeneous Equation:
The homogeneous equation $a X_h + b Y_h = 0$ has the solutions
$$
(X_h,Y_h) = (t \, b', -t\, a') \quad (t \in \mathbb{Z}) \quad (5)
$$
with $a' = a / g = -d / \gcd(d,D)$, $b' = b / g = D / \gcd(d, D)$.
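Since the next step needs a Bézout pair $(u, v)$, here is a minimal Python sketch of the extended Euclidean algorithm for the questioner's calendar application (my own illustration; the function name is made up and not part of any standard API):

    def extended_gcd(a, b):
        """Return (g, u, v) with a*u + b*v == g == gcd(a, b)."""
        if b == 0:
            return (abs(a), 1 if a >= 0 else -1, 0)
        g, u, v = extended_gcd(b, a % b)
        # unwind: b*u + (a % b)*v == g  and  a % b == a - (a // b)*b
        return (g, v, u - (a // b) * v)

    # For the first example below: a = -d = -14, b = D = 21,
    # and extended_gcd(-14, 21) == (7, 1, 1), i.e. -14*1 + 21*1 == 7.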
-Finding one Particular Solution: -A particular solution of $(2)$ is usually found by finding a solution $(u, v)$ to -$$ -a u + b v = g \iff \\ --d u + D v = \gcd(d, D) -$$ -these numbers are calculated by the extended Euclidean algorithm for $-d$ and $D$. -A particular solution is -$$ -(X_p, Y_p) = \left(\frac{c}{g} u, \frac{c}{g} v \right) \quad (6) -$$ -General Solution: -The general solution is: -\begin{align} -(X, Y) -&= (X_h + X_p, Y_h + Y_p) \\ -&= \left(\frac{c}{g} u + t \, b', \frac{c}{g} v -t\, a' \right) \\ -&= \left(\frac{c}{g} u + t \, \frac{D}{g}, \frac{c}{g} v + t\, \frac{d}{g} \right) \quad (7) -\end{align} -Reducing to the First Positive Solution: -We need to choose a $t$ with -$$ -X(t) = \frac{c}{g} u + t \, \frac{D}{g} > 0 \\ -Y(t) = \frac{c}{g} v + t\, \frac{d}{g} > 0 -$$ -Among those $t$ one needs to choose the one which minimizes $(X,Y)$: -$$ -t = \max \left\{ -\left\lfloor -\frac{c}{D} u \right\rfloor + 1, -\left\lfloor -\frac{c}{d} v \right\rfloor + 1 -\right\} \quad (8) -$$ -Example: -For your first example we had $A_1 = 1$, $B_1 = 8$, $d = 14$, $D = 21$. -We have $g = \gcd(14, 21) = 7$. -The extended Euclidean algorithm gives (e.g. see here) -$$ -(u, v) = (1, 1) \\ --14 \, 1 + 21 \, 1 = 21 - 14 = 7 = \gcd(14, 21) -$$ -We have $c = 1 - 8 + 21 - 14 = 0$, so we have just a homogenous equation here. -The solution is -$$ -(X_h, Y_h) = (t b', -t a') = (t (21/7), -t(-14/7)) = (3 t, 2 t) -\quad (t \in \mathbb{Z}) -$$ -These are all integer solutions. The first positive solution happens for $t = 1$ with -$$ -(X, Y) = (3, 2) = (n, m) -$$ -and indeed -$$ -A_3 = 1 + (3-1) \cdot 14 = 1 + 2 \cdot 14 = 29 \\ -B_2 = 8 + (2-1) \cdot 21 = 8 + 21 = 29 -$$<|endoftext|> -TITLE: Why does free imply torsion-free? -QUESTION [7 upvotes]: I want to verify that if $R$ is an integral domain and $M$ is an $R$-module, that if $M$ is free, $M$ must also be torsion-free. -Where can I start with this? I feel like it is obvious but I can't see it. I am just getting started with my course. - -REPLY [4 votes]: Fix a basis $\{b_i\}_{i \in I}$ of $M$. Take $m \in M$, $m \neq 0$, and $r\in R$ such that $rm = 0$. Write $m = m_{1}b_{i_1}+\cdots+m_k b_{i_k}$, for some scalars $m_j \in R$. Then $$rm = rm_1b_{i_1}+\cdots+ rm_k b_{i_k} = 0$$gives $rm_jb_{i_j} = 0$ for all $1 \leq j \leq k$. Since $m \neq 0$, there is $j^\ast$ with $m_{j^\ast} \neq 0$. Since $\{b_{i_{j^\ast}}\}$ is linearly independent, we have that $rm_{j^\ast}b_{i_{j^\ast}} = 0$ implies $rm_{j^\ast} = 0$. Since $R$ is an integral domain and $m_{j^\ast} \neq 0$, we get $r=0$ as wanted.<|endoftext|> -TITLE: Informal proof of Gödel's second incompleteness theorem -QUESTION [5 upvotes]: This relates to two previous threads: -Question about Godel's first incompleteness theorem and the theory within which it is proved -Explanation of proof of Gödel's Second Incompleteness Theorem (I'm using ideas from the proof given here.) -I'd like to know if the following informal proof of Gödel's 2nd incompleteness is correct. We accept Gödel's 1st incompleteness theorem as proven: - -We have a theory $\sf{T}$ capable of basic arithmetic. - -Theory $\sf{T}$ is capable of proving Gödel's 1st incompleteness theorem. (I'm suspicious about this) - -From 2, Theory $\sf{T}$ is capable of proving the following statement about itself: "If $\sf{T}$ is consistent, then $\sf{T}$'s Gödel statement $G$ is true but unprovable within $\sf{T}$" - -Assume $\sf{T}$ proves its own consistency. 
- -Then using 3 and 4, it proves its own Gödel statement $G$ is true but unprovable within T. - -Since $\sf{T}$ proves the Gödel statement $G$ is true... $\sf{T}$ can assert "$G$ is provable within $\sf{T}$" (suspicious?)... - -Using 5 and 6, $\sf{T}$ asserts $G$ is both provable and unprovable within $\sf{T}$, so $\sf{T}$ is inconsistent. - - -Is this essentially correct? Thanks in advance. - -REPLY [4 votes]: Your proof is essentially correct. You should have enough to start going a bit more in the details of the proof, which should convince you. -If you want some help in the process, you can read Smullyan's wonderful book "Gödel's Incompleteness Theorems". It tries to make the reader understand the ideas fully in a very intuitive way. -Note that there is no issue regarding your point 2. Since you have an encoding of arithmetic in your theory, then it is sufficient to use the arithmetical encoding of the theorem.<|endoftext|> -TITLE: What is the winning strategy for this Game on the Power Set -QUESTION [10 upvotes]: Given a finite set, players alternately choose proper subsets. Once a subset has been chosen, none of its subsets may be -chosen later. The last player to move wins. -I figured out that, with optimal play, Player 1 wins. This is because he can either choose the null set and give away his move, becoming the second player, or not choose the null set and do a "real" move, staying the first player. Because he can choose, and one player wins with optimal play, Player 1 must win. However, I can't figure out a strategy for him, other than choosing/not choosing the null set. - -REPLY [5 votes]: This game is very well known. Unfortunately, it has many different names, and very few known results. -Names/related games -Chomp is a game usually played on a rectangle where players take bites but cannot eat the top-left square. For the finite set of size $2$, this game is $2\times2$ chomp in this way: $$\begin{array}{|c|c|}\hline \boxed{\{1,2\}}& \{1\} \\ \hline \{2\}& \emptyset \\ \hline\end{array}$$ -This can be generalized to high dimensions, using the $n$-cube, so that your game is equivalent to "$n$-dimensional Chomp on a $2\times2\times\cdots\times2$ board". -Andries E. Brouwer has a nice page on Chomp with a lot of information. Your game is mentioned in the section "Chomp on a simplicial complex", although he uses the opposite convention, where the moves are the complements of the moves you describe, so that taking the whole set would be like taking the empty set and taking the empty set would make you lose (taking the whole set is disallowed in your description). He also mentions this game can be called "Schuh's game for square-free $N$" (here players select divisors), "subset takeaway", "hyperchomp" (as used on Jan Draisma's recreational maths page [broken]), and the "superset game". -Results -In Nim-type games by Gale and Neyman, it was conjectured that the winning move to the subset game is always to take the maximal element (in your game's terms, to take $\emptyset$ first). This is shown (after recasting the game so that it starts after that standardized move) for $n=1,2,3$ in Albert Meyer's notes for Mathematics for Computer Science [broken] and $n=4$ in the corresponding solutions [broken]. At Brouwer's page mentioned earlier, and in the paper On Three-rowed Chomp by Brouwer, Horváth, Molnár-Sáska, and Szabó, it is mentioned that this remains true for $n=5,6,7$. 
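As a quick experiment (my own sketch, independent of the references above), the game is small enough to brute-force for tiny ground sets, confirming the first-player win that the argument in the question predicts:

    from functools import lru_cache
    from itertools import combinations

    def proper_subsets(n):
        """All proper subsets (including the empty set) of {0, ..., n-1}."""
        return [frozenset(c) for r in range(n) for c in combinations(range(n), r)]

    @lru_cache(maxsize=None)
    def first_player_wins(state):
        """state: frozenset of subsets still available; last player to move wins."""
        for move in state:
            # choosing `move` also removes every subset of it from play
            rest = frozenset(s for s in state if not s <= move)
            if not first_player_wins(rest):
                return True
        return False  # no legal move left: the previous player moved last and won

    for n in (1, 2, 3):
        print(n, first_player_wins(frozenset(proper_subsets(n))))  # True, True, True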
-I do not know of any further results, but this superset game was mentioned in Guy and Nowakowski's list of unsolved problems in CGT from More Games of No Chance, so any further progress would have been in the past 15 years or so, but I cannot find anything else.
-Edit: The links to Meyer's notes are broken now, but the full textbook has the same thing without the solutions if you search for "game". I have no replacement link for Jan Draisma's page. Also, a lot more information and recent developments can be found at the MO question David Gale's subset take-away game<|endoftext|>
-TITLE: Mersenne, Fibonacci... are there other cases in which the existence of a given prime implies the existence of another prime?
-QUESTION [7 upvotes]: Thinking about why it seems impossible to find a case in which the existence of a given prime implies the existence of another bigger prime, I tried to make a list of cases in which knowing that a number is prime automatically tells us that another one is prime. So far I can recall just two well-known non-trivial situations:

-Mersenne numbers: if $p=2^n-1$ is prime then $n$ is prime as well. The opposite is not true.
-Fibonacci numbers: if $p=F_n$ is prime then $n$ is prime as well. The opposite is not true.

-In both cases $n$ is smaller than $p$.
-I would like to ask the following questions:

-Did I forget more cases?
-It seems that the existence of the bigger prime $p$ implies the existence of the smaller prime $n$, but I cannot recall any cases in which a single (only one required) smaller prime $n$ implies that a bigger prime $p$ automatically exists. Are there any papers regarding that impossibility? Are there such cases?
-Thank you!

-Update 16/02/2016: as kindly explained in the comments by @DanielFischer and the answer by @TitoPiezasIII, there are generalizations of both the Mersenne and Fibonacci numbers that are also part of the list.

REPLY [3 votes]: There is a prime-generating formula, but not a practical one.
Gandhi's formula for the next prime: Let $Q$ be the product of the primes less than the odd prime $p$. (If $p=3$ then $Q=2.$) $$\text {Let }\quad S=-\frac {1}{2}+\sum_{d|Q}\frac {\mu (d)}{2^d-1}$$ where $\mu$ is the Möbius function. $$\text {Then }\quad 1<2^pS<2.$$ Since $2^{p-1}S<1$ and $2<2^{p+1}S$, this uniquely defines the natural number $p$.
To prove it write each term $\mu (d)/(2^d-1)=\mu (d)\sum_{n=1}^{\infty}2^{-n d}.$ Collect powers of $2$ in the sum over $d|Q.$ Using the property of $\mu$ that $\sum_{e|n}\mu(e)=0$ for $n>1$ we obtain $$S=\sum_{n\in T}2^{-n}$$ where $T=\{m:m>1\land \gcd (m,Q)=1\}.$ From the def'n of $Q$ we have $\min T=p$, and all members of $T$ are odd, so $$2^{-p}
-TITLE: Can I take Linear Algebra without having learned Vector Calculus?
-QUESTION [6 upvotes]: I need to take Linear Algebra to progress in my major and the course has Calculus III listed as a prerequisite. I've already taken Calculus III and passed with a C, although I didn't learn the smallest bit of it the entire semester (not a good semester for me). Should I reteach myself Calculus III and hold off on taking Linear Algebra until I do, or would that be a waste of time and should I just go ahead and take the Linear Algebra course? I really don't want to waste time teaching myself something I already took...but if I really need the knowledge, I might.

REPLY [3 votes]: Go ahead, you can graduate without knowing how to integrate properly in $n$ dimensions...
-Anyway, in Italy Linear Algebra is usually taken in the first year in both Physics and Mathematics, while multivariable calculus is done in the second year.
-Indeed, not only is it not a prerequisite for understanding the subject, but it could very well be quite the contrary, since Linear Algebra is quite helpful in understanding Calculus III.
-Indeed the prerequisites of a standard Linear Algebra course are very few, but there can be more advanced courses where linear algebra is applied to some specific topic (e.g. Differential Equations, Operator Theory, etc...). In that case a knowledge of calculus may be needed, but usually just the very basics of one or several variables. Generally speaking, even in these courses the most calculus-involving construction you'll see is the Jacobian.<|endoftext|>
-TITLE: Why do we (mostly) restrict ourselves to Latin and Greek symbols?
-QUESTION [20 upvotes]: 99% of variables, constants, etc. that I run into are named for either a Latin character (like $x$) or a Greek character (e.g. $\pi$). Sometimes I twitch a little when I have to keep two separate meanings for a symbol in my head at once.
Back during the Renaissance and prior, I can see why this would be the case (everybody who set today's conventions for notation spoke Latin or Greek). But it's 2016; thanks to globalization and technological advancements, when symbol overloading becomes an issue one could simply look at the Unicode standard (e.g. through a character map or virtual keyboard) and pick their favorite character.
So why do we (mostly) limit our choice of symbols to Latin and Greek characters?

REPLY [6 votes]: It's in part for historical reasons and it's in part for practical reasons. You have already figured out the historical reasons.
One practical reason is that notation is meant to convey ideas, it's not meant to be used for its own sake. If instead of asking whether your ideas are correct your readers get stuck on whether your notation is correct, or worse, what your notation even means, then the notation has failed.
Symbol overloading can be a problem, especially in a long document. Suppose you're writing a book about prime numbers. First you need $p$ and $q$ to be any odd positive primes, then you need them to be any primes in $\mathbb{Z}$ whatsoever, next you need $q - 1$ to be a multiple of $p$, then you need them to have quadratic reciprocity, and later on still you need them to not have quadratic reciprocity.
You could be tempted to declare $p$ and $q$ are any positive odd primes, ぱ、く are any primes in $\mathbb{Z}$, パ、ク are primes such that one is one more than a multiple of the other, $\hat{p}$ and $\hat{q}$ have quadratic reciprocity, ب and خ don't have quadratic reciprocity. You declare them at the beginning of the book and then use them without further explanation.
I'm sure these could be made a little less arbitrary, but if you have to read such a book, it's gonna get pretty damn tiresome to keep having to refer to the list of symbols at the beginning. It is better, in my opinion, to redefine the symbols at each theorem or other logical division of your document.
It might also help to think of it as somewhat analogous to accidentals in music notation: you have E$\flat$ in measure 20, no E's of any kind for ten bars, then an E with no accidental next to it. Is it supposed to be E$\flat$? Probably not. If there was only one intervening barline, you could reasonably think the composer actually meant E$\flat$ again.
But ten barlines is quite enough to cancel, I think. -So if in Theorem 2.1 the author uses $p$ and $q$ to mean primes with quadratic reciprocity, then pages later in Theorem 2.7 he uses $p$ and $q$ just to mean "primes," I would think quadratic reciprocity is now no longer a requirement. -Font variations can help generate distinct symbols that still bear some recognizable relation to other symbols. $P$ and $Q$ could be specific sets of primes, $\mathcal{P}$ and $\mathcal{Q}$ could be the products of primes in those sets, $\mathbb{P}$ is the set of all primes in $\mathbb{Z}$, $\mathfrak{P}$ and $\mathfrak{Q}$ are ideals generated by those primes, etc. -But even going this route it's possible to get carried away. Better to use a few symbols judiciously than a lot of symbols recklessly. -And one last point: even within the Latin and Greek alphabets, we limit ourselves further. $l$ is barely used. Nor do certain Greek letters get much play here because of their similarity to Latin letters, e.g., compare the uppercase $A$ to the uppercase alpha. The potential for confusion multiplies in the broader Unicode palette: is ム from katakana, bopomofo or CJK unified ideographs? You can't tell just by looking at it.<|endoftext|> -TITLE: Why so many 'multi-part' definitions, as opposed to 'unified' ones? -QUESTION [8 upvotes]: Many definitions consist of multiple parts: an equivalence relation is symmetric AND reflexive AND transitive; a topology is closed over finite intersections AND over arbitrary unions; etc. However, I've seen a number of cases where it seems simpler to combine the parts into a single definition: the result is often shorter and easier to calculate with.$ -\newcommand{\ref}[1]{\text{(#1)}} -\newcommand{\inf}[1]{\text{inf}(#1)} -\newcommand{\sup}[1]{\text{sup}(#1)} -\newcommand{\then}{\Rightarrow} -\newcommand{\when}{\Leftarrow} -\newcommand{\true}{\text{true}} -\newcommand{\false}{\text{false}} -$ - -$\bullet\;$ As my most recent example, I discovered (through questions here on MSE) that $\;\inf{\cdots}\;$ can simply be defined by postulating $$ -z \leq \inf{A} \;\equiv\; \langle \forall a : a \in A : z \leq a \rangle -$$ for any $\;z\;$ and lower-bounded $\;A\;$. Contrast this with \begin{align} -& z \in A \;\then\; \inf{A} \leq z \\ -& \langle \forall a : a \in A : z \leq a \rangle \;\then\; z \leq \inf{A} \\ -\end{align} or even \begin{align} -& z \in A \;\then\; \inf{A} \leq z \\ -& \langle \forall \epsilon : \epsilon > 0 : \langle \exists a : a \in A : a < \inf{A} + \epsilon \rangle \rangle \\ -\end{align} -$\bullet\;$ For sets, the symmetric difference is often defined as $$ -A \triangle B \;=\; (A \setminus B) \cup (B \setminus A) -$$ or $$ -A \triangle B \;=\; (A \cup B) \setminus (A \cap B) -$$ while in practical proofs I find it much easier to work with $$ -x \in A \triangle B \;\equiv\; x \in A \;\not\equiv\; x \in B -$$ for all $\;x\;$, since $\;\not\equiv\;$ is the logic-level equivalent of $\;\triangle\;$. -$\bullet\;$ The textbook definition of '$\;\mathscr T\text{ is a topology on }X\;$' is that \begin{align} -& \mathscr T \subseteq \mathscr P(X) \\ -& \emptyset \in \mathscr T \\ -& X \in \mathscr T \\ -& \mathscr T\text{ is closed under }\cdots \cap \cdots \\ -& \mathscr T\text{ is closed under }\bigcup \\ -\end{align} However, given closure under $\;\bigcup\;$, the first three conditions can be unified to just $$ -\bigcup \mathscr T = X -$$ which has the very intuitive reading '$\;\mathscr T\;$ covers $\;X\;$'. 
-$\bullet\;$ In logic, I almost always see the 'uniqueness quantifier' $\langle \exists! x :: P(x) \rangle$ ('there exists exactly one') defined as $$
\langle \exists x :: P(x) \rangle \;\land\; \langle \forall x,y : P(x) \land P(y) : x=y \rangle
$$ where $$
\langle \exists y :: \langle \forall x :: P(x) \;\equiv\; x = y \rangle \rangle
$$ is shorter and often seems much easier to work with. And it has a nice symmetry: the $\;\then\;$ direction of the equivalence is uniqueness, while the $\;\when\;$ direction is existence.
-$\bullet\;$ Finally, as an example from various domains, a statement of the form $\;P \equiv Q\;$ is very often seen as an invitation to give separate proofs for $\;P \then Q\;$ and $\;Q \then P\;$; and similarly for mutual inclusion for sets, and for proving equality of numbers using $\;\le\;$ and $\;\ge\;$, or even $\;\lt,=,\gt\;$.

The common pattern in all of the above is that people seem to prefer 'multi-part' definitions over 'unified' definitions. And I'm wondering why this is.
Does a proof which is split in parts perhaps have a proof-practical advantage? As a kind of counterexample, a while ago I discovered that a relation $\;R\;$ on $\;A\;$ is an equivalence relation exactly when $$
aRb \:\equiv\: \langle \forall x :: aRx \equiv bRx\rangle
$$ holds for all $\;a,b\;$ (where $\;a,b,x\;$ range over $\;A\;$). However, when I tried to actually use this definition to prove some relation to be an equivalence relation, then almost always the resulting proof was more complex than a proof of the three parts (reflexivity, symmetry, transitivity). So in this specific example, the 'unified' definition did not really help me. But in my experience, this has been the exception: 'unified' definitions almost always really work in practice for me.
Do the parts perhaps have an educational value? Perhaps, at least initially, it is easier to build an intuition using separate parts, and then both those proofs and also later proofs are structured around that 'multi-part' intuition.
Is there perhaps an 'implicational bias'? In other words, is it perhaps that I've been brought up in the 'school' of Dijkstra-Feijen, Gries-Schneider, et al., where there is an emphasis on equality and equivalence and symmetry, while most people approach proofs 'sequentially' based on inferences?
Or is something else at work here?

REPLY [2 votes]: Regarding binary relations, there are many important different types. An equivalence relation is symmetric, reflexive, and transitive. A linear order < is anti-symmetric, irreflexive, transitive, and satisfies trichotomy. A well-order is a linear order with an additional condition. A poset (the kind used in the set-theoretic topic called Forcing) is reflexive and transitive. And there are of course many others. Instead of trying to compress the definitions, it is often more useful to list the parts, as it can then be seen how varying the parts results in other structures.
Chess players say "To win you must use all your pieces". When attempting a proof, a list of properties, even if logically redundant, can help you to see some important property that you haven't used.
Sometimes a defining list is easier to use because it incorporates more data: Let $\times$ be an associative binary operation on a set $G\ne \emptyset$ such that $\forall x,y\in G\;[\;(\exists! z\in G\;(x\times z=y))\land (\exists!z'\in G\; (z'\times x=y)) \;].$ It takes some work to show that this meets all the "usual list" of conditions for a group.
The usual def'n mentions an identity and unique two-sided inverses.
-On the other hand, some writers do present def'ns that are much longer than most people would deem necessary.<|endoftext|>
-TITLE: For which values $a, b \in \mathbb{R}$ is the function $u(x,y) = ax^2+2xy+by^2$ the real part of a holomorphic function in $\mathbb{C}$
-QUESTION [8 upvotes]: For which values $a, b \in \mathbb{R}$ is the function $$u(x,y) = ax^2+2xy+by^2$$ the real part of a holomorphic function in $\mathbb{C}$?

-I think we have to use the Cauchy-Riemann theorem, but I don't know how to find these two constants from a certain function $f(x,y) = u(x,y)+i v(x,y)$.
-Could anyone help me?

REPLY [3 votes]: This Result and the Cauchy-Riemann Equations show that $u(x,y)$ is the real part of a holomorphic function iff $u$ is harmonic.
So $u_{xx}+u_{yy}=0$, i.e. $b=-a$. QED<|endoftext|>
-TITLE: Nontrivial subring with identity of a ring without identity
-QUESTION [6 upvotes]: I'm looking for an example of a ring and a subring with $R \subset S$ such that $R$ has 1 but $S$ does not. It's easy to choose $R$ to be the trivial ring with $0=1$, but are there any more exotic examples of this phenomenon?

REPLY [4 votes]: Let $K$ be any nontrivial unital ring.
Let $R = \left\{\left(\begin{smallmatrix} a & 0 \\ 0 & 0 \end{smallmatrix}\right) : a \in K \right\}$, and let $S = \left\{\left(\begin{smallmatrix} a & b \\ 0 & 0 \end{smallmatrix}\right) : a,b \in K \right\}$. Note that $S$ is a rng under the standard operations in $M_2(K)$ whereas $R$ is a ring with identity $\left(\begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix}\right)$.

REPLY [3 votes]: Let $R$ be your favorite ring with $1$, let $T$ be your favorite ring without $1$, and let $S=R\times T$ (identifying $R$ with $R\times\{0\}\subset S$). Your trivial example is just the special case of this when $R$ is the trivial ring.<|endoftext|>
-TITLE: Is there a formula for $\sin(xy)$
-QUESTION [18 upvotes]: Can we express a trigonometric function of the product of two angles as a function of trigonometric functions of its factors?
For example: Is there a formula for $\sin(xy)$ as a function of $\sin x$ and $\sin y$ or other trigonometric functions of $x$ and $y$?

REPLY [5 votes]: One can use binomial expansion in combination with the complex extension of trig functions:
$$\cos(xy)=\frac{e^{xyi}+e^{-xyi}}{2}=\frac{a^{xy}+a^{-xy}}2$$
using $a=e^i$ for simplicity.
We also have:
$$(a+a^{-1})^n=\sum_{i=0}^{n}\frac{n!\,a^{n-i}a^{-i}}{i!(n-i)!}=\sum_{i=0}^{n}\frac{n!\,a^{n-2i}}{i!(n-i)!}$$
which is obtained by binomial expansion.
-We also have: -$$(a+a^{-1})^n=(a^{-1}+a)^n=\sum_{j=0}^{\infty}\frac{n!a^{2j-n}}{j!(n-j)!}$$ -And, combining the two, we get: -$$(a+a^{-1})^n=\frac{\sum_{i=0}^{\infty}\frac{n!a^{n-2i}}{i!(n-i)!}+\sum_{j=0}^{\infty}\frac{n!a^{2j-n}}{j!(n-j)!}}2=\frac12\sum_{i=0}^{\infty}\frac{n!}{i!(n-i)!}(a^{n-2i}+a^{-(n-2i)})$$ -If we have $\cos(n)=\frac{a^n+a^{-n}}2$, then we have -$$(2\cos(n))^k=(a^n+a^{-n})^k=\sum_{i=0}^{\infty}\frac{k!}{i!(k-i)!}\frac{a^{n(k-2i)}+a^{-n(k-2i)}}2$$ -Furthermore, the far right of the last equation can be simplified back into the form of cosine: -$$\sum_{i=0}^{\infty}\frac{k!}{i!(k-i)!}\frac{a^{n(k-2i)}+a^{-n(k-2i)}}2=\sum_{i=0}^{\infty}\frac{k!}{i!(k-i)!}(\cos(n(k-2i)))$$ -Thus, we can see that $\cos(nk)$ is simply the first of the many terms in $(2\cos(n))^k$, and we may rewrite the summation formula as: -$$(2\cos(n))^k=\cos(nk)+\sum_{i=1}^{\infty}\frac{k!}{i!(k-i)!}(\cos(n(k-2i)))$$ -And rearranging terms, we get: -$$\cos(nk)=2^k\cos^k(n)-\sum_{i=1}^{\infty}\frac{k!}{i!(k-i)!}(\cos(n(k-2i)))$$ -This gives explicit formulas for $k=0,1,2,3,\dots$ -I note that there is no way by which you may reduce the above formula without the knowledge that $n,k\in\mathbb{Z}$. -Also, it is quite difficult to produce the formulas for, say, $\cos(10x)$ because as you proceed to do so, you will notice that it requires knowledge of $\cos(8x),\cos(6x),\cos(4x),\dots$, which you can eventually solve, starting with $\cos(2x)$ (it comes out to be the well known double angle formula), using this to find $\cos(4x)$, use that to find $\cos(6x)$, etc. all the way to $\cos(10x)$. -Notably, this can be easier than Chebyshev Polynomials because it only requires that you know the odd/even formulas less than the one you are trying to solve (due to the $-2i$). -But this is the closest I may give to you for the formula of $\cos(xy)$, $x,y\in\mathbb{R}$. -It is also true for $x,y\in\mathbb{C}$. -As others have noted, this can also be solved in terms of the Chebyshev Polynomial: -$$T_n(\cos(x))=\cos(nx)$$ -Trivially, -$$T_0(\cos(x))=1$$ -$$T_1(\cos(x))=\cos(x)$$ -Through the sum of angles formula, it is derivable that we have: -$$T_n(\cos(x))=2\cos(x)\,T_{n-1}(\cos(x))-T_{n-2}(\cos(x))$$ -A much easier recursive formula for $n\in\mathbb{Z}$. -The formula for $\sin(nk)$ is easily derivable with binomial expansion: -$$\sin(nk)=\frac{e^{nki}-e^{-nki}}{2i}=\frac{a^{nk}-a^{-nk}}{2i}$$ -The solution is very similar to the cosine, with the exception that complex numbers will appear more than one may like. -Also, there is no Chebyshev polynomial for sine as far as I have seen. Probably easier to use $\sin(nk)=\cos(nk-\frac12\pi)$ -Addendum -I shall proceed to attempt to explain how to further use my recursive definition. -Start with -$$\cos(nk)=2^k\cos^k(n)-\sum_{i=1}^{\infty}\frac{k!}{i!(k-i)!}(\cos(n(k-2i)))$$ -We also have: -$$\cos(n(k-2j))=2^{k-2j}\cos^{k-2j}(n)-\sum_{i=1}^{\infty}\frac{(k-2j)!}{i!(k-2j-i)!}(\cos(n(k-2j-2i)))$$ -Combine the above two to get: -$$\cos(nk)=2^k\cos^k(n)-\sum_{i=1}^{\infty}\frac{k!}{i!(k-i)!}\left(2^{k-2i}\cos^{k-2i}(n)-\sum_{j=1}^{\infty}\frac{(k-2i)!}{j!(k-2i-j)!}\cos(n(k-2i-2j))\right)$$ -I'm going to call all of the numbers with the factorials $\beta_i$: -$$\cos(nk)=2^k\cos^k(n)-\sum_{i=1}^{\infty}\beta_i\left(2^{k-2i}\cos^{k-2i}(n)-\sum_{j=1}^{\infty}\beta_j\cos(n(k-2i-2j))\right)$$ -Through a very painful process, you may factor out each and every term this way, I just imagine it isn't so much of a beautiful process.
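-As a quick numerical sanity check of the rearranged formula (a Python sketch, not needed for the derivation; the sums are truncated at $i=k$ since for integer $k$ the binomial coefficients vanish beyond that point):
-from math import comb, cos
-
-def cos_nk(n, k):
-    # cos(nk) from the identity (2 cos n)^k = sum_{i=0}^{k} C(k,i) cos(n(k-2i)),
-    # solved for the i = 0 term
-    return (2*cos(n))**k - sum(comb(k, i)*cos(n*(k - 2*i)) for i in range(1, k + 1))
-
-for k in range(1, 9):
-    assert abs(cos_nk(0.7, k) - cos(0.7*k)) < 1e-9   # matches cos(nk) computed directly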
-You will most likely also run into the problem of divergence, which may be fixed using $\cos(k-2i)=\cos(2i-k)$, allowing the terms in the binomial expansion to always have positive exponents so you don't run into $0^{-1}$ or divergence problems.<|endoftext|> -TITLE: Prob. 2, Sec. 20 in Munkres' TOPOLOGY, 2nd ed: The dictionary order topology on $\mathbb{R} \times \mathbb{R}$ is metrizable. -QUESTION [8 upvotes]: Here's Prob. 2, Sec. 20 in the book Topology by James R. Munkres, 2nd edition: - -Show that $\mathbb{R}\times \mathbb{R}$ in the dictionary order topology is metrizable. - -The dictionary order on the set $\mathbb{R} \times \mathbb{R}$ is defined as follows: - -For any two points $x_1\times y_1$, $x_2 \times y_2$ $\in \mathbb{R} \times \mathbb{R}$, we define $$x_1 \times y_1 \prec x_2 \times y_2$$ - if and only if either $x_1 < x_2$, or if $x_1 = x_2$ and $y_1 < y_2$. - -Now the dictionary order topology on $\mathbb{R} \times \mathbb{R}$ is the one having as a basis all sets of the form -$$\left( a \times b, a \times c \right) \colon= \left\{ \ x \times y \in \mathbb{R} \times \mathbb{R} \ \colon \ a \times b \prec x \times y \prec a \times c \ \right\},$$ -where $a, b, c \in \mathbb{R}$ such that $b < c$. -Now if we define the function $d \colon \left(\mathbb{R} \times \mathbb{R} \right) \times \left( \mathbb{R} \times \mathbb{R} \right) \to \mathbb{R}$ as -$$d\left( x_1 \times y_1, x_2 \times y_2 \right) \colon = \begin{cases} 1 \ \mbox{ if } \ x_1 \neq x_2; \\ \min \left(\ \vert y_1 - y_2 \vert, \ 1 \ \right) \ \mbox{ otherwise }, \end{cases} $$ -then is this function $d$ a metric? How to verify the triangle inequality? -Does this function give the dictionary order topology on $\mathbb{R} \times \mathbb{R}$? -If $x_1 = x_2 = x_3$, then we have -$$ -\begin{align} -& \ d\left(x_1 \times y_1, x_3 \times y_3 \right) \\ -&= \min\left( \vert y_1 - y_3 \vert, \ 1 \right) \\ -&\leq \min\left( \vert y_1 - y_2 \vert, \ 1 \right) + \min\left( \vert y_2 - y_3 \vert, \ 1 \right) \\ -& \ \ \mbox{ [using the fact that this minimum is the same as the standard bounded metric on $\mathbb{R}$]} \\ -&= d\left(x_1 \times y_1, x_2 \times y_2 \right) + d\left(x_2 \times y_2, x_3 \times y_3 \right). -\end{align} -$$ -If $x_1 \neq x_2$ and $x_2 \neq x_3$, then we have -$$ -\begin{align} -d\left(x_1 \times y_1, x_3 \times y_3 \right) &\leq 1 < 1 + 1 = d\left(x_1 \times y_1, x_2 \times y_2 \right) + d\left(x_2 \times y_2, x_3 \times y_3 \right). -\end{align} -$$ -If $x_1 = x_2$ and $x_2 \neq x_3$, then $x_1 \neq x_3$ as well, and so -$$ -\begin{align} -d\left(x_1 \times y_1, x_3 \times y_3 \right) &= 1 \\ -&\leq d\left(x_1 \times y_1, x_2 \times y_2 \right) + 1 \\ -&= d\left(x_1 \times y_1, x_2 \times y_2 \right) + d\left(x_2 \times y_2, x_3 \times y_3 \right). \end{align}$$ -And, similarly for the case when $x_1 \neq x_2$ and $x_2 = x_3$. -Is this demonstration of the triangle inequality complete and correct? -PS: -Assuming that the above $d$ is a metric, here is my attempt at showing that the dictionary order topology on $\mathbb{R} \times \mathbb{R}$ is indeed the one induced by the metric $d$ above. - -Let - $$B \colon= \{ a \} \times (b, c) = \{ \ a \times t \in \mathbb{R} \times \mathbb{R} \ \colon \ b < t < c \ \} = ( a \times b, a \times c) $$ - be a basis element for the dictionary order topology on $\mathbb{R} \times \mathbb{R}$, and let $x \times y \in B$. Then of course $x = a$ and $b < y < c$. Let us put - $$ \epsilon \colon= \min \{ \ y-b, c-y, 1 \}.
$$ - Then if $s \times t \in B_d ( x \times y, \epsilon)$, then $s \times t \in \mathbb{R} \times \mathbb{R}$, and - $$ d( s \times t, x \times y ) < \epsilon. \tag{1}$$ - and, as $\epsilon \leq 1$, so - $$ d( s \times t, x \times y ) < 1, $$ - which implies that $s = x$, that is, $s = a$, and also from the definition of $d$ we can conclude that - $$ d( s \times t, x \times y ) = \min \{ \ \lvert t-y \rvert, 1 \ \}. $$ - Then (1) implies that - $$ -d( s \times t, x \times y ) = \min \{ \ \lvert t-y \rvert, 1 \ \} < \epsilon = \min \{\ y - b, c - y , 1 \ \}. $$ - So - $$ d( s \times t, x \times y ) = \lvert t-y \rvert < \min \{ \ y-b, c-y \} , $$ - The last relation implies that $b < t < c$. So $s \times t \in B$. -Thus for any basis set $B$ for the dictionary order topology and for any element $x \times y \in B$, we have a basis element $B_d ( x \times y, \epsilon)$ for the $d$-metric topology such that - $$ x \times y \in B_d ( x \times y, \epsilon ) \subset B. $$ - So the $d$-metric topology is finer than the dictionary order topology on $\mathbb{R} \times \mathbb{R}$. -Now let us consider an open ball $B_d( a \times b, \epsilon )$, where $a \times b \in \mathbb{R} \times \mathbb{R}$ and $\epsilon > 0$ are arbitrary. Let $x \times y \in B_d ( a \times b, \epsilon )$. Then if we choose a real number $\delta$ such that - $$ 0 < \delta < \min \{ \ \epsilon - d( a \times b, x \times y), \ 1 \ \}, $$ - then we note that $\delta < 1$ and also that - $$ B_d ( x \times y, \delta ) \subset B_d ( a \times b, \epsilon ). \tag{2} $$ -Now let us put - $$ B \colon= \{ \ x \ \} \times ( y-\delta, y + \delta) = \big( \ x \times (y-\delta), \ x \times (y+ \delta) \ \big). $$ - Then this $B$ is a basis element for the dictionary order topology on $\mathbb{R} \times \mathbb{R}$ such that $x \times y \in B$. -Moreover, if $s \times t \in B$, then - $s = x$ and $y-\delta < t < y+\delta$ and hence $\lvert t-y \rvert < \delta$. But $\delta < 1$. - So - $$ d( s \times t, x \times y ) = \min \{ \lvert t-y \rvert, 1 \} = \lvert t-y \rvert < \delta, $$ - which implies that $ s \times t \in B_d( x \times y, \delta )$ and hence also that $s \times t \in B_d( a \times b, \epsilon)$ by virtue of (2) above. Therefore $B \subset B_d( a \times b, \epsilon)$. -Thus we have shown that for any basis element $B_d( a \times b, \epsilon)$ for the $d$-metric topology and for any element $x \times y \in B_d( a \times b, \epsilon)$, there is a basis element $B$ for the dictionary order topology on $\mathbb{R} \times \mathbb{R}$ such that - $$ x \times y \in B \subset B_d( a \times b, \epsilon). $$ - Thus the dictionary order topology is finer than the $d$-metric topology. -The preceding few paragraphs show that the $d$-metric topology is the same as the dictionary order topology on $\mathbb{R} \times \mathbb{R}$. - -Is this proof correct? Is each and every step of it correct in its logic and presentation? If not, then where lies the problem? - -REPLY [8 votes]: Here's an outline: -It might be enlightening to convince yourself that the dictionary order topology on $\mathbb{R}\times\mathbb{R}$ is homeomorphic to the disjoint union of continuum many copies of $\mathbb{R}$. -One may check, and my recollection is that Munkres does this, that if $d$ is a metric, then $d' = \min(d,1)$ is a metric inducing the same topology. Thus, as far as the topology of metric spaces is concerned, it is entirely sufficient to consider metrics which are $\leq 1$.
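-Incidentally, the triangle inequality for the candidate metric $d$ in the question can also be probed numerically before proving it (a minimal Python sketch of mine, with arbitrary sample points):
-import random
-
-def d(p, q):
-    # the metric from the question: distance 1 across distinct vertical lines,
-    # truncated vertical distance within a single line
-    (x1, y1), (x2, y2) = p, q
-    return 1.0 if x1 != x2 else min(abs(y1 - y2), 1.0)
-
-random.seed(0)
-for _ in range(100000):
-    p, q, r = [(random.choice([0.0, 1.0, 2.0]), random.uniform(-3.0, 3.0)) for _ in range(3)]
-    assert d(p, r) <= d(p, q) + d(q, r) + 1e-12
-print("no triangle-inequality violations found")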
-Now, suppose that $(X_\alpha)_{\alpha \in I}$ are metric spaces and that the corresponding metrics $d_\alpha$ all have $d_\alpha \leq 1$. Form the disjoint union $X = \bigsqcup_{\alpha \in I} X_\alpha$ and give it its natural topology (open sets in $X$ are disjoint unions $\bigsqcup_{\alpha \in I} U_\alpha$ where $U_\alpha$ is open in $X_\alpha$). You may find it less distracting to check in this setting that $d$ defined by $d(x,y) = \begin{cases} d_\alpha(x,y) & \text{ if } x,y \in X_\alpha \\ 1 & \text{ otherwise} \end{cases}$ is a metric on $X$ and induces the aforementioned topology. -To put it in a slogan: "the disjoint union of metrizable spaces is metrizable".<|endoftext|> -TITLE: If $f:[0,\infty)\to [0,\infty)$ and $f(x+y)=f(x)+f(y)$ then prove that $f(x)=ax$ -QUESTION [19 upvotes]: Let $\,f:[0,\infty)\to [0,\infty)$ be a function such that $\,f(x+y)=f(x)+f(y),\,$ for all $\,x,y\ge 0$. Prove that $\,f(x)=ax,\,$ for some constant $a$. - -My proof: -We have, $\,f(0)=0$. Then, -$$\displaystyle f'(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\lim_{h\to 0}\frac{f(h)}{h}=\lim_{h\to 0}\frac{f(h)-f(0)}{h}=f'(0)=a\text{ (constant)}.$$ -Then, $\,f(x)=ax+b$. As $\,f(0)=0$, so $b=0$ and $f(x)=ax.$ -Is my proof correct? - -REPLY [9 votes]: By induction, $f(nx)=nf(x)$ for an integer $n$. -Now take any real $x$. From -$$\lfloor nx\rfloor\le nx\le\lceil nx\rceil,$$ applying the non-decreasing function $f$ (non-decreasing because $y\ge x$ gives $f(y)=f(x)+f(y-x)\ge f(x)$), we deduce -$$f\left(\lfloor nx\rfloor\right)\le f(nx)\le f\left(\lceil nx\rceil\right).$$ -By the above induction property, -$$\lfloor nx\rfloor f(1)\le nf(x)\le \lceil nx\rceil f(1),$$ and -$$\frac{\lfloor nx\rfloor}nf(1)\le f(x)\le \frac{\lceil nx\rceil}nf(1).$$ -As $n$ can be arbitrarily large, by squeezing -$$f(x)=f(1)x.$$<|endoftext|> -TITLE: Find all $a,b,c\in\mathbb{Z}_{\neq0}$ with $\frac ab+\frac bc=\frac ca$ -QUESTION [10 upvotes]: As the title implies, I'm looking for triples $(a,b,c)$, where $a,b,c$ are nonzero integers, with $$\frac ab+\frac bc=\frac ca$$ - -I checked the cases $-100 -TITLE: Proving equality - a sum including binomial coefficient $\sum_{k=1}^{n}k{n \choose k}2^{n-k}=n3^{n-1}$ -QUESTION [5 upvotes]: I want to prove the following equality: -$$\displaystyle\sum_{k=1}^{n}k{n \choose k}2^{n-k}=n3^{n-1}$$ -So I had an idea to use $((1+x)^n)'=n(1+x)^{n-1}$. -So I could just use the binomial theorem and let $x=2$, which yields $n3^{n-1}$, and then modify the sum into the one on the left side. -So I need to prove that: -$$\displaystyle\sum_{k=1}^{n}k{n \choose k}2^{n-k}=n(1+x)^{n-1}$$ if $x=2$. -Any help would be appreciated. - -REPLY [3 votes]: We can do it without derivatives using the identity $\color{red}{\binom{n+1}{k+1}={n+1\over k+1}\binom{n}{k}}$, then using the change of variable $\color{blue}{k=r+1}$ and lastly using the binomial theorem. -$$\begin{align} -S&=\sum_{k=1}^{n}k{n \choose k}2^{n-k}\color{red}=\sum_{k=1}^{n}n{n-1 \choose k-1}2^{n-k}\color{blue}=n\sum_{r=0}^{n-1}{n-1\choose r}2^{(n-1)-r}=n(1+2)^{n-1}=n3^{n-1} -\end{align}$$<|endoftext|> -TITLE: proving convergence and calculating sum of a series -QUESTION [6 upvotes]: So I have this series: -$$\displaystyle\sum_{n=1}^{\infty}\frac{3n^2+15n+9}{n^4+6n^3+9n^2}$$ -I noticed: -$$\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^2}\left(\frac{3n^2+15n+9}{n^2+6n+9}\right)$$ -So can I say that it's convergent already, or should I use a criterion on the second fraction in the sum? -Also, for the sum, could I use partial fractions, or am I mistaken? -Any help would be appreciated.
- -REPLY [10 votes]: You can conclude that the series converges by the comparison test. We can also calculate the sum. Note that $$\frac{3n^{2}+15n+9}{n^{4}+6n^{3}+9n^{2}}=\frac{1}{n^{2}}+\frac{1}{n}-\frac{1}{n+3}-\frac{1}{\left(n+3\right)^{2}} - $$ hence $$\sum_{n\geq1}\frac{3n^{2}+15n+9}{n^{4}+6n^{3}+9n^{2}}=\sum_{n\geq1}\left(\frac{1}{n^{2}}+\frac{1}{n}-\frac{1}{n+3}-\frac{1}{\left(n+3\right)^{2}}\right)$$ $$=2+\frac{1}{2}+\frac{1}{4}+\frac{1}{3}+\frac{1}{9} - =\frac{115}{36} - $$ because the series telescopes. - -REPLY [5 votes]: Consider -\begin{align} -a_n=\frac{3n^2+15n+9}{n^4+6n^3+9n^2} &\le \frac{3n^2+15n^2+9n^2}{n^2(n+3)^2} \\ -&= \frac{27}{(n+3)^2}\\ -&\le \frac{27}{n^2} = b_n -\end{align} -It is well known that -$$\sum_{n=1}^\infty \frac{1}{n^2}$$ -converges. Since $0\le a_n\le b_n$ and $\sum b_n$ converges, we may conclude that $\sum a_n$ converges.<|endoftext|> -TITLE: Nontrivial element in first homology of Hawaiian earring -QUESTION [5 upvotes]: I am stuck at the following exercise from Hatcher section 3.3: - -14. Let $X$ be the shrinking wedge of circles in Example 1.25, the subspace of $\mathbb{R}^2$ consisting of the circles of radius $1/n$ and center $(1/n, 0)$ for $n = 1, 2, \dots$ -(a) If $f_n : I \to X$ is the loop based at the origin winding once around the nth circle, show that the infinite product of commutators $[f_1, f_2] [f_3, f_4]\dots$ defines a loop in $X$ that is nontrivial in $H_1(X)$. [Use Exercise 12.] - -I am fine with the fact that $[f_1,f_2][f_3,f_4]\cdots$ actually defines a loop in $X$, but don't know how to prove that it's non-trivial in $H_1(X)$. -I tried following the hint. Exercise 12 is the following: - -12. As an algebraic application of the preceding problem, show that in a free group $F$ with basis $x_1, \dots, x_{2k}$, the product of commutators $[x_1,x_2]\dots[x_{2k-1},x_{2k}]$ is not equal to a product of fewer than $k$ commutators $[v_i,w_i]$ of elements $v_i, w_i \in F$. - -So from the fact that there exists a retraction $X\to\bigvee_{i=1}^n S^1$ for all $n\in \Bbb N$, we get that the inclusion $i:\bigvee_{i=1}^n S^1\to X$ is injective on $\pi_1$. Thus, from exercise 12, we can conclude that loops in $X$ defined by finite commutators $[f_1,f_2]\cdots[f_{2k-1},f_{2k}]$ are not homotopic to loops in $X$ expressed by fewer than $k$ commutators. -How can we use that to prove the original claim? How do we eventually pass to $H_1$? Of course all finite products of commutators are still trivial in $H_1(X)$, so I don't really know what to do with the conclusion above. - -REPLY [9 votes]: Since $H_1(X) = \pi_1(X)_{ab} = \pi_1(X) / [\pi_1(X), \pi_1(X)]$, if $f$ were trivial in $H_1(X)$, then it would be conjugate (in $\pi_1(X)$) to a finite product of commutators: -$$f = g [u_1, v_1] \dots [u_k, v_k] g^{-1},$$ -for some $g, u_i, v_i \in \pi_1(X)$. The retraction $X \to \bigvee_{i=1}^{2k+2} S^1$ onto the first $2k+2$ circles induces on the fundamental group a map that sends $f$ to $[x_1, x_2] [x_3,x_4] \dots [x_{2k+1},x_{2k+2}]$, while it sends $g [u_1, v_1] \dots [u_k, v_k] g^{-1}$ to the conjugate of a product of $k$ commutators. -Using the same technique as in Exercise 12, you can use this relation to construct a map $M_k \to M_{k+1}$ of degree $1$, which is a contradiction by Exercise 11 (the conjugate does not really change much).
So $f$ cannot be expressed as a conjugate of a finite number of commutators, and so is nontrivial in $H_1(X)$.<|endoftext|> -TITLE: The 2-norm of the integral vs the integral of the 2-norm -QUESTION [7 upvotes]: I'm currently having some issues with a seemingly innocent problem. I would like to show that -$$\Bigg\|\int_\mathbb{R}\begin{pmatrix}A(x)\\B(x)\end{pmatrix}dx\Bigg\|_2 \leq \int_{\mathbb{R}}\Bigg\|\begin{pmatrix}A(x)\\B(x)\end{pmatrix}\Bigg\|_2dx$$ -Where $A(x),B(x) \in L^2(\mathbb{R})$ and the $2$-norm is defined as -$$\Bigg\|\begin{pmatrix}A(x)\\B(x)\end{pmatrix}\Bigg\|_2=\sqrt{|A(x)|^2+|B(x)|^2}$$ -I've asked around and people have tended to say "that's very simple" and then spent half an hour staring at it. I've tried plugging stuff in and it seems to hold but I do need a proof. Any help would be much appreciated! -Thanks in advance - -REPLY [8 votes]: To give a more general picture, one that does not use the special form of the $2$-norm being induced by the scalar product, we will show: - -Proposition. Suppose $X$ is a Banach space, $(S,\mathcal A, \mu)$ a measure space, and $f \colon S \to X$ is integrable. Then we have - $$ \def\norm#1{\left\|#1\right\|}\norm{\int_S f \, d\mu} \le \int_S \norm {f(s)} \, d\mu(s) $$ - -Proof. We use the definition of the integral. If $f = \sum_i x_i\chi_{A_i}$ is a simple function, where the $A_i$ are disjoint, then $\norm{f(s)} = \sum_i \norm{x_i} \chi_{A_i}(s)$ and hence -\begin{align*} - \norm{\int_S \sum_{i} x_i \chi_{A_i}d\mu} &= \norm{\sum_i \mu(A_i)x_i}\\ - &\le \sum_i \mu(A_i)\norm{x_i}\\ - &= \int_S \sum_{i}\norm{x_i}\chi_{A_i} d\mu\\ - &= \int_S \norm{f(s)}\, d\mu(s) -\end{align*} -If $f$ is integrable, choose simple functions $f_n$ such that $f_n \to f$ almost everywhere and $\int_S f \, d\mu = \lim_n \int_S f_n \, d\mu$; then we have -\begin{align*} - \norm{\int_S f\, d\mu} &= \lim_n \norm{\int_S f_n\, d\mu}\\ - &\le \lim_n \int_S \norm{f_n(s)}\, d\mu(s)\\ - &= \int_S \norm{f(s)}\, d\mu(s) -\end{align*}<|endoftext|> -TITLE: Triangle angles. -QUESTION [16 upvotes]: For $\vartriangle$ABC it is given that $$\frac1{b+c}+\frac1{a+c}=\frac3{a+b+c}$$ Find the measure of angle $C$. - -This is a "challenge problem" in my precalculus book that I was assigned. How do I find an angle from side lengths like this? I have tried everything I can. I think I may need to employ the law of cosines or sines. Thanks. - -REPLY [13 votes]: Short answer: -According to the problem, the solution is unique, so any triple of values that satisfies the equation provides a solution. We immediately see that $a=b=c=1$ is a solution, hence the angle is 60 degrees. -Medium answer. -Only the ratios between the sides matter. So we can assume that $c=1$. -The equation is symmetric in $a$ and $b$, and we know the angle is unique, so as $a$ varies, $b$ varies. We can try to see if this infinite family of solutions has an intersection with isosceles triangles, so we put $b=a$ and we solve -$$\frac{1}{a+1}+\frac{1}{a+1}=\frac{3}{2a+1}$$ -finding $a=b=c=1$. So the angle is 60 deg. -Notice that the fun thing is that $k\cdot (1,1,1)$ is not the unique solution: there are indeed infinitely many. For example, $a=15$, $b=8$, $c=13$.<|endoftext|> -TITLE: How to prove $dxdy = r dr d \theta$? -QUESTION [10 upvotes]: $x = r \cos \theta$, $y = r \sin \theta$ -I got $dx = \cos \theta dr - r \sin \theta d \theta $ -$ dy = \sin \theta dr + r \cos \theta d \theta$ -How to get $dx dy = r dr d \theta$? -I saw the same question Rigorous proof that $dx dy=r\ dr\ d\theta$.
-But I am not seeing where vectors come into the picture. Thanks. - -REPLY [3 votes]: A piece of an annulus swept out by a change of angle $\Delta \theta$ and a change of radius $\Delta r$, starting from a point given by $(r,\theta)$, has area $\Delta \theta \int_r^{r+\Delta r} s ds = \Delta \theta \frac{(r+\Delta r)^2-r^2}{2} = \Delta \theta \left ( r \Delta r + \frac{\Delta r^2}{2} \right )$. (This is computed by integrating the length of circular arcs.) -As $\Delta r \to 0$ the second term is asymptotically much smaller than the first, which heuristically justifies the change of variables formula. Showing that this procedure, which is equivalent to the more general procedure based on the Jacobian determinant, actually makes integrals do the correct thing takes some more work. The details can be found in a typical undergraduate real analysis text.<|endoftext|> -TITLE: Is the self-adjoint condition required in the definition of a positive operator? -QUESTION [6 upvotes]: I'm reading Linear Algebra Done Right and it defines a positive operator $T$ as one which is self adjoint and has the property -$$\langle Tv,v \rangle \geq 0$$ -for all $v\in V$. -I am confused as to why the self adjoint condition must be included. Here is what I came up with: -Suppose $T$ is an operator such that $\langle Tv, v\rangle \geq 0$ for all $v$. This implies that $\langle Tv, v\rangle$ is a real number, since the greater than sign doesn't make sense for complex numbers. Then, using the definition of adjoint, -$$\langle Tv, v\rangle = \langle v, T^*v\rangle = \overline{\langle T^*v,v\rangle} = \langle T^*v, v\rangle$$ -for all $v\in V$. Therefore, $Tv=T^*v$ for all $v$ and $T$ is self adjoint. -Where did I go wrong? - -REPLY [16 votes]: As stated in Linear Algebra Done Right immediately after the definition of a positive operator, the requirement that $T$ is self-adjoint can be dropped from the definition in the case of a complex inner-product space. However, the self-adjoint condition is needed on real inner-product spaces. Consider, for example, the operator $T$ on $\mathbf{R}^2$ of rotation by $90^\circ$. For this operator $T$ we have $\langle Tv, v \rangle \ge 0$ for all $v \in \mathbf{R}^2$ (because $\langle Tv, v \rangle = 0$ for all $v \in \mathbf{R}^2$), but $T$ is not self-adjoint and $T$ definitely should not be considered to be a positive operator (it has no real eigenvalues).<|endoftext|> -TITLE: Approximating $x=\sqrt{2}+1$ -QUESTION [7 upvotes]: Suppose $y>1$ is some approximation to $x=\sqrt{2}+1$. Give a brief reason (not a proof) why one should expect $(1/y)+2$ to be a closer approximation to $x$ than $y$ is. - -After testing this out for a bit, it looks like we can let $y_{n+1}=\frac{1}{y_n}+2$ and $\lim_{n\to\infty}y_n=\sqrt{2}+1$, but this does not give me any intuitive idea as to why $y_{n+1}$ should be a better approximation to $x$ than $y_n$ is. -Can anyone give a brief reason for this improvement in approximation, especially a more "intuitive" one than simple numerical data? - -REPLY [3 votes]: Let $y<1+\sqrt{2}$ (recall $y>1$). -Set $1+\sqrt{2} - y=D$ (say). -Then $1/y > \sqrt{2}-1$ -and $1/y+2>1+\sqrt{2}$. -So let $1/y+2-(1+\sqrt{2}) =d.$ -We have to prove $d < D$, i.e. that $D-d>0$.
-Simplifying, you get: -$$2\sqrt{2} >y+\frac1y,$$ -which is true as $\sqrt{2}-1<y<\sqrt{2}+1$ (indeed $y+1/y<2\sqrt{2}$ is equivalent to $(y-\sqrt{2})^2<1$).<|endoftext|> -TITLE: Integration of $\int\frac{\sin^4x+\cos^4x}{\sin^3x+\cos^3x}dx$ -QUESTION [6 upvotes]: How can we integrate: -$$ -\int\frac{\sin^4x+\cos^4x}{\sin^3x+\cos^3x}dx -$$ -Using simple algebraic identities I reduced it to -$$ -\int\frac{1-2\sin^2x\cdot\cos^2x}{(\sin x+\cos x)(1-\sin x\cdot\cos x)}dx -$$ but can't proceed further. Could you please provide some direction? - -REPLY [3 votes]: $\displaystyle\frac{\sin^4x+\cos^4x}{\sin^3x+\cos^3x}=\sin x+\cos x-\frac{\sin x\cos x}{\sin^3 x+\cos^3x}=\sin x+\cos x-\frac{\sin x\cos x}{(\sin x+\cos x)(1-\sin x\cos x)}$, -and $\;\;\displaystyle\frac{\sin x\cos x}{(\sin x+\cos x)(1-\sin x\cos x)}=A\left(\frac{\sin x+\cos x}{1-\sin x\cos x}\right)+B\left(\frac{1}{\sin x+\cos x}\right)$ -where $\sin x\cos x=A(\sin x+\cos x)^2+B(1-\sin x\cos x)=(A+B)+(2A-B)\sin x\cos x$. -Then $A=\frac{1}{3}$ and $B=-\frac{1}{3}$, -so $\displaystyle\int\frac{\sin^4x+\cos^4x}{\sin^3x+\cos^3x}dx=\int\left(\sin x+\cos x-\frac{1}{3}\cdot\frac{\sin x+\cos x}{1-\sin x\cos x}+\frac{1}{3}\cdot\frac{1}{\sin x+\cos x}\right)dx$ -$\displaystyle=-\cos x+\sin x-\frac{1}{3}\int\frac{2\sin x+2\cos x}{2-2\sin x\cos x}dx+\frac{1}{3}\int\frac{1}{\sqrt{2}\sin(x+\frac{\pi}{4})}dx$ -$\displaystyle=-\cos x+\sin x-\frac{2}{3}\int\frac{\sin x+\cos x}{1+(\sin x-\cos x)^2}dx+\frac{1}{3\sqrt{2}}\int\csc\left(x+\frac{\pi}{4}\right)dx$ -$\displaystyle=-\cos x+\sin x-\frac{2}{3}\arctan(\sin x-\cos x)+\frac{1}{3\sqrt{2}}\ln\big|\csc\left(x+\frac{\pi}{4}\right)-\cot\left(x+\frac{\pi}{4}\right)\big|+C$ -$\displaystyle=-\cos x+\sin x-\frac{2}{3}\arctan(\sin x-\cos x)+\frac{1}{3\sqrt{2}}\ln\left\vert\frac{\sqrt{2}-\cos x+\sin x}{\sin x+\cos x}\right\vert+C$<|endoftext|> -TITLE: Axiomatizability of some classes of groups -QUESTION [9 upvotes]: I want to check which of the following classes are axiomatizable and which are even finitely axiomatizable. - -the class of finite groups -the class of infinite groups -the class of groups of order $n$ for some fixed $n$ -the class of torsion groups -the class of torsion-free groups - -Attempts: - -Not axiomatizable due to compactness. -I think the group axioms plus the sequence of formulae ("there are $n$ distinct elements") should give an axiomatization. If it were finitely axiomatizable then so would be the complement (i.e. all structures that don't describe infinite groups), which seems wrong, but I'm not sure how to argue this. -The group axioms plus "there are $n$ distinct elements" plus "there are not $n+1$ distinct elements" should give a finite axiomatization? -4. / 5. I'm not sure how to tackle the torsion thing. - -REPLY [4 votes]: You're correct for 1, 2, and 3. Below are some sketches for how to do the others. -To see why 2 is not finitely axiomatizable, you can take a non-principal ultraproduct of the groups $\mathbb{Z}_p$ for $p\in\mathbb{N}$ prime. This is an infinite group, so the complement (the class of finite groups) is not closed under ultraproducts, which means that the class of infinite groups is not finitely axiomatizable. -For 4, note that in a torsion group, each element has finite order. Let $C$ be the class of torsion groups and suppose a theory $T$ axiomatized it. There are members of this class with elements of arbitrarily large order (look at $\mathbb{Z}_n$), so add a new constant symbol $c$ to the language and let $\phi_n$ be the sentence $c^n\neq e$. Then let $T'=T\cup\{\phi_n : n\geq 1\}$.
Since any finite subset of $T'$ is consistent (interpret $c$ as an element of sufficiently large order in some $\mathbb{Z}_m$), by compactness there is a model of $T'$; in it, $c$ has infinite order, so the model satisfies $T$ but is not a torsion group. So the class is not axiomatizable. -For 5, we can do something similar to 2. For torsion-free groups we can include sentences $\phi_n$ that say the only element raised to the $n$th power that equals $e$ is $e$. However it is not finitely axiomatizable since we can take an ultraproduct of groups that have torsion elements and get a torsion-free group. For example, take a non-principal ultraproduct of the $\mathbb{Z}_p$. The structure with universe $\mathbb{Z}_p$ is a torsion group for each $p$, but the ultraproduct will not be one.<|endoftext|> -TITLE: Show that $\mathbb{R}^2$ can't be written as the union of disjoint topological circles -QUESTION [10 upvotes]: My attempt was to think about cohomology. But I think that there is some flaw in my reasoning. -Suppose that the claim holds. Then $\mathbb{R}^2 = \cup_i S^1_i$. We know that $\mathbb{R}^2$ minus a circle has cohomology isomorphic to $\mathbb{R}$. But it should not make any difference to take one circle out of such a union, so the cohomology of $\mathbb{R}^2$ would be nontrivial, which is a contradiction. - -REPLY [13 votes]: Suppose you had such a decomposition. Pick a circle $C_1$. By Schoenflies it bounds a disc $D_1$. Inductively, pick a circle $C_n$ contained in $D_{n-1}$ such that the area (measure, if you like) of the disc $D_n$ it bounds is at most half the area of $D_{n-1}$. -This is the key step in the proof, so we should carefully see why we can do this. For notational convenience I'm going to call $D=D_{n-1}$. -First note that any disc properly contained in the interior of $D$ has strictly smaller area (its complement in the interior of $D$ is open hence has positive area). Now suppose I could not find a disc of arbitrarily small area bounded by one of our circles; then let the infimum of the areas of discs bounded by our circles be $t>0$. As above we see that we cannot actually achieve $t$, or else we would be able to achieve areas smaller than it by looking at circles inside the disc of area $t$. We will contradict this by showing that we can achieve $t$. -Pick a sequence $S_n$ of circles bounding discs of area at most $t+1/n$. Passing to a subsequence if necessary and invoking the finite area of the disc I can assume the $S_n$ are nested downwards. Take the intersection of the discs they bound. By Cantor's intersection theorem this is nonempty; pick a point $x$ in it; because the circle containing $x$ must have been in the disc $S_n$ bounds for all $n$, that circle is contained in the infinite intersection; and so is the disc that circle bounds. But this disc must have area at most $t+1/n$ for all $n$, hence have area at most $t$, as desired. -The above discussion proved, recall, that we may always pick a circle $C_n$ contained in the previous one, such that the area of the disc it bounds is at most half the previous. Now let's apply the previous argument again: take the intersection of all the $D_n$; it contains a disc, and that disc has positive area, which is nonsense since its area is at most $\text{Area}(D_1)/2^n$ for all $n$. This contradicts our most crucial assumption: the existence of a decomposition into circles! Thus we have proven what we wanted to prove. -I see no way to do this using cohomological arguments.
If you demand that the circles are a foliation of the plane it's much easier to prove impossibility.<|endoftext|> -TITLE: L2 Norm of Pseudo-Inverse Relation with Minimum Singular Value -QUESTION [7 upvotes]: Consider a matrix $A \in\mathbb R^{n\times m}$ with $n>m$. It has full column rank, i.e. $\operatorname{rank}(A)=m$. Its left pseudo-inverse is given by: $$A^{-1}_\text{left}=(A^TA)^{-1}A^T $$ -From two different results during my studies, I have realized the following: -$$ \|A^{-1}_\text{left}\|_2 = \frac{1}{\sigma_{\min}(A)} $$ -just as in the case where $A$ is a square invertible matrix. -I have seen a similar question, however I couldn't relate the answer to the equality given above. -My question is: How can we show that the $L^2$ norm of the left pseudo-inverse of $A$ is related to its minimum singular value? -Thank you in advance for your help. - -REPLY [2 votes]: For brevity, denote this left inverse $A^{-1}_{\text{left}}$ by $B$. It is well-known that $AB$ is the orthogonal projection onto the column space of $A$. Therefore, $ABy\perp(I-AB)y$ for every vector $y$. It follows that -\begin{aligned} -\|B\|_2&=\max_{y\ne0}\frac{\|By\|}{\|y\|}\\ -&=\max_{y\ne0}\frac{\|By\|}{\|ABy+(I-AB)y\|}\\ -&=\max_{By\ne0}\frac{\|By\|}{\|ABy+(I-AB)y\|}\\ -&\le\max_{By\ne0}\frac{\|By\|}{\|ABy\|}\\ -&\le\max_{x\ne0}\frac{\|x\|}{\|Ax\|}\\ -&=\left(\min_{x\ne0}\frac{\|Ax\|}{\|x\|}\right)^{-1}\\ -&=\frac{1}{\sigma_\min(A)}. -\end{aligned}<|endoftext|> -TITLE: Find all ideals of ${\mathbb{Z}_n}$ -QUESTION [7 upvotes]: The task is to find all ideals of ${\mathbb{Z}_n}$, where $n$ is a positive integer greater than one. - -My effort -Let $I$ be an ideal of ${\mathbb{Z}_n}$. It is obvious that $I$ is an additive subgroup of ${\mathbb{Z}_n}$. Consider $G$ as an additive subgroup of ${\mathbb{Z}_n}$. Then $G$ is a cyclic additive subgroup $\left\langle d \right\rangle $, where $d \mid n$. We know that for a finite cyclic group of order $k$, every subgroup's order is a divisor of $k$, and there is exactly one subgroup for each divisor. It follows that all ideals of ${\mathbb{Z}_n}$ are of the form $\left\langle {{d_1}} \right\rangle ,\left\langle {{d_2}} \right\rangle , \ldots \left\langle {{d_i}} \right\rangle $, where ${d_1},{d_2}, \ldots ,{d_i}$ are the positive divisors of $n$. -Questions -Is my proof correct? - -REPLY [4 votes]: You are almost right! You have to pay attention to the fact that an ideal of $\mathbb{Z}_n = \mathbb{Z}/n\mathbb{Z}$ is in particular a subset: it is a set of equivalence classes (in $\mathbb{Z}_n$). But $\left\langle {{d_i}} \right\rangle$ is a subset of $\Bbb Z$. -So you should write $\left\langle {{[d_i]_n}} \right\rangle$ where $[d_i]_n$ denotes the equivalence class of $d_i$, where $d_i \mid n$. -If you are convinced that all the additive subgroups of $\Bbb Z_n$ are of the form $\left\langle {{[d_i]_n}} \right\rangle$ for some $d_i \mid n$, then it just remains to show that these subgroups are actually ideals of $\Bbb Z_n$. -More precisely, you have shown that: if $I$ is an ideal of $\Bbb Z_n$, then it is of the form $I = \langle [d_i]_n \rangle = d_i\mathbb{Z}/n\mathbb{Z}$ for some $d_i \mid n$. It just remains to show the converse: if $I = \left\langle {{[d_i]_n}} \right\rangle$ for some $d_i \mid n$ then $I$ is an ideal of $\Bbb Z_n$.<|endoftext|> -TITLE: What's the point in being a "skeptical" learner -QUESTION [157 upvotes]: I have a big problem: -When I read any mathematical text I'm very skeptical.
I feel the need to check every detail of proofs and I ask myself very dumb questions like the following: "is the map well defined?", "is the definition independent of the choice of representatives?" etc... Even if the author of the paper/book says that something is easy to check, I have this impulse to verify it by myself. -I think that this approach is philosophically a good thing, but it leads to severe drawbacks: - -I waste a lot of time in reading a few lines of mathematics, and at the end of the day I look at what I've done and I realize that I managed to go through a few theorems without learning enough. Remember that when one is a (post)graduate student (s)he has plenty of things to learn, so the time is almost never enough. -This kind of learning could be affordable for undergraduate texts, but very often it is almost impossible to read a paper with a skeptic's point of view. At a certain point things become very complicated and the only way out is to accept results on faith. - -And finally the real object of my question: -3. Despite the big effort I've put into reading something very carefully, after a few weeks or months I obviously forget the details. So, for example, if I try to read again a proof after a while, maybe I would remember the big picture, but I would probably check the details again as though I'd never done it. -Therefore, even if the common rules for a mathematician say that "learning" should ideally be done skeptically, I've finally realized that maybe this is not very healthy. Now, could you recommend a sort of royal road for reading mathematics? It should be a middle way between accepting every result as true and going through every detail. I'd like to know what to do in practice. - -REPLY [9 votes]: @EricTowers' answer is excellent, and it lines up the best with what I see asserted by top mathematicians about how they read proofs; there should be an initial "scanning" process for the big ideas and the overall structure of the proof. This should tell you whether the proof is "interesting" or novel in a way that deserves your time to dig into the gritty details. -Richard Lipton writes about this sometimes on his blog: - -Proofs and Elevator Rides (A guide to tell if a proof is a proof) -Proving a Proof (How to convince someone your proof is really a proof) -Is This a Proof? (Seeking the limits of mathematical proofs) -Facts No One Really Checks (Basic theorems that rarely get proved in full detail) - -I think that one of the points some of these guys make is that mathematics is still (or especially) a work of community, and to some extent you still have to trust/ communicate/ convince/ stand on the shoulders of other people as part of the work. I'm still wrestling with that idea myself, and you'll have to find your own way through that thicket, but it's food for thought.<|endoftext|> -TITLE: State space for 8-queen problem -QUESTION [5 upvotes]: While reading Artificial Intelligence: A Modern Approach I came across the following formulation for the 8-queen problem: - -Initial state: No queens on the board. -States: Arrangements of n queens (0 <= n <= 8), one per column in the leftmost n columns, with no queen attacking another. -Successor function: Add a queen to any square in the leftmost empty -column such that it is not attacked by any other queen. - -I have been trying to find some pattern that would allow me to sort out the number of states for each n given the constraints above, but I have not succeeded for n > 2.
-So, for: - -n=1 => 8 states -n=2 => 42 states -... - -Even though I could generate all the possible combinations for each n and then filter out all those that do not represent a valid state, I would rather not go that way because it would be really time and space consuming for n, say, greater than or equal to 10. -To sum up, is there any sort of formula to find the number of states given n while taking into account the constraints above? - -REPLY [2 votes]: Here is a modest contribution while we wait for a professional answer -to appear. You may be interested to know that even though the state -space is easy to enumerate by backtracking, the numbers have not yet -appeared in the OEIS! - -The following Perl script implements the enumeration: - -#! /usr/bin/perl -w -# - -sub search { - my ($sofar, $n, $sref) = @_; - - # @$sofar lists the queen's row in each of the filled leftmost columns - my $placed = scalar(@$sofar); - - # record this partial placement as a state - $sref->{$placed}->{join('-', @$sofar)} = 1; - return if $placed == $n; - - for(my $nxt = 0; $nxt < $n; $nxt++){ - my $ind; - - # reject squares sharing a row or a diagonal with an earlier queen - for($ind = 0; $ind < $placed; $ind++){ - last if $sofar->[$ind] == $nxt || - $ind + $sofar->[$ind] == $placed + $nxt || - $ind - $sofar->[$ind] == $placed - $nxt; - } - next if $ind != $placed; - - push @$sofar, $nxt; - search($sofar, $n, $sref); - pop @$sofar; - } - - return; -} - - -MAIN: { - my $mx = shift || 8; - - - for(my $n=1; $n <= $mx; $n++){ - my $states = {}; - - search([], $n, $states); - - printf "%02d: ", $n; - - for(my $placed = 1; $placed <= $n; $placed++){ - printf " %d", - scalar(keys %{ $states->{$placed} }); - } - - print "\n"; - } -} - -This yields the following data: - -$ ./qs.pl 14 -01: 1 -02: 2 0 -03: 3 2 0 -04: 4 6 4 2 -05: 5 12 14 12 10 -06: 6 20 36 46 40 4 -07: 7 30 76 140 164 94 40 -08: 8 42 140 344 568 550 312 92 -09: 9 56 234 732 1614 2292 2038 1066 352 -10: 10 72 364 1400 3916 7552 9632 7828 4040 724 -11: 11 90 536 2468 8492 21362 37248 44148 34774 15116 2680 -12: 12 110 756 4080 16852 52856 120104 195270 222720 160964 68264 14200 -13: 13 132 1030 6404 31100 117694 335010 707698 1086568 1151778 813448 350302 73712 -14: 14 156 1364 9632 54068 241484 835056 2211868 4391988 6323032 6471872 4511922 1940500 365596 - -The OEIS has the number of solutions at OEIS -A000170. I will see to it that the above triangular array is entered -into the OEIS, giving the MSE post as one of -the references. -Addendum. These data can now be found at OEIS A269133 where we hope additional useful references will appear (there are quite a number already which can be found in the linked-to sequences).<|endoftext|> -TITLE: Is there a closed-form of $\frac{1}{1}+\frac{1}{1+2^2}+\frac{1}{1+2^2+3^2}+.....$ -QUESTION [5 upvotes]: How can I find the closed form of $$\frac{1}{1}+\frac{1}{1+2^2}+\frac{1}{1+2^2+3^2}+\cdots\,?$$ -Any help, thanks - -REPLY [3 votes]: hypergeometric has a good idea, but we cannot work with divergent series this way. Similar rearrangements can give wrong answers.
But taking hypergeometric's ideas, a valid proof looks like this: -$$ -\begin{align} -\log 2 &= \sum_{n=1}^\infty\left(\frac{1}{2n-1}- \frac{1}{2n}\right) -\\ -\sum_{n=1}^N \frac{1}{\sum_{r=1}^n r^2} &= -\sum_{n=1}^N \frac{6}{n(n+1)(2n+1)} -\\ &= -12 \sum_{n=1}^N \left(\frac{1}{2n} + \frac{1}{2n+2} - \frac{2}{2n+1}\right) -\\ &= -12 \sum_{n=1}^N \frac{1}{2n} + 12\left(-\frac{1}{2}+\frac{1}{2N+2}+\sum_{n=1}^N \frac{1}{2n}\right) -- 24\left(-1+\frac{1}{2N+1}+\sum_{n=1}^N\frac{1}{2n-1}\right) -\\ &= -12\left(-\frac{1}{2}+\frac{1}{2N+2}+2-\frac{2}{2N+1}\right) --24\sum_{n=1}^N\left(\frac{1}{2n-1}-\frac{1}{2n}\right) -\\ -\lim_{N \to \infty}\sum_{n=1}^N \frac{1}{\sum_{r=1}^n r^2} &= -18-24\log 2 -\end{align} -$$<|endoftext|> -TITLE: How to solve $ \sqrt{x^2 +\sqrt{4x^2 +\sqrt{16x^2+ \sqrt{64x^2+\dotsb} } } } =5\,$? -QUESTION [10 upvotes]: How to find $x$ in: -$$ -\sqrt{x^2 +\sqrt{4x^2 +\sqrt{16x^2+ \sqrt{64x^2+\dotsb} } } } =5 -$$ - -REPLY [14 votes]: Hint: $~x+1=\sqrt{x^2+2x+1}=\sqrt{x^2+\sqrt{4x^2+4x+1}}=\ldots~$ Can you take it from here? :-$)$<|endoftext|> -TITLE: Discrete Laplacian -QUESTION [5 upvotes]: I was wondering if anybody would explain (or please point me to a nice reference) as to why the "discrete Laplacian" on a graph is actually called a Laplacian. Namely, how is it related to the standard Laplacian on $\mathbb{R}^n$? Is there a sense in which the discrete Laplacian on, say, a square lattice would converge to the standard Laplacian as the lattice spacing tends to $0$? - -REPLY [5 votes]: I like the discussion in section 5.6 of Gilbert Strang's book Differential Equations and Linear Algebra. For a directed graph, the incidence matrix $A$ is a difference matrix --- so it is a discrete analog of the gradient $\nabla$. The graph Laplacian is $A^T A$, which is analogous to the (negative) Laplacian $\nabla^T \nabla = -\text{div} \nabla$.<|endoftext|> -TITLE: Does the axiom of choice have any use for finite sets? -QUESTION [6 upvotes]: It is well known that certain properties of infinite sets can only be shown using (some form of) the axiom of choice. I'm reading some introductory lectures about ZFC and I was wondering if there are any properties of finite sets that only hold under AC. - -REPLY [8 votes]: There are two remarks that may be relevant here. -(1) This depends on what you mean by "finite sets". Even for (infinite sets of) pairs the axiom of choice does not follow from ZF if one looks at an infinite collection. This is popularly known as the "pairs of socks" version of AC which is one of the weakest ones. -(2) If you mean that the family of sets itself is finite, then AC can be proved in ZF by induction, i.e., it is automatic, but this is only true if your background logic is classical. For intuitionistic logic, the axiom of choice can be very powerful even for finite sets. For example, there is a theorem that the axiom of choice implies the law of excluded middle; in this sense the introduction of AC "defeats" the intuitionistic logic and turns the situation into a classical one. - -REPLY [7 votes]: The usual properties of finite sets are still true without the axiom of choice. - -If $A$ is a finite set, then every function $f\colon A\to A$ is injective iff it is surjective iff it is bijective. - -If $A$ is a finite set of non-empty sets, then there is a choice function for $A$. - -If $A$ is a finite set, then every partial order on $A$ has a maximal element; every two linear orders on $A$ are isomorphic; etc.
- -The power set of a finite set is finite, and a subset of a finite set is finite. - -All these proofs don't use the axiom of choice at all. However, the axiom of choice comes into play at two points: -You have an infinite family of finite sets. Then you might need the axiom of choice in order to say something. But this is because we left the realm of finite sets: the family of sets is now infinite. - -There are characterizations of finiteness which are not necessarily true anymore. There might be an infinite set $A$ such that every $f\colon A\to A$ is injective if and only if it is bijective, and so on. -So now there are several notions of finiteness. The term "finite" in choiceless contexts usually means a set which is in bijection with a bounded set of natural numbers. And we can talk about finiteness using Dedekind's characterization with injective functions (as above), and there is a whole spectrum in between.<|endoftext|> -TITLE: Are all compact subgroups of $GL(n,\Bbb C)$ in $U(n)$? -QUESTION [9 upvotes]: If $G$ is a compact subgroup of the multiplicative group $\Bbb C-\{0\}$, then it is easy to show that $G\subseteq S^1$. I wonder if this generalizes as follows: - -Question: If $G$ is a compact subgroup of $GL(n,\Bbb C)$, do we have $G\subseteq U(n)$? - -I am more interested in Lie groups, but I don't know if assuming $G$ is a Lie group can help. If it does make a difference, then yes I am assuming $G$ is a Lie group. - -Proof in case of $n=1$: Suppose $g\in G$ and $g\notin S^1$. Then, $g=re^{i\theta}$ for $r\neq 1$. By taking $g^{-1}$ if necessary we may assume $r>1$. Then, $g^n\in G$ for all $n\geq 0$ but $|g^n|=r^n\to\infty$ as $n\to\infty$ so $G$ is unbounded and hence not compact. - -REPLY [6 votes]: No, it is not true. For $n=2$, define $A\in M_2(\mathbb{C})$: -$$A:=\begin{pmatrix}1&-2\\0&-1\end{pmatrix} $$ -We have that: -$$A^2=\begin{pmatrix}1&-2\\0&-1\end{pmatrix}\begin{pmatrix}1&-2\\0&-1\end{pmatrix}=\begin{pmatrix}1&0\\0&1\end{pmatrix} $$ -Hence $G:=\langle A\rangle$ is a group of order $2$, in particular, it is compact.
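-Both facts about $A$ used here and just below are easy to confirm numerically (a minimal sketch, assuming numpy; not needed for the proof):
-import numpy as np
-
-A = np.array([[1, -2],
-              [0, -1]], dtype=complex)
-
-print(A @ A)            # the 2x2 identity, so <A> = {I, A} is finite, hence compact
-print(A @ A.conj().T)   # not the identity, so A A* != I and A is not unitary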
But clearly $AA^*\neq I_2$ so $G$ is not included in $U(2)$. -Remark 1: $A$ is the involution taking $e_1$ to $e_1$ and $e_1+e_2$ to $-(e_1+e_2)$ (where $(e_1,e_2)$ is an orthonormal basis for the natural hermitian inner product of $\mathbb{C}^2$). I constructed it as a natural symmetry for a non-orthonormal basis, so it is not so surprising that this leads to a non-unitary transformation. -Remark 2: conjugating $A$ by the change of basis from $(e_1,e_2)$ to $(e_1,e_1+e_2)$, $A$ becomes $\begin{pmatrix}1&0\\0&-1\end{pmatrix}$, which is a unitary transformation. Hence up to conjugation $A$ is included in $U(2)$. -Remark 3: In general, any finite subgroup $G$ of $GL(n,\mathbb{C})$ is included in $U(n)$ up to conjugation. The proof: - - Let $\langle\cdot,\cdot\rangle$ be the hermitian inner product on $\mathbb{C}^n$. Define a new hermitian inner product on $\mathbb{C}^n$: $$\langle u,v\rangle_{G}:=\frac{1}{|G|}\sum_{g\in G}\langle g\cdot u,g\cdot v\rangle$$ One should check that $\langle \cdot,\cdot\rangle_{G}$ is indeed a hermitian inner product. Once you have this you can also realise that any $g\in G$ is a unitary transformation for $\langle \cdot,\cdot\rangle_{G}$. Hence $G\leq U(\langle \cdot,\cdot\rangle_{G})$. Finally the groups $U(\langle \cdot,\cdot\rangle_{G})$ and $U(n)$ are conjugate because every inner product admits an orthonormal basis: once an orthonormal basis $\beta$ for $\langle \cdot,\cdot\rangle$ and an orthonormal basis $\beta_G$ for $\langle \cdot,\cdot\rangle_G$ are given, the matrix sending the basis $\beta$ to $\beta_G$ clearly conjugates $U(n)$ to $U(\langle \cdot,\cdot\rangle_{G})$. - -Remark 4: the same is true for $G$ a compact subgroup of $GL(n,\mathbb{C})$. The "proof": - - By the proof for remark 3, it suffices to find a hermitian inner product stable under $G$. It is given by changing the sum to an integral on $G$ for a Haar measure. Since $G$ is compact, it is well-defined. The invariance of the inner product uses the fact that we chose a Haar measure which is (by definition) right-invariant.<|endoftext|> -TITLE: Convergence of sequence: $ \sqrt{2} \sqrt{2 - \sqrt{2}} \sqrt{2 - \sqrt{2 - \sqrt{2}}} \sqrt{2 - \sqrt{2 - \sqrt{2-\sqrt{2}}}} \cdots $ =? -QUESTION [30 upvotes]: In other words, if we define a sequence $$ \displaystyle a_{n+1} = \sqrt{2-a_n}, \,\,\,a_0 = 0 ,$$ then we need to find -$$ -\displaystyle \prod_{n=1}^{\infty}{a_n}. -$$ -Well, from here I don't seem to follow. I can understand that there would be some good simplification and the product will hopefully telescope, but I'm lacking the right algebra. I also thought of finding a recurrence solution, probably from the corresponding DE, but that didn't work out either. - -REPLY [14 votes]: Another approach is the following one: if we assume $ a_n = 2\cos(\theta_n) $ it follows that -$$ \cos(\theta_{n+1})=\sqrt{\frac{1-\cos\theta_n}{2}} = \sin\left(\frac{\theta_n}{2}\right) = \cos\left(\frac{\pi-\theta_n}{2}\right)\tag{1} $$ -from which we have $\theta_{n+1}=\frac{\pi-\theta_n}{2}$ and, by induction: -$$ \theta_{n+k} = \frac{\pi}{3}-(-1)^k\frac{\pi}{3\cdot 2^k}+(-1)^k\frac{\theta_n}{2^k}.\tag{2}$$ -Since $\theta_0=\frac{\pi}{2}$, -$$ \theta_k = \frac{\pi}{3}+(-1)^k \frac{\pi}{6\cdot 2^k},\qquad \color{red}{a_k = \cos\left(\frac{\pi}{6\cdot 2^k}\right)-(-1)^k\sqrt{3}\sin\left(\frac{\pi}{6\cdot 2^k}\right)}\tag{3} $$ -but since $2\cos\theta_n = \frac{\sin(2\theta_n)}{\sin(\theta_n)}$ and $\sin(\pi-\theta)=\sin(\theta)$, we also have a telescopic product.
-In particular: -$$ a_1\cdot a_2\cdot\ldots\cdot a_n = \frac{\sin(2\theta_1)}{\sin(\theta_n)} \tag{4}$$ -hence: - -$$ \prod_{n\geq 1} a_n = \frac{\sin(2\theta_1)}{\sin(\lim_{n\to +\infty}\theta_n)} = \frac{1}{\sin\frac{\pi}{3}}=\color{red}{\frac{2}{\sqrt{3}}}.\tag{5}$$<|endoftext|> -TITLE: Determining a matrix from its characteristic polynomial -QUESTION [13 upvotes]: Let $A\in\mathcal{M}_{n}(K)$, where $K$ is a field. Then, we can obtain the characteristic polynomial of $A$ by simply taking $p(\lambda)=\det(A-\lambda I_n)$, which gives us something like -$$p(\lambda) = (-1)^n\lambda^n + (-1)^{n-1}(\text{tr } A)\lambda^{n-1} + \cdots + \det A$$ -Now, how can we obtain the matrix $A$ knowing the characteristic polynomial? - -REPLY [13 votes]: If $A$ and $S$ are $n\times n$ matrices, with $S$ invertible, then $A$ and $SAS^{-1}$ have the same characteristic polynomial. But even non-similar matrices can have the same characteristic polynomial: consider -$$ -\begin{bmatrix} -1 & 0 & 0 \\ -0 & 1 & 0 \\ -0 & 0 & 1 -\end{bmatrix},\qquad -\begin{bmatrix} -1 & 1 & 0 \\ -0 & 1 & 0 \\ -0 & 0 & 1 -\end{bmatrix},\qquad -\begin{bmatrix} -1 & 1 & 0 \\ -0 & 1 & 1 \\ -0 & 0 & 1 -\end{bmatrix} -$$ -So you cannot determine the matrix from a given characteristic polynomial.<|endoftext|> -TITLE: Find all square numbers $n$ such that $f(n)$ is a square number -QUESTION [5 upvotes]: Find all the square numbers $n$ such that $ f(n)=n^3+2n^2+2n+4$ is also a perfect square. -I have tried but I don't know how to proceed after factoring $f(n)$ into $(n+2)(n^2+2)$. Please help me. -Thanks. - -REPLY [7 votes]: Your factorization seems difficult to work with. -Instead say $n=t^2$. -If $t>1$, note that $f(t^2)=t^6+2t^4+2t^2+4$, and that $(t^3+t+1)^2=t^6+2t^4+t^2+2t^3+2t+1>f(t^2) >t^6+2t^4+t^2=(t^3+t)^2$ (since $2t^3-t^2+2t-3>0$ from here) -If $t<-1$, note that $(t^3+t-1)^2=t^6+2t^4+t^2-2t^3-2t+1>f(t^2) >t^6+2t^4+t^2=(t^3+t)^2$ (since $-2t^3-t^2-2t-3>0$ from here) -So $t=-1,0,1$, i.e. $n=t^2\in\{0,1\}$; indeed $f(0)=4=2^2$ and $f(1)=9=3^2$ are perfect squares.<|endoftext|> -TITLE: Can $xy$ and $yx$ lie in different connected components of the group of invertible elements of an algebra? -QUESTION [7 upvotes]: What is an example of a Banach or $C^{*}$ algebra $A$ which has two invertible elements $x, y$ such that $xy$ cannot be connected to $yx$ in $G(A)$, the space of invertible elements of $A$. -A possible (weak) motivation for this question is that $K_{1}(A)$ is an Abelian group. -Another motivation: Put $F_{2}=$ the free group on 2 generators $x,y$. Then in the reduced $C^{*}$-algebra $C_{r}^{*} F_{2}$, $xyx^{-1}y^{-1}$ lies in the same connected component as the identity. - -REPLY [4 votes]: Yes, they can lie in different components. (Answer rewritten for clarity; conclusion the same.) -I'll give one example with a real Banach algebra and one with a complex $C^{\ast}$ algebra. Both rely on a result of [Samelson, H., "Groups and spaces of loops", Comment. Math. Helv. 28 (1954), 278-287] -Theorem (Samelson) Define two maps $\lambda$ and $\rho: SU(2) \times SU(2) \to SU(2)$ by $\lambda(g,h)=gh$ and $\rho(g,h)=hg$. Then $\lambda$ and $\rho$ are not homotopic. -We will need to know that $\lambda$ and $\rho$ remain nonhomotopic if we embed $SU(2)$ into larger groups. -Lemma $\lambda$ and $\rho$ remain nonhomotopic if we view their target as the nonzero quaternions (embedding $SU(2)$ as the norm-one quaternions). -Proof The map $q \mapsto q/|q|$ is a retraction from the nonzero quaternions onto $SU(2)$; any homotopy could be composed with this retraction to violate Samelson's result.
$\square$ -Lemma $\lambda$ and $\rho$ remain nonhomotopic if we view their target as $GL_2(\mathbb{C})$. -Proof For a $2 \times 2$ matrix $X$, write $X^{\ast}$ for the conjugate transpose. Then $X \mapsto \sqrt{X X^{\ast}}^{-1} X$ is a retraction from $GL_2(\mathbb{C})$ onto $U(2)$, and $Y \mapsto Y \left( \begin{smallmatrix} \det(Y)^{-1} & 0 \\ 0 & 1 \end{smallmatrix} \right)$ is a retraction from $U(2)$ onto $SU(2)$. $\square$. - -Now let $X = SU(2) \times SU(2)$ and let $H$ be either the quaternions or $\mathrm{Mat}_{2 \times 2}(\mathbb{C})$. Consider the algebra $A$ of continuous functions $X \to H$, equipped with the sup norm. If $H=\mathrm{Mat}_{2 \times 2}(\mathbb{C})$, this is a $C^{\ast}$ algebra, using the standard $C^{\ast}$ structure on $\mathrm{Mat}_{2 \times 2}(\mathbb{C})$. Let $x$ and $y \in A$ be the first and second projections from $X$ to $SU(2)$, followed by the obvious embeddings $SU(2) \to H$. -An element of $A$ is a unit if and only if it is valued in nonzero quaternions (if $H$ is the quaternions) or valued in $GL_2(\mathbb{C})$ (if $H$ is $\mathrm{Mat}_{2 \times 2}(\mathbb{C})$). So $xy$ and $yx$ can be joined by a path through units if and only if they give homotopic maps $X \to (\text{nonzero quaternions})$ or $X \to GL_2(\mathbb{C})$ respectively -- which we showed they don't. $\square$ -See these answers by Eric Wofsey and Achim Krause for more on the homotopy structure of maps $SU(2) \times SU(2) \to SU(2)$. -UPDATE 2/29/16 A very similar example appears in a 1973 paper of Yuen (Yuen, "Groups of invertible elements of Banach algebras", Bull. Amer. Math. Soc. 79 (1973), no. 1, 82-84, Example 2); she credits it to E. Fadell. Also, Klaja and Ransford give an intriguing example of a different kind of noncommutativity -- a Banach algebra with elements $a$ and $b$ such that $1-ab$ is in the connected component of the identity but $1-ba$ is not!<|endoftext|> -TITLE: Compute the series $\sum_{n=1}^{+\infty} \frac{1}{n^3\sin(n\pi\sqrt{2})}.$ -QUESTION [18 upvotes]: I need to compute $$\sum_{n=1}^{+\infty} \frac{1}{n^3\sin(n\pi\sqrt{2})}.$$ This is an exercise from "Amar and Matheron, complex analysis". I proved the convergence and now to compute the sum, I follow the hint of the book, which is: Consider integrals of the form $$\int_{\gamma}\frac{dz}{z^3\sin(\pi z)\sin\big((\sqrt{2}-1)\pi z\big)}$$ for a well-chosen $\gamma.$ I know this is a residue theorem application, but it seems a bit hard to find the right idea. I also tried with a summation factor. Any help will be greatly appreciated. - -REPLY [20 votes]: The contour $\gamma$ you want is the square having vertices $\pm (N-1/2) (1 \pm i)$. You can show that, as $N \to \infty$, the contour integral goes to zero. -However, the integrand has poles at the integers and at the integers times $\sqrt{2}+1$. The residue at $z=0$ may be evaluated by expansion in a Laurent series, as the pole here is of order $5$.
This expansion looks like
-$$\frac1{z^3} \frac1{\pi z \left (1-\frac{\pi^2 z^2}{3!} + \frac{\pi^4 z^4}{5!}+\cdots \right )} \frac1{(\sqrt{2}-1)\pi z \left (1-\frac{(\sqrt{2}-1)^2\pi^2 z^2}{3!} + \frac{(\sqrt{2}-1)^4 \pi^4 z^4}{5!}+\cdots \right )}$$
-The coefficient of $1/z$ in this expansion is essentially the coefficient of $z^4$ in the expansion of the sine terms in parentheses, or
-$$\frac{13 \pi^2}{90} \left ( \sqrt{2}-1 \right )$$
-The residue at each pole $z=n \ne 0$ is simply
-$$\frac{(-1)^n}{\pi n^3 \sin{(\sqrt{2}-1) \pi n}} = \frac1{\pi n^3 \sin{\sqrt{2} \pi n}}$$
-The residue at each pole $z=(\sqrt{2}+1) n \ne 0$ is
-$$\frac{(-1)^n (\sqrt{2}+1)}{(\sqrt{2}+1)^3 \pi n^3 \sin{(\sqrt{2}+1) \pi n}} = \frac{(\sqrt{2}-1)^2}{\pi n^3 \sin{\sqrt{2} \pi n}}$$
-Thus,
-$$2 \sum_{n=1}^{\infty} \frac{1+(\sqrt{2}-1)^2}{\pi n^3 \sin{\sqrt{2} \pi n}} + \frac{13 \pi^2}{90} \left ( \sqrt{2}-1 \right ) = 0$$
-because the contour integral is zero. Thus, I get that
-
-$$\sum_{n=1}^{\infty} \frac{1}{n^3 \sin{\sqrt{2} \pi n}} = -\frac{13 \pi^3}{360 \sqrt{2}} $$
-
-ADDENDUM
-I had some thoughts about this sum. First of all, let's talk about its convergence, which is not trivial. Numerical experiments are more or less helpful, but as one might expect, there is a bit of jumping around in the numerical value relative to the result I have derived. So, even though the OP stated that he had proven convergence, I just want to illustrate how convergence is achieved.
-At issue is the factor $\sin{\sqrt{2} \pi n}$ of each term in the sum: when is this sine term dangerously close to zero? If we think about it for a bit, the worst-case scenario is when $2 n^2$ is one less or more than a perfect square. (Recall that $2 n^2$ can never be a perfect square.) That is, when
-$$2 n^2 = m^2 \pm 1$$
-for some $m \in \mathbb{N}$. In this case,
-$$\sin{\sqrt{2} \pi n} = \sin{\left (\sqrt{m^2 \pm 1} \pi \right )} $$
-For $n$ sufficiently large, i.e., $m$ large as well, we have
-$$\sin{\sqrt{2} \pi n} \approx \sin{\left [m \pi \left (1 \pm \frac1{2 m^2} \right ) \right ]} = (-1)^m \sin{\frac{\pi}{2 m}} \approx (-1)^m \frac{\pi}{2 m}$$
-Thus,
-$$\left | \frac1{n^3 \sin{\sqrt{2} \pi n}} \right | \le \frac1{n^3 \frac{\pi}{2 \sqrt{2} n}} = \frac{2 \sqrt{2}}{\pi n^2}$$
-and, because the worst-case scenario term is bounded by something times $1/n^2$, the series converges by comparison with the sum of $1/n^2$.
-Why is this so important? Well, it looks like we have discovered a bug in Mathematica. As a matter of routine, I check the result against a straight evaluation in Mathematica. To my horror, Mathematica returned $-13 \pi^{\color{red}{2}}/(360 \sqrt{2})$. How was I off by a factor of $\pi$? I checked and checked my work but found nothing wrong.
-The solution to this problem lay in simply evaluating the sum numerically for an increasingly large number of terms. However, in order to assess whether there would be any surprises waiting for us from the sine term, I had to estimate the worst possible "spike" near an integer times $\pi$. What I found above is that, worst case, the terms decrease as some constant times $1/n^2$, so the effect of any spike is limited.
-Armed with this information, I was able to verify in Mathematica that, indeed, numerical evaluations of finite sums converged to the answer I gave above rather than Mathematica's result. Mr. Wolfram will be receiving yet another letter.
-ADDENDUM II
-I did send that letter, and here is what I got in response:
-
-Hello Ron,
-Thank you for taking the time to send in this report. 
It does appear
-that this sum is missing a factor of Pi, even in the latest version of
-Mathematica (10.3.1), and I have forwarded an incident report to our
-developers with the information you provided.
-We are always interested in improving Mathematica, and I want to thank
-you once again for bringing this issue to our attention. If you run
-into any other behavior problems, or have any additional questions,
-please don't hesitate to contact Wolfram Technical Support
-(support@wolfram.com).
-Sincerely,
-[name redacted]
-
-Remember, just because Mathematica or Maple says something, it is not always true.
-ADDENDUM III
-I just got (20 Nov 2016) the following email from the fine folks at Wolfram Research:
-
-Hello Ron Gordon,
-In February 2016 you reported an issue with Mathematica wherein Sum
-returns a wrong answer for some expressions. We believe that the issue
-has been resolved in the current release of Mathematica.
-Thank you for your report and we look forward to a continued,
-productive relationship with you.
-Best regards, Wolfram Technology Group Wolfram
-Research, Inc. http://www.wolfram.com/support
-
-I have verified that this error has been corrected in Version 11.0.1. Thanks to Wolfram Research for helping me get the latest version installed.<|endoftext|>
-TITLE: If $u \in H^1(\Omega) \cap L^\infty(\Omega)$, is $u|_{\partial\Omega} \in L^\infty(\partial\Omega)$?
-QUESTION [6 upvotes]: Let $\Omega$ be a bounded Lipschitz domain.
-Let $u \in H^1(\Omega) \cap L^\infty(\Omega)$, and suppose that $\lVert u \rVert_{L^\infty(\Omega)} \leq A$.
-Let $T:H^1(\Omega) \to L^2(\partial\Omega)$ be the trace mapping.
-Is it true that $Tu \in L^\infty(\partial\Omega)$ with
-$\lVert Tu \rVert_{L^\infty(\partial\Omega)} \leq A'$ for some constant $A'$?
-I think so, since we can find functions $u_n \in C^0(\bar \Omega)$ bounded by $A$ such that $u_n \to u$ in $H^1$ and $Tu = \lim Tu_n$ in $L^2$.
-
-REPLY [3 votes]: Consider the function $u + A$. It belongs to $H^1(\Omega)$ and is non-negative. A standard procedure yields a sequence $\{v_n\} \subset C(\bar\Omega) \cap H^1(\Omega)$ with $v_n \ge 0$ and $v_n \to u + A$ in $H^1(\Omega)$. Now, $T v_n \ge 0$, since it corresponds with the usual trace of $v_n$. Since $T$ is continuous, you have $T v_n \to T(u + A)$ and $T(u+A) \ge 0$. Now, you can easily show $T(u + A) = T u + A$ and this yields $T u \ge -A$. Similarly, $T u \le A$ follows.<|endoftext|>
-TITLE: Does there exist an uncountable dimensional real vector space $X$ such that $(X,\|\cdot\|)$ is a Banach space for any norm $\|\cdot\|$ on it?
-QUESTION [6 upvotes]: Does there exist an uncountable dimensional real vector space $X$ such that $(X,\|\cdot\|)$ is a complete space for any norm $\|\cdot\|$ on it?
-
-REPLY [8 votes]: No. On every infinite-dimensional real (or complex) vector space, there are comparable but not equivalent norms, i.e. there are norms $\lVert\,\cdot\,\rVert_1$ and $\lVert\,\cdot\,\rVert_2$ with
-$$\lVert x\rVert_1 \leqslant \lVert x\rVert_2$$
-for all $x\in X$, but there is no $C \in (0,+\infty)$ with
-$$\lVert x\rVert_2 \leqslant C\cdot \lVert x\rVert_1$$
-for all $x\in X$. By the open mapping theorem, at most one of $\lVert\,\cdot\,\rVert_1$ and $\lVert\,\cdot\,\rVert_2$ can make $X$ into a Banach space. 
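-For a concrete preview, here is the simplest special case of the construction given below (taking $f_1 \equiv 1$ and $f_2(n)=n$ on the space of finitely supported real sequences with Hamel basis $\{e_n : n \in \mathbb{N}\}$):
-$$\Bigl\lVert \sum_{n} c_{n} e_{n}\Bigr\rVert_1 = \max_n \lvert c_n\rvert \quad\text{and}\quad \Bigl\lVert \sum_{n} c_{n} e_{n}\Bigr\rVert_2 = \max_n n\,\lvert c_n\rvert.$$
-Then $\lVert x\rVert_1 \leqslant \lVert x\rVert_2$ for all $x$, but $\lVert e_n\rVert_2 = n\,\lVert e_n\rVert_1$, so no constant $C$ can work.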
-To see the existence of such norms, consider a Hamel basis $\{ e_{\alpha} : \alpha \in A\}$ of $X$ and take two functions $f_1,\,f_2 \colon A \to (0,+\infty)$ with $f_1(\alpha) \leqslant f_2(\alpha)$ for all $\alpha\in A$ and $\frac{f_2(\alpha)}{f_1(\alpha)}$ unbounded. Then define
-$$\Biggl\lVert \sum_{\alpha \in A} c_{\alpha}\cdot e_{\alpha}\Biggr\rVert_i = \max \{ f_i(\alpha)\cdot \lvert c_\alpha\rvert : \alpha \in A\}.$$<|endoftext|>
-TITLE: Is there a countably compact sequential non-$T_2$ space that is not sequentially compact?
-QUESTION [5 upvotes]: Let $X$ be a topological space.
-Definitions:
-
-$X$ is countably compact if every countable open cover of $X$ has a finite subcover or, equivalently, every sequence in $X$ has a cluster point.
-$X$ is sequentially compact if every sequence in $X$ has a convergent subsequence.
-$X$ is sequential if every sequentially closed set is closed.
-
-It is known that if $X$ is countably compact + sequential + $T_2$ then $X$ is sequentially compact (see e.g. Engelking).
-The proof goes like this:
-Let $x_n$ be a sequence in $X$. Since $X$ is countably compact $x_n$ has a cluster point $x \in X$. If $\{ n \mid x_n = x \}$ is infinite then we have a constant subsequence of $x_n$, thus convergent. So assume that $\{ n \mid x_n = x \}$ is finite, so that there is some $n_0$ with $x_n \neq x$ for all $n \geq n_0$.
-Consider the set $A := \{ x_n \mid n \geq n_0 \} \setminus \{ x \}$.
-Then $A$ is not closed and since $X$ is sequential, $A$ is not sequentially closed. Thus, there is a sequence $y_k \in A$ and $y \in X \setminus A$ such that $y_k \to y$. Since $X$ is $T_2$ it follows that $y_k$ is not eventually constant since otherwise $y_k \to y_N \in A$ for some $N \in \mathbb{N}$ and $y_k \to y \in X \setminus A$ implies $y_N = y$ which is a contradiction. Thus, we have infinitely many $y_k$ in $A$ which can finally be used to construct a convergent subsequence of $x_n$.
-There are also other properties $\varphi$ such that countable compactness + $\varphi$ imply sequential compactness. As an example, $\varphi$ can be taken to be first-countable or even Fréchet-Urysohn (cluster points of injective sequences $x_n$ are accumulation points of the corresponding sets $x(\mathbb{N})$, thus lying in the closure and hence approximable by a sequence in $x(\mathbb{N})$, which can be used to generate a convergent subsequence of $x_n$). There is no need for an additional separation property.
-In my eyes, the Fréchet-Urysohn property is not "too far" away from the sequential property and thus it is a little bit "strange" that sequentialness needs an additional separation property. By "too far" I mean that typical spaces that are sequential but not Fréchet-Urysohn are a little bit pathological (e.g. Arens-Fort space).
-Questions:
-
-Is there some deeper insight into why we need a separation property for sequentialness but not for Fréchet-Urysohn?
-Is the separation property really needed, i.e. is there some sequential space which is countably compact but not sequentially compact?
-
-Remark: In fact, for the uniqueness of the sequential limit we can reduce the $T_2$ separation property to the $US$ separation property (i.e. $X$ is sequentially Hausdorff) which lies strictly between $T_1$ and $T_2$. This gives a hint that $T_1$ should not be enough.
-
-REPLY [4 votes]: Theorem: If $X$ is countably compact and sequential (without any
-  additional separation property and thus not necessarily $T_1$) then
-  $X$ is sequentially compact.
-
-I found a proof in T. 
P. Kremsater, "Sequential Space Methods" (Master of Arts thesis). In the $T_1$ case the singleton sets $\{ x \}$ are closed. In the non-$T_1$ case consider instead the closure $\overline{\{ x \}}$ of the singleton. So, if $x_n$ is a sequence then in the $T_1$-case it is enough to consider the underlying set $\{ x_n \mid n \in \mathbb{N} \} = \bigcup_{n \in \mathbb{N}} \{ x_n \}$ whereas in the general case one should rather consider $\bigcup_{n \in \mathbb{N}} \overline{\{ x_n \}}$.
-Here are the details for the proof:
-
-Lemma 1: Let $X$ be a topological space and $x, y \in X$. Then the
-  following are equivalent:
-
-$x \in \overline{\{ y \}}$ (i.e. $x$ is smaller than $y$ in the specialization preorder)
-for all $C \subseteq X$ closed: $y \in C \Rightarrow x \in C$
-for all $U \subseteq X$ open: $x \in U \Rightarrow y \in U$.
-
-In particular, if $x_n, x \in X$ with $x \in \overline{\{ x_n \}}$ for
-  all $n$ then $\{ x_n \mid n \in \mathbb{N}\} \subseteq U$ for every open
-  neighborhood $U$ of $x$ and thus $x_n \to x$.
-
-The proof is clear.
-
-Lemma 2: Let $X$ be a topological space and $x_n \in X$. If $x_n$ has
-  no convergent subsequence then $\bigcup_{n \in \mathbb{N}} \overline{\{ x_n \}}$ is sequentially closed.
-
-Proof: Write $A := \bigcup_{n \in \mathbb{N}} \overline{\{ x_n \}}$ and assume that $A$ is not sequentially closed. Then there is a sequence $y_k \in A$ and $y \in X \setminus A$ such that $y_k \to y$. From $y_k \in A$ it follows that there is a sequence $n_k$ such that $y_k \in \overline{\{ x_{n_k} \}}$. We show that the sequence $n_k$ is bounded. Otherwise, there is an increasing sequence $k_l$ such that $n_{k_l}$ is increasing. Then $y_{k_l}$ is a subsequence of $y_k$ and $x_{n_{k_l}}$ a subsequence of $x_n$. Thus $y_{k_l} \to y$ and from $y_{k_l} \in \overline{\{ x_{n_{k_l}}\}}$ for each $l$ it follows by Lemma 1 that $x_{n_{k_l}} \to y$.
-(Indeed, for every open neighborhood $U$ of $y$ there exists $l_0$ such that $y_{k_l} \in U$ for all $l \geq l_0$ and thus $x_{n_{k_l}} \in U$ for all $l \geq l_0$.) Thus, $x_n$ has a convergent subsequence, which contradicts the premise of the Lemma. Therefore, $n_k$ is bounded. It follows that there is $m$ such that $n_k = m$ infinitely often and thus $y_k$ is frequently in $\overline{\{ x_m \}}$. Thus, there exists a subsequence $y_{k_l}$ such that $y_{k_l} \in \overline{\{ x_m \}}$ for all $l$ and since $y_{k_l} \to y$ and $\overline{\{ x_m \}}$ is closed it follows that $y \in \overline{\{ x_m \}} \subseteq A$. But $y \in X \setminus A$, contradiction. Thus, $A$ is sequentially closed.
-Proof of Theorem:
-Let $x_n$ be a sequence in $X$ and assume that $x_n$ has no convergent subsequence. By Lemma 2 it follows that $A := \bigcup_{n \in \mathbb{N}} \overline{\{ x_n \}}$ is sequentially closed and since $X$ is sequential, $A$ is closed. Since $X$ is countably compact the closed subset $A$ is also countably compact. Therefore, $x_n$ has a cluster point $x \in A$. We show that $B := \{ n \mid x \in \overline{\{ x_n \}} \}$ is finite. Otherwise, if $B$ is infinite, there is an increasing sequence $n_k$ such that $x \in \overline{\{ x_{n_k}\}}$ for all $k$. Thus, $x_{n_k}$ is a subsequence of $x_n$ and by Lemma 1 it follows that $x_{n_k} \to x$. But this contradicts the assumption that $x_n$ has no convergent subsequence. It follows that there is $N$ such that $x \not\in \bigcup_{n \geq N} \overline{\{ x_n \}}$. Since the sequence $(x_n)_{n \in \mathbb{N}}$ has no convergent subsequence it follows that the sequence $(x_n)_{n \geq N}$ also has no convergent subsequence. 
By Lemma 2 it follows that $\bigcup_{n \geq N} \overline{\{ x_n \}}$ is sequentially closed and since $X$ is sequential, this union is closed. But $x$ is a cluster point of $(x_n)_{n \geq N}$ which implies that $x \in \overline{\{ x_n \mid n \geq N\}}$. Since $\bigcup_{n \geq N} \overline{\{ x_n \}}$ is closed it follows that $x \in \overline{\{ x_n \mid n \geq N \}} = \overline{ \bigcup_{n \geq N} \{ x_n \}} \subseteq \bigcup_{n \geq N} \overline{ \{ x_n \} }$, contradiction. Thus, the assumption that $x_n$ has no convergent subsequence does not hold, which finally implies that $X$ is sequentially compact.<|endoftext|>
-TITLE: Game theory, olympiad question
-QUESTION [8 upvotes]: I've seen the following question in the Brazilian olympiad for university students, and I couldn't solve it.
-Thor and Loki play the game: Thor chooses an integer $n_1 \ge 1$, Loki chooses $n_2 \gt n_1$, Thor chooses $n_3 \gt n_2$ and so on. Let $X$ be such that
-$$X = \bigcup_{j\in\mathbb N^*} \left(\left[n_{2j-1},n_{2j}\right) \cap \mathbb Z \right)$$
-and $$s= \sum_{n\in X} \frac {1}{n^2}$$
-Thor wins if s is rational, and Loki wins if s is irrational.
-Determine who has got the winning strategy.
-
-REPLY [4 votes]: Loki should have the winning strategy. First note that $\sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}$, which is clearly irrational. Further, note that when Thor picks an integer he removes a set of rational numbers from the sum, thus lowering the highest possible value of the summation. Let $t_j=\sum_{n_{2j}\leq n< n_{2j+1}}\frac{1}{n^2}$ be the sum of the numbers removed at step $j$ (letting $n_0=1$). Note that $t_j$ is rational and that $s\leq \frac{\pi^2}{6} - \sum_{j=1}^m t_j$, where the right hand side is irrational. We will denote $s_j$ as the current sum at turn $j$ so that $s_j\to s$ as $j\to \infty$.
-Loki can now impose some enumeration $q_1, q_2, \ldots$ on $\mathbb Q$. On his $m^{th}$ turn, he picks the first rational in the list that is less than $\frac{\pi^2}{6} - \sum_{j=1}^m t_j$ and greater than $s_{m-1}$. Call this rational $q_i$. Note that if all remaining integers were picked after this turn, we would have $s=\frac{\pi^2}{6} - \sum_{j=1}^m t_j$; therefore Loki can pick some number of integers such that $q_i < s_m <\frac{\pi^2}{6} - \sum_{j=1}^m t_j$.
-Continuing in this way, Loki can eliminate every possible rational number as a possible value of $s$. Thus $s$ will be irrational and Loki will win.
-(Hopefully this makes sense. I'm not able to edit right now.)<|endoftext|>
-TITLE: Why a complex symmetric matrix is not diagonalizable?
-QUESTION [6 upvotes]: I know an Hermitian matrix is diagonalizable, and similarly a real symmetric matrix is diagonalizable, but what goes wrong for a complex symmetric matrix?
-Why does the Gram-Schmidt process fail?
-
-REPLY [6 votes]: As Chris Godsil and Dietrich Burde pointed out, the orthogonality condition on complex vectors, $\langle x,y \rangle = x^*y = 0$, does not imply that $x^Ty = 0$, which is the condition relevant to complex symmetry.
-So the Gram-Schmidt process will indeed produce orthogonal vectors, but they will not diagonalize the matrix.<|endoftext|>
-TITLE: Inequalities with floor function
-QUESTION [6 upvotes]: I need some help with this exercise; I'm pretty new to solving these. 
-
-$$ \lfloor x \rfloor + \lfloor y \rfloor \le \lfloor x + y \rfloor \le \lfloor x \rfloor + \lfloor y \rfloor + 1$$
-I know that I have to use the formal definition of the floor function, which is:
-$$ \lfloor x \rfloor = \max {\{ m \in \Bbb Z \mid m \le x\}}$$
-
-REPLY [3 votes]: I do not know how rigorous or formal you need/want to be, but it's straightforward that $[x]$ is the unique integer such that $[x] \le x < [x] + 1$[*]
-Therefore $[x] \le x < [x] + 1$ and $[y] \le y < [y] + 1$ so $[x] + [y] \le x + y < [x] +[y] + 2$.
-So there are two possible cases:
-Case 1: $[x] + [y] \le x + y < [x] + [y] + 1$
-This means $[x + y ] = [x] + [y]$. So $[x]+[y] = [x+y] < [x] + [y] + 1$.
-Case 2: $[x] + [y] + 1 \le x + y < [x]+ [y] + 2$
-This means $[x + y] = [x] + [y] + 1$. So $[x]+[y] < [x+y] = [x]+[y] + 1$.
-And that's it.
-[*] Rigor and formality??
-By the Archimedean principle, for every real $x$, there is a unique integer $n$ such that $n \le x < n+ 1$.
-$n \in \mathbb Z$ and $n \le x$ so $n \in \{m \in \mathbb Z| m \le x\}$. If $m \in \mathbb Z; m > n$ then $m \ge n+1 > x$ so $m \not \in \{m \in \mathbb Z| m \le x\}$.
-So $n = \max \{m \in \mathbb Z| m \le x\} = [x]$.<|endoftext|>
-TITLE: Higher Ramification Groups for $\mathbb{Q}(\sqrt{d})|\mathbb{Q}$. Clever way to compute
-QUESTION [5 upvotes]: I'm asked to compute the higher ramification groups for quadratic extensions $K=\mathbb{Q}(\sqrt{d})|\mathbb{Q}$. They are defined as follows, for a prime ideal $\mathfrak{p}$, $$G_{\mathfrak{p}}^{(i)}:=\{\sigma \in Gal(\mathbb{Q}(\sqrt{d})|\mathbb{Q})\mid \forall \alpha \in \mathcal{O}_K, \sigma(\alpha)=\alpha \pmod{\mathfrak{p}^{i+1}}\}$$
-clearly for $i=0$ we have that the ramification group is the inertia group.
-Since the extension is Galois, the inertia group depends only on the prime number contained in the prime ideal.
-Clearly the cardinality of the inertia group at a prime is the ramification index of that prime, and therefore $$ p \nmid \delta_K \Leftrightarrow G_{\mathfrak{p}}^{(0)}\cong \{1\}$$ and therefore all the higher ramification groups are trivial.
-Is there a clever way to deal with the case $p \mid \delta_K$? More importantly, what properties do the higher ramification groups have that make them computable in this easy case?
-ADDENDUM Brute computation shows that for $p$ odd and $d\not\equiv 1 \pmod{4}$, $G_p^{(1)}$ is trivial. I think one can do other cases by brute computation as well. Is there a smarter way?
-
-REPLY [10 votes]: Here's a method you can use for the higher ramification in a ramified quadratic extension. If you look at the definition of the ramification groups, you see that everything comes down to the distance between a prime element $\pi$ and its conjugate $\bar\pi$.
-The computation is purely local, which means you can do it over $\Bbb Q_p$ if you like, and it boils down to this: take a prime element $\pi$, then the unique break number is $v_\pi(\bar\pi-\pi)-1$. I won't justify that decrease by $1$, but you'll see it when you trace through the standard definition.
-Let's look at examples: first, $\Bbb Q(\sqrt d\,)$ at odd $p|d$: Here, locally at $p$, $\sqrt d$ is a prime, and you take $v_{\sqrt d}(2\sqrt d\,)=1$ and subtract $1$ to get the result I stated in my comment, that the break is at $0$, since you have tame ramification. So $G_0$ is the whole group, $G_1$ is trivial.
-All remaining cases are with $p=2$, first for $d\equiv3\pmod4$. 
Then $\sqrt d-1=\pi$ is a good prime, locally at $2$ remember, and its minimal polynomial is $X^2+2X+1-d$, Eisenstein because $d\equiv3$ modulo $4$. Now, $\bar\pi-\pi=-2-2\pi$, which has $v_\pi$-value $2$, so the break-number is $1$.
-Finally, for $d\equiv2$ modulo $4$, $\sqrt d=\pi$ is your local prime, and $v_\pi(\bar\pi-\pi)=3$, so that the break-number is $2$.
-Maybe I should add a slightly philosophical note: what ramification theory tells you is how far the various conjugates of a prime element are from each other (whether or not your extension is normal). In a tame extension, all the conjugates are equally far from each other, like the $n$ vertices of an $(n-1)$-simplex. In a wild extension, they may be at various distances from each other. Consider the primitive $16$-th roots of unity $\zeta$ and the corresponding prime elements $\pi=\zeta-1$. There are eight of them in all and if you fix one of them, you will find one other that's moderately close to it; two others that are somewhat farther away, and the remaining four are at a greater distance yet. The break numbers tell you exactly what these distances are.<|endoftext|>
-TITLE: What does "dual statement" mean exactly in category theory?
-QUESTION [6 upvotes]: I have long been confused about this notion. I know that for a statement within a single category, forming the dual statement is just reversing every arrow. But what about a statement concerning several categories and functors between them?
-Mac Lane suggests reversing all arrows in all categories and leaving the functors invariant in his book "Categories for the working mathematician" (page 32), but it seems that Mac Lane himself doesn't always follow this discipline (when he states "the dual of Yoneda Theorem", he doesn't reverse the arrows in $\mathbf{Set}$!) Can anyone give me a formal definition of "dualize a statement in category theory"?
-
-REPLY [5 votes]: Some of the categories in a statement are "variables" (e.g. "$C$"), and you should take the opposite of those categories but not any categories which are "constants" (e.g. $\text{Set}$). The point is that any category which is a "variable" is being quantified over, so you can always replace that category with an opposite category, in the same way that you can substitute anything you want for a variable in an identity.
-For example, if a functor $F : C \to D$ is a left adjoint, then it preserves colimits. The dual of this statement is obtained by replacing $C$ and $D$ with opposites, since they are both "variables," and you get that if $F : C^{op} \to D^{op}$ is a left adjoint, then it preserves colimits. But this is equivalent to a statement about $C$ and $D$ themselves, which is that if $F^{op} : C \to D$ is a right adjoint, then it preserves limits.<|endoftext|>
-TITLE: Show that any two consecutive odd integers are relatively prime
-QUESTION [17 upvotes]: I've selected two integers $m=2k+1$ and $n=2k+3$ and I've tried to make a linear combination of the two such that it equals 1, but I'm sort of stuck and am not sure if this is a dead end or not. Any pointers or alternative ideas?
-
-REPLY [3 votes]: Write $A=2n+1=ca$ and $B=2n-1=cb$, where $c$ is a common divisor.
-Then $A-B=c(a-b)=2$.
-Since $A>B$ we have $a>b$, hence $a-b>0$.
-If $a-b=1$ then $c=2$, and then $A$ and $B$ (since $A=ca$, $B=cb$) are even, not odd, which is a contradiction. Hence $a-b$ is not $1$.
-If $a-b=2$ then $c=1$.
-If $a-b>2$ then $c(a-b)>2$, which is a contradiction.
-Hence $c=1$ is the only option for a common divisor $c$.<|endoftext|>
-TITLE: Is there an orthogonal matrix that is not unitary? 
-QUESTION [5 upvotes]: I could find an example of a unitary matrix that is not orthogonal; that is simple over $\mathbb{C}$. But for this exercise, an orthogonal matrix that is not unitary, I realize it is possible only over $\mathbb{C}$, because every orthogonal matrix over $\mathbb{R}$ is unitary. Does anyone have an example of this case?
-
-REPLY [2 votes]: Late remark. In general, if $K$ is complex skew-symmetric, then $Q=e^{zK}$ is complex orthogonal for every $z\in\mathbb C$. When $K\ne0$, since $e^{zK}$ is holomorphic and its power series expansion has some nonzero high-order terms, $e^{zK}$ is not a constant function. Therefore it cannot be real all the time (otherwise it would fail to satisfy the Cauchy-Riemann equations). Pick a $z$ such that $Q=e^{zK}$ is not real. Then $Q$ is not unitary, because all unitary orthogonal matrices are real.<|endoftext|>
-TITLE: Lifting an equation from the localization by clearing denominators (Atiyah-Macdonald 5.12)
-QUESTION [5 upvotes]: In proposition 5.12, Atiyah & Macdonald prove that localization commutes with taking the integral closure. That is, they prove the following:
-
-Let $A \leq B $ be commutative rings, let $C$ be the integral closure of $A$ in $B$, and let $S$ be a submonoid of $A$. Then $S^{-1}(C)$ is the integral closure of $S^{-1}A$ in $S^{-1}B$.
-
-In the proof, they show that every element $\frac bs \in S^{-1}B$ that is integral over $S^{-1}A$ lies in $S^{-1}C$, by multiplying the equation $$(\frac bs)^n+\frac {a_1} {s_1}(\frac bs)^{n-1}+\cdots+\frac {a_n}{s_n}=0$$ by $(s \cdot \prod s_i)^n$, and claiming that this gives an integral equation in $A$ for $bs_1...s_n$.
-
-How do we justify going from an equation in the localization to an equation in the original ring, given that $A$ is not necessarily a domain?
-
-After all, the equation in the localization means a "pretty complicated" thing: that the numerator resulting from the common denominator of the expression is annihilated by some element of $S$. Do I have to write out the numerator explicitly to make this implication? How is it formally justified?
-
-REPLY [3 votes]: From
-$$\left(\frac bs\right)^n+\frac{a_1} {s_1}\left(\frac bs\right)^{n-1}+\cdots+\frac {a_n}{s_n}=\frac01$$ we get $$\frac{s_1\cdots s_nb^n+sa_1s_2\cdots s_nb^{n-1}+\cdots+s^na_ns_1\cdots s_{n-1}}{s^ns_1\cdots s_n}=\frac 01,$$ so there is $u\in S$ such that $$u\left(s_1\cdots s_nb^n+sa_1s_2\cdots s_nb^{n-1}+\cdots+s^na_ns_1\cdots s_{n-1}\right)=0.$$ (It seems the book missed this part.) Now multiply the equation by $(us_1\cdots s_n)^{n-1}$ and find that $us_1\cdots s_nb$ is integral over $A$.<|endoftext|>
-TITLE: A sum involving binomial coefficients and powers of 2
-QUESTION [5 upvotes]: I am interested in a simplified version of the following sum
-$$\sum_{k=1}^{n}\binom{n}{k}\frac{(-1)^k}{2^k-1}.$$
-I have to evaluate it for values of $n$ ranging from $10^{4}$ to $10^{10}.$
-Is there a way to express it in terms of some special function computable through Matlab or Mathematica?
-UPDATE: For small values of $n$ I noticed that the value is quite close to $-\log_2(n).$
-
-REPLY [2 votes]: We can rewrite the sum using a geometric series, then apply the binomial theorem:
-\begin{align*}
-\sum_{k=1}^{n}\binom{n}{k}\frac{(-1)^k}{2^k-1}&=\sum_{k=1}^n (-1)^k {n\choose k} \sum_{m=1}^\infty \frac{1}{2^{mk}}\\
-&=\sum_{m=1}^{\infty}\left[\left(1-\frac{1}{2^{m}}\right)^n-1\right].
-\end{align*}
-In a sense this made things worse, because we replaced the finite sum with an infinite one. 
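-As a sanity check, both this rewriting and the $-\log_2(n)$ behaviour noted in the update are easy to test numerically. A throwaway Python sketch (the truncation at $200$ series terms is an arbitrary but generous choice, since the terms vanish once $m \gg \log_2 n$):
-
-    from fractions import Fraction
-    from math import comb, log2
-
-    def finite_sum(n):
-        # exact value of sum_{k=1}^n C(n,k) (-1)^k / (2^k - 1)
-        return sum(Fraction((-1) ** k * comb(n, k), 2 ** k - 1)
-                   for k in range(1, n + 1))
-
-    def series_form(n, terms=200):
-        # truncation of the series sum_{m>=1} ((1 - 2^-m)^n - 1)
-        return sum((1.0 - 0.5 ** m) ** n - 1.0 for m in range(1, terms + 1))
-
-    for n in (10, 20, 40):
-        print(n, float(finite_sum(n)), series_form(n), -log2(n))
-
-The two evaluations agree closely, and both stay within $2$ of $-\log_2(n)$, consistent with the bound discussed next.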
On the other hand, the infinite series is nice because:
-
-the terms are all negative and increase monotonically to $0$, and
-the terms decay exponentially once $m$ is bigger than $\log_2(n)$.
-
-For $n$ large, terms in the series are very close to $-1$ when $m$ is less than $\log_2 n$ and very close to $0$ when $m$ is greater than $\log_2 n$. With some care, this is enough to show that the sum never differs from $-\log_2(n)$ by more than $2$.<|endoftext|>
-TITLE: Prove that the relation $x^n + y^n = z^n$ does not hold for $n \geq z$
-QUESTION [5 upvotes]: Assume $x, y, z$, and $n$ are positive integers, and $n \geq z$. Prove that the relation $x^n + y^n = z^n$ does not hold.
-
-I find it hard to relate the condition $n \geq z$ to solving this question. Maybe this is best proved by induction. I could start with $n = 1$. Then $z \le n = 1$ forces $z = 1$, so we have $x+y = 1$, for which there exist no positive integer solutions $(x,y)$. We can then assume the statement is true for some $k \geq z$. Thus there exist no positive integer solution pairs $(x,y)$ to $x^k+y^k = z^k$. We have to show it is the same for $x^{k+1}+y^{k+1} = z^{k+1}$.
-
-REPLY [7 votes]: Since we assume that $x,y,z$ are positive integers, we have $x^n + y^n > y^n$, and since $a \mapsto a^n$ is monotonic, it follows that $x^n + y^n > z^n$ if $y \geqslant z$. By the same reasoning we can exclude $x \geqslant z$.
-For $x,y < z$, we then have $x^n + y^n \leqslant 2(z-1)^n$, and then showing $2(z-1)^n < z^n$ finishes the proof. Since by assumption $n \geqslant z$, we have
-$$\frac{(z-1)^n}{z^n} = \biggl(1 - \frac{1}{z}\biggr)^n \leqslant \biggl( 1 - \frac{1}{z}\biggr)^z,$$
-and it is easily shown if not already known that
-$$\biggl(1 - \frac{1}{z}\biggr)^z < e^{-1}$$
-for all positive integers $z$. Hence
-$$\frac{2(z-1)^n}{z^n} < \frac{2}{e} < 1.$$<|endoftext|>
-TITLE: Basic formality when considering random numbers
-QUESTION [6 upvotes]: Suppose we are interested in randomly picking numbers in the interval $[0,1]$ with the uniform distribution. If I want to write a mathematical text about this, it can be done by saying that $X$ is a random variable with uniform distribution in $[0,1]$, or $X\sim U[0,1]$ for short. Then I can analyze the expected value of $X$, its variance, some particular probabilities and so on. By analyzing $X$, I'm analyzing the process of picking numbers described above.
-In practice, I'm able to do that, and if asked to formalize more about this random variable, I would introduce a probability space $(\Omega, \mathcal{F}, P)$, say that $X:\Omega\to[0,1]$ is a measurable function, talk about its density function and so on. Everything is OK up to this point.
-Usually the sample space $\Omega$ stands for some concrete (real-world) problem we are interested in, and $X$ translates this concrete situation into numbers (so we can use mathematics). But in this case there is no concrete situation; we are only interested in numbers. We really just want the codomain of $X$, not the domain. So I considered two explanations of how we should interpret $\Omega$:
-1) We just leave $\Omega$ undefined. It is there just to formalize the notion of a measurable function.
-2) $\Omega = [0,1]$ and $X(\omega) = \omega$, for all $\omega\in [0,1]$.
-My two questions are: How should we interpret $\Omega$ in this situation? And what are the differences between my two interpretations (is one of them more right)?
-Thanks!!!
-
-REPLY [2 votes]: Sorry for answering so late!
-In all practical terms you would get away with the first option (not specifying what $\Omega$ is). 
Even academic papers in pure mathematics often omit this, because as you say "we really just want the codomain of $X$".
-However, if you want to formalise your random variable, I would definitely go with the second option you offered, that is taking $\Omega = [0,1]$ and $X(\omega) = \omega$ for all $\omega\in [0,1]$. The sample space $\Omega$ stands for the set of all possible outcomes/results of some experiment. Here we take the experiment to be picking numbers in the interval $[0,1]$, which may not seem so "real-world" but does not need to be! If you would prefer a more concrete experiment, picking such a number could be interpreted as drawing a unit interval on the wall and blindly throwing a dart at it.
-Then the results of the experiment are real numbers from $0$ to $1$, and it becomes clear that we should take $\Omega$ and $X$ as you suggested! :)<|endoftext|>
-TITLE: Category-Theoretic relation between Orbit-Stabilizer and Rank-Nullity Theorems
-QUESTION [11 upvotes]: In linear algebra, the Rank-Nullity theorem states that given a vector space $V$ and an $n\times n$ matrix $A$,
-$$\text{rank}(A) + \text{null}(A) = n$$
-or that
-$$\text{dim(image}(A)) + \text{dim(ker}(A)) = \text{dim}(V).$$
-
-In abstract algebra, the Orbit-Stabilizer theorem states that given a group $G$ of order $n$, and an element $x$ of the set $G$ acts on,
-$$|\text{orb}(x)||\text{stab}(x)| = |G|.$$
-
-Other than the visual similarity of the expressions, is there some deeper, perhaps category-theoretic connection between these two theorems? Is there, perhaps, a functor from the category of groups $\text{Grp}$ to some category where linear transformations are morphisms? Am I even using the words functor and morphism correctly in this context?
-
-REPLY [3 votes]: The intuition behind this question is spot-on. I'm going to try to fill out some of the details to make this work.
-The first thing to note is that a linear map $A:V\to V$ also gives a genuine group action: it is the additive group of $V$ acting on the set $V$ by addition. That is, any $v\in V$ acts on $x\in V$ as $v: x \mapsto x+Av.$
-Now we see that given any $x$ in $V$ the stabilizer subgroup $\text{stab}(x)$ of this action is precisely the kernel of $A.$ The orbit of $x$ is $x$ plus the image of $A.$
-If we are working with a vector space over a finite field, we can take the cardinality of these sets as in the formula $|\text{orb}(x)||\text{stab}(x)| = |G|$ and as @Ravi suggests, take the logarithm of this where the base is the size of the field and we get exactly the rank-nullity equation.
-If we have an infinite field then this doesn't work and we need to think more along the lines of a categorified orbit-stabilizer theorem. In this case, for each $x\in V$ we can find a bijection:
-$$
-\text{orb}(x) \cong G / \text{stab}(x)
-$$
-and as @Nick points out, this bijection gives us the First Isomorphism Theorem:
-$$
-\mathrm{Im}(A) \cong V / \mathrm{Ker}(A).
-$$<|endoftext|>
-TITLE: Is a substitution always required to change the limits of an integral?
-QUESTION [6 upvotes]: For example:
-Suppose we have an impulsive force $f(t)$ lasting from $t=t_0$ until $t=t_1$ which is applied to a mass $m$. 
Then by Newton's Second Law we have $$\int_{t=t_0}^{t=t_1}f(t)\,\mathrm{d}t=\int_\color{red}{t=t_0}^\color{red}{t=t_1}m\color{blue}{\frac{\mathrm{d}v}{\mathrm{d}t}}\mathrm{d}t=\int_\color{red}{v=v_0}^\color{red}{v=v_1}m\,\mathrm{d}v=m(v_1-v_0)\tag{1}$$
-What I can't understand is: what substitution was made to allow the limits marked $\color{red}{\mathrm{red}}$ to change from $t$ to $v$?
-I thought it might be due to $$\color{blue}{\frac{\mathrm{d}v}{\mathrm{d}t}}=\frac{\mathrm{d}v}{\mathrm{d}x}\cdot \underbrace{\frac{\mathrm{d}x}{\mathrm{d}t}}_{\Large{\color{#180}{=v}}}=v\frac{\mathrm{d}v}{\mathrm{d}x}$$ by the chain rule.
-But it is something much simpler than this, and I believe I am over-thinking it too much. Could someone please tell me what substitution was made to change the limits marked $\color{red}{\mathrm{red}}$ in equation $(1)$?
-
-Edit:
-Comments below seem to indicate that one can simply change the limits of integration to make the integral dimensionally correct. But I consider this to be a less rigorous approach, and I was taught that integral limits must be changed via a substitution. So I still need to know what substitution was made.
-Thanks again.
-
-REPLY [3 votes]: The fundamental reason to change the limits of integration is that
-the variable of integration has changed.
-Substitution is an obvious case in which this is likely to occur.
-For example, substitute $u = x - 2$ in $\int (x - 2) dx$:
-$$
-\int_0^2 (x - 2) dx = \int_{-2}^0 u\; du.
-$$
-The intuition I follow on this is that the start of the integral occurs
-"when $x=0$" and ends "when $x=2$".
-But "when" $x=0$, it must also be true that $u=-2$,
-and "when" $x=2$, it must also be true that $u=0$.
-So in terms of $u$, the integral needs to start
-"when $u=-2$" and end "when $u=0$".
-A more rigorous treatment would take $x - 2$ as a function over
-the domain $[0,2]$ and transform it; but transforming the function
-also transforms its domain, so $x - 2$ over the domain $[0,2]$
-transforms to $u$ over the domain $[-2,0]$.
-Anything that changes the integration variable of a definite integral
-also has to be reflected in the limits of integration,
-because just as with any substitution, you're integrating a (possibly)
-different function over a (possibly) different domain.
-That is, if the integrand changes from $f(t)dt$ to $h(v)dv$
-(even if $h(v)$ is a constant function, as it is in the question),
-the integral over $v$ needs to start and end at $v$-values that are
-correctly matched to the $t$-values at which the integral over $t$
-started and ended.
-
-Note that any change of variables, regardless of whether we achieve it
-by first writing down an explicit substitution formula (such as $u = x - 2$),
-has to account for the derivative of the new variable of integration
-with respect to the old one. For a substitution from $t$ to $u$ via the
-equation $u = h(t)$, the derivative of the new w.r.t. the old is
-$\dfrac{du}{dt} = h'(t)$
-and it is accounted for in the rule
-$$
-\int g(h(t))\, h'(t)\, dt = \int g(u)\, du.
-$$
-As far as I know, a change of variables must not break this rule,
-so there must somehow be a substitution $u = h(t)$ that can explain it.
-In the integral in the question,
-$f(t) = m \dfrac{d^2 x}{dt^2}$.
-If $\dfrac{dx}{dt} = v = h(t)$
-and if $g$ is the constant function with value $m$,
-then $\dfrac{d^2 x}{dt^2} = \dfrac{dv}{dt} = h'(t)$ and $g(h(t)) = m$, so
-$$
-\int m \frac{dv}{dt} \, dt
-  = \int g(h(t))\, h'(t)\, dt
-  = \int g(v)\, dv = \int m \, dv. 
-$$
-The definite integral follows the same rule but also has to make the
-corresponding change to the interval of integration:
-$$
-\int_{t_0}^{t_1} g(h(t))\, h'(t)\, dt
-  = \int_{h(t_0)}^{h(t_1)} g(v)\, dv;
-$$
-setting $v_0=h(t_0)$ and $v_1=h(t_1)$,
-$$
-\int_{t_0}^{t_1} m \frac{dv}{dt} \, dt
-  = \int_{h(t_0)}^{h(t_1)} m \, dv
-  = \int_{v_0}^{v_1} m \, dv.
-$$<|endoftext|>
-TITLE: Are there Möbius transformations of arbitrary group-theoretic order?
-QUESTION [6 upvotes]: Take, for example, $f(x)=\frac{x-3}{x+1}$. One can verify that $f\circ f\circ f$ is the identity, so $f$ has order 3 in the group of Möbius transformations. Constructing such functions can be done easily.
-Are there Möbius transformations of arbitrarily large orders? If so, how can one construct them?
-
-REPLY [6 votes]: The composition of Möbius transforms is naturally associated with their matrix of coefficients:
-$$x \rightarrow f(x)=\dfrac{ax+b}{cx+d} \ \ \ \leftrightarrow \ \ \ \begin{bmatrix} a & b\\ c & d \end{bmatrix}$$
-This correspondence is in particular a group isomorphism between the group of (invertible) homographic transforms of the real projective line and $PGL(2,\mathbb{R})$
-(composition $\circ$ mapped to matrix product $\times$).
-Thus, your question boils down to the following: for a given $n$, does there exist a $2 \times 2$ matrix $A$ such that $A^n=I_2$?
-The answer is yes for real coefficients. It suffices to take the rotation matrix:
-$$\begin{bmatrix} \cos(a) & -\sin(a) \\ \sin(a) & \cos(a) \end{bmatrix} \ \ \ a=\dfrac{2\pi}{n}$$
-Edit: If you are looking for integer coefficients, the answer is no. In fact, with integer coefficients, only homographies of order 2,3,4 and 6 can exist.
-(I correct here an error that was pointed out, and add some information.) See the very nice paper (http://dresden.academic.wlu.edu/files/2017/08/nine.pdf), in particular its Lemma 1.<|endoftext|>
-TITLE: Determine $x$ such that $\lim\limits_{n\to\infty} \sqrt{1+\sqrt{x+\sqrt{x^2…+\sqrt{x^n}}}} = 2$
-QUESTION [27 upvotes]: Find the value of $x$ such that $\lim\limits_{n\to\infty} \sqrt{1+\sqrt{x+\sqrt{x^2…+\sqrt{x^n}}}} = 2$
-I tried getting rid of square roots and got $(...((9-x)^2-x^2)^2-...)^2-x^n = 0$ which I don't think helped. Please point me in the right direction.
-
-REPLY [7 votes]: Let me describe a sketch of proof that $x=4$.
-A. Observe that if $f(x)=\lim_{n\to\infty}\sqrt{1+\sqrt{x+\sqrt{x^2+\cdots\sqrt{x^n}}}}$, then $f$ is strictly increasing.
-B. We shall show that $f(4)=2$, and hence $x=4$ is the unique answer.
-$B_1.$ Fix $m\in\mathbb N$ and show that, for $n=m,m-1,m-2,\cdots$ (backwards induction)
-$$
-2^n<\sqrt{4^n+\sqrt{4^{n+1}+\cdots\sqrt{4^{m-1}+\sqrt{4^m}}}}<2^n+1,
-$$
-while
-$$
-\sqrt{4^n+\sqrt{4^{n+1}+\cdots\sqrt{4^{m-1}+\sqrt{4^m}+1}}}=2^n+1. 
-$$
-$B_2.$ Next estimate the difference
-$$
-(2^n+1)-
-\sqrt{4^n+\sqrt{4^{n+1}+\cdots\sqrt{4^{m-1}+\sqrt{4^m}}}} \\
-=\sqrt{4^n+\sqrt{4^{n+1}+\cdots\sqrt{4^{m-1}+\sqrt{4^m}+1}}}-
-\sqrt{4^n+\sqrt{4^{n+1}+\cdots\sqrt{4^{m-1}+\sqrt{4^m}}}} \\
-=\frac{\sqrt{4^{n+1}+\cdots\sqrt{4^{m-1}+\sqrt{4^m}+1}}-
-\sqrt{4^{n+1}+\cdots\sqrt{4^{m-1}+\sqrt{4^m}}}}{\sqrt{4^n+\sqrt{4^{n+1}+\cdots\sqrt{4^{m-1}+\sqrt{4^m}+1}}}+
-\sqrt{4^n+\sqrt{4^{n+1}+\cdots\sqrt{4^{m-1}+\sqrt{4^m}}}}} \\
-<\frac{{\sqrt{4^{n+1}+\cdots\sqrt{4^{m-1}+\sqrt{4^m}+1}}-
-\sqrt{4^{n+1}+\cdots\sqrt{4^{m-1}+\sqrt{4^m}}}}}{2\cdot 2^n} \\
-<\cdots<\frac{(\sqrt{4^m}+1)-\sqrt{4^m}}{2^{m-n}\cdot 2^{n+(n+1)+\cdots+(m-1)}}=2^{-\frac{(m-n)(n+m+1)}{2}}
-$$
-Thus
-$$
-\lim_{m\to\infty}\sqrt{4^n+\sqrt{4^{n+1}+\cdots\sqrt{4^{m-1}+\sqrt{4^m}}}}=2^n+1.
-$$
-For $n=0$ we have
-$$
-\lim_{m\to\infty}\sqrt{1+\sqrt{4+\cdots\sqrt{4^{m-1}+\sqrt{4^m}}}}=2^0+1=2.
-$$<|endoftext|>
-TITLE: Solutions manual for Analysis On Manifolds
-QUESTION [8 upvotes]: A few months ago, I wanted to learn something fundamental about manifolds. Following strong recommendations, I chose Analysis on Manifolds by James R. Munkres as my self-study textbook. So far, I have finished my solutions to the first two chapters, but I am not sure my answers are absolutely right. Is there a solutions manual for this book? Can anyone provide free downloads? I would appreciate it indeed!
-
-REPLY [3 votes]: If anyone is interested, I wrote down solutions to all exercises in Chapters 1 and 2 (The Algebra and Topology of $\mathbb{R}^n$ and Differentiation).
-You can find them here: https://positron0802.wordpress.com/analysis-on-manifolds-munkres/<|endoftext|>
-TITLE: Standard Notation for diagonal matrices
-QUESTION [6 upvotes]: Is there standard notation for the set of diagonal matrices? Specifically if the elements must be nonnegative, i.e. the matrix is positive semi-definite?
-
-REPLY [8 votes]: I'm not sure there is a standard notation per se, but you could construct one using standard notation.
-One common way (among others) to specify the set of non-negative reals is $\mathbb{R}_{\ge 0}$. Thus, $\mathbb{R}_{\ge 0}^n$ would be the corresponding Cartesian product (i.e. the set of all nonnegative n-tuples).
-A standard way to talk about diagonal matrices uses $\text{diag}(\cdot)$ which maps an n-tuple to the corresponding diagonal matrix:
-$$\text{diag}:\mathbb{R}^n\rightarrow \mathbb{R}^{n\times n}, \quad \text{diag}(a_1,...,a_n) :=
-\begin{bmatrix}a_1&&\\ &\ddots&\\ &&a_n\end{bmatrix}$$
-Thus the set of all positive semi-definite diagonal matrices can be constructed using set comprehension:
-$$\{ \text{diag}(v) : v \in\mathbb{R}_{\ge 0}^n \}.$$
-If you really want to talk about the elements of this set, it might be more straightforward to define some non-negative tuples $v_a, v_b, v_c,... \in \mathbb{R}_{\ge 0}^n$ first and then talk about $\text{diag}(v_a) , \text{diag}(v_b), \text{diag}(v_c),$ etc. afterwards.
-
-Update:
-Using some slight abuse of notation as discussed here, one could simply say that the set of non-negative diagonal matrices is:
-$$\text{diag}(\mathbb{R}_{\ge 0}^n).$$<|endoftext|>
-TITLE: Find $\int\frac{dx}{\cos^3x-\sin^3x}$
-QUESTION [5 upvotes]: $\int\frac{dx}{\cos^3x-\sin^3x}$
-
-Let $I=\int\frac{dx}{\cos^3x-\sin^3x}=\int\frac{dx}{(\cos x-\sin x)(\cos^2 x+\sin^2 x+\sin x\cos x)}$
-But this does not seem to lead any further, so I tried another method. 
-$I=\int\frac{dx}{\cos^3x-\sin^3x}=\int\frac{\csc^3 xdx}{\cot^3x-1}=\int\frac{\csc^2 x \csc xdx}{\cot^3x-1}=\int\frac{\csc^2 x \sqrt{1+\cot^2x}dx}{\cot^3x-1}$
-Put $\cot x=t\implies -\csc^2 x dx=dt$
-$I=\int\frac{-\sqrt{1+t^2}dt}{t^3-1}=\int\frac{-\sqrt{1+t^2}dt}{(t-1)(t^2+t+1)}$
-But I am stuck here.
-
-REPLY [7 votes]: $$\cos^3x-\sin^3x=(\cos x-\sin x)(1+\sin x\cos x)=\dfrac{(\cos x-\sin x)\{3-(\cos x-\sin x)^2\}}2$$
-Writing $\cos x-\sin x=t$ and using partial fractions,
-$$\dfrac1{t(3-t^2)}=\dfrac1{3t}+\dfrac t{3(3-t^2)}$$
-$$\implies\dfrac3{2(\cos^3x-\sin^3x)}=\dfrac1{\cos x-\sin x}+\dfrac{\cos x-\sin x}{3-(\cos x-\sin x)^2}$$
-The first integral can be managed easily.
-For the second, $$\text{as }\int(\cos x-\sin x)dx=\sin x+\cos x$$ and as $$(\cos x-\sin x)^2+(\sin x+\cos x)^2=2$$
-write $$3-(\cos x-\sin x)^2=1+(\sin x+\cos x)^2$$ and replace $\sin x+\cos x$ with $u$<|endoftext|>
-TITLE: Is $0.248163264128…$ a transcendental number?
-QUESTION [18 upvotes]: My question is in the title:
-
-Is $a=0.248163264128…$ a transcendental number? The number $a$ is defined by concatenating the powers of $2$ (in base $10$).
-
-
-It is possible to express $a$ as a series:
-$$a = \sum\limits_{n=1}^{\infty} 2^n \cdot
- 10^{ -\sum\limits_{k=1}^{n} (\lfloor{ k \cdot \log_{\,10}\,(2) }\rfloor + 1) } \tag{*}$$
-I know that $a$ is irrational.
-I know that if I consider the powers of $10$ instead of the powers of $2$, i.e. if I consider $b=0.10100100010000...$, this number is transcendental.
-Looking at the series (*), it seems very difficult to establish the transcendence of $a$. However, it is known (thanks to Kurt Mahler) that numbers such as:
-$$c = 0.149162536… =
-\sum\limits_{n=1}^{\infty} n^2 \cdot
- 10^{ -\sum\limits_{k=1}^{n} (\lfloor{ 2 \cdot \log_{\,10}\,(k) }\rfloor + 1) } \tag{**} $$
-are transcendental ($c$ is the concatenation of the square numbers in base $10$; the same holds for third powers and so on).
-I am aware that this could be a difficult problem. Similar numbers, such as the Copeland-Erdős constant, are not known to be transcendental. I would really appreciate it if anyone had a reference about this number $a$, because I didn't find anything that could help me to determine whether $a$ is transcendental, or whether it is still unknown.
-Thank you very much!
-
-REPLY [14 votes]: Yann Bugeaud "Distribution Modulo One and Diophantine Approximation", page 221:
-
-For integers $b \geq 2$ and $c \geq 2$, let $(c)_b$ denote the sequence of digits of $c$ in its representation in base $b$. Mahler [471] proved that the real number $0 (c)_{10}(c^2)_{10} \dots$ is irrational. This was subsequently reproved and extended to every base $b \geq 2$ by Bundschuh [170] and Niederreiter [539]; see also [69, 172, 647, 652].
-Problem 10.48. With the above notation, prove that, for arbitrary integers $b \geq 2$ and $c \geq 2$, Mahler's number $0 (c)_{b}(c^2)_{b} \dots$ is transcendental and normal to base $b$.
-The question of normality of $0.248163264 \dots$ to base $10$ was already posed by Pillai [561].
-
-Some of the links I was able to recover:
-
-[172] P. Bundschuh, P. J.-S. Shiue and X. Y. Yu, Transcendence and
-algebraic independence connected with Mahler type numbers, Publ. Math.
-Debrecen 56 (2000), 121-130.
-[647] Z. Shan, A note on irrationality of some numbers, J. Number
-Theory 25 (1987), 211-212.
-[652] Irrationality criteria for numbers of Mahler's type. In:
-Analytic Number Theory (Kyoto, 1996), 343-351, London Math. Soc.
-Lecture Note Ser., 247, Cambridge University Press, Cambridge, 1997. 
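-The numerical data below is easy to reproduce. Here is a short Python sketch (the truncation lengths of $200$ powers and $300$ digits are arbitrary choices, and only the first partial quotients of a truncation can be trusted, since it merely approximates Mahler's number):
-
-    from fractions import Fraction
-
-    # decimal expansion by direct concatenation of the powers of 2
-    digits = "".join(str(2 ** k) for k in range(1, 200))
-    print("0." + digits[:50])
-
-    # leading partial quotients of a 300-digit rational truncation
-    x = Fraction(int(digits[:300]), 10 ** 300)
-    cf = []
-    for _ in range(15):
-        a = int(x)      # floor, since x >= 0
-        cf.append(a)
-        x -= a
-        if x == 0:
-            break
-        x = 1 / x
-    print(cf)
-
-This prints the digit string and the quotients $[0, 4, 33, 1, 3, 2, 565, \dots]$ quoted below.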
-
-
-Some data on this number from me. More digits:
-$$0.2481632641282565121024204840968192163843276865536\dots$$
-Simple continued fraction:
-$$[0; 4, 33, 1, 3, 2, 565, 3, 5, 1, 10, 1, 43, 1, 1, 1, 1, 3, 1, 4, 1, 1, 3, 2, 3, 3, 2, 1, 1, 3, 5, 1, 16, 1, 15, 1, 2, 1, 3, 1, 3, 3, 327, \dots]$$
-Euler type continued fraction:
-$$\cfrac{1}{5-\cfrac{5}{6-\cfrac{5}{6-\cfrac{5}{51-\cfrac{50}{51-\cfrac{50}{51-\cfrac{50}{501-...}}}}}}}$$
-The probability that a bigger partial quotient occurs after a smaller one in this fraction is equal to:
-$$\frac{\ln 2}{\ln 10}=0.30103 \dots$$
-Note that this fraction always approaches the number from below; for example, this truncation is exactly equal to $0.248163264128$.
-Unfortunately, general continued fractions do not afford any insight into the transcendentality of a number, as far as I know.<|endoftext|>
-TITLE: A combinatorial proof of the identity $\sum\limits_{k=0}^n\binom{2k}{k}\binom{2(n-k)}{n-k}=4^n$
-QUESTION [10 upvotes]: I have to prove that $$\sum_{k=0}^n\binom{2k}{k}\binom{2(n-k)}{n-k}=4^n$$ The question also asks for an algebraic proof, and I used induction for that algebraic proof. For a combinatorial proof I have no idea how to proceed. The first thing I thought of was using a set and considering $k$-subsets of it, but that idea does not work here. I just want a hint; I would prefer not to be given the full answer.
-
-REPLY [3 votes]: You can prove this identity using paths with steps of size $\pm1$. There are a total of $4^n$ such paths of length $2n$ starting at $(0,0)$. Partition this set of paths according to the final visit to the $x$-axis; call that position $(2k,0)$.
-It is pretty clear that the number of paths from $(0,0)$ to $(2k,0)$ is ${2k\choose k}$; this is the red part of the diagram. It is not so clear, but
-is true, that the number of paths of length $2n-2k$ that start at $(2k,0)$ and do not touch the $x$-axis is ${2(n-k)\choose n-k}$. This is the black part of the diagram.
-
-
-Finding a combinatorial argument for this identity has a long history.
-See the references below and also Phira's answer here. The following
-diagram fills a gap in my argument above by showing a bijection between
-balanced paths that start with an upstep and strictly positive paths.
-
-Starting with a balanced path, keep the initial upstep, then color red
-until you reach the minimum value for the first time. Reverse the red
-section and swap the red and black sections, and connect them
- to create a strictly positive path.
-Starting with a strictly positive path, keep the initial upstep.
-Now the path ends at $(2n,2a)$ for some $a>0$. Begin at the right and color red
-until you reach level $a$ for the first time.
-Reverse the red section and swap the red and black sections, and connect them
- to create a balanced path.
-Similarly, balanced paths with an initial downstep are in one-to-one correspondence
-with strictly negative paths. Therefore the set of all balanced paths is
-in one-to-one correspondence with paths that don't touch the $x$-axis except at $(0,0)$.
-
-Counting and Recounting: The Aftermath by Marta Sved,
-Math. Intelligencer 6 (1984), no. 4, 44-45. 
-Bijections for the identity $4^n=\sum_{k=0}^n {2k\choose k}{2(n-k)\choose n-k}$ by David Callan
-Two New Bijections on Lattice Paths by Glenn Hurlbert and Vikram Kamat,
-arXiv:math/0609222 [math.CO]<|endoftext|>
-TITLE: How to prove $\gcd(dm,dn)=d\cdot\gcd(m,n)$
-QUESTION [5 upvotes]: I want to prove the following equation:
-$$
-(dm,dn) = d\cdot(m,n)
-$$
-where
-$$
-(m,n) = \gcd(m,n) \\
-(dm,dn) = \gcd(dm,dn)
-$$
-I tried this:
-$$
-(dm,dn) \rightarrow \exists g_1 \in \mathbb Z : g_1|dm, g_1|dn \rightarrow g_1|(dm\cdot x+dn\cdot y) \rightarrow g_1|d\cdot (mx+ny) \\
-\rightarrow g_1=\frac{d\cdot (mx+ny)}{t}
-$$
-And the same for $(m,n)$:
-$$
-g_2=\frac{mx+ny}{t}
-$$
-If I insert $g_1$ and $g_2$ I get:
-$$
-d\cdot\frac{mx+ny}{t}=d\cdot \frac{mx+ny}{t}
-$$
-Is this right?
-
-REPLY [5 votes]: You can also deduce it by Bézout's Lemma:
-Put $g:=(m,n)$ and $G:=(dm,dn)$. There exist integers $a,b$ such that $$an+bm=g.$$
-Therefore $a(dn)+b(dm)=dg$, which implies that $G|dg$.
-On the other hand, since $g|m$ and $g|n$ we get $dg|dm$ and $dg|dn$. Hence $dg|G$ and thus $G=dg$.<|endoftext|>
-TITLE: Prove the following trigonometric identity without a calculator involved
-QUESTION [25 upvotes]: I have to prove the following statement.
-
-$$1+\cos{2\pi\over5}+\cos{4\pi\over5}+\cos{6\pi\over5}+\cos{8\pi\over5}=0$$
-
-I have tried to use the sum of angles formula for cosine, but didn't get to a point where I'd be able to show that it is equal to $0$.
-
-REPLY [4 votes]: The given terms are the projections onto the $x$-axis of the five unit radii of a regular pentagon. As a sum of cyclic vectors (equivalently, of forces in static equilibrium acting at a point), it sums to zero.
-BTW and likewise, the $y$-axis projection sum
-$$ \sin{0}+\sin{2\pi\over5}+\sin{4\pi\over5}+\sin{6\pi\over5}+\sin{8\pi\over5} $$
-also equals zero.
-Also we have the formula for the sum of cosines of $n$ angles in arithmetic progression with common difference $\beta$,
-$$ \dfrac{\sin (n \beta/2) }{ \sin (\beta/2)} \cdot \cos \dfrac{\alpha_1 +\alpha_n}{2}, $$
-which also vanishes here, since $n=5$ and $\beta=2\pi/5$ give $\sin(n\beta/2)=\sin\pi=0$.<|endoftext|>
-TITLE: Complete Graphs as Unions of Paths
-QUESTION [6 upvotes]: Show that for $n \geq 2$ the complete graph $K_n$ is the union of paths of distinct lengths.
-I have been stuck on this problem for the past couple of days now and would really like to see a solution/proof.
-What I have tried so far is the following:
-We know that the size of the set of edges $E(K_n)$ is $|E(K_n)| = {n \choose 2} = \frac{n!}{2!(n-2)!} = \frac{n(n-1)}{2} = \displaystyle \sum_{i=1}^{n-1} i$.
-From here, I considered $K_{n+1}$ and the respective size of the edge set which came out to $\frac{n(n+1)}{2}$. If I understand correctly, then we need to somehow choose a partition of $K_n$, so maybe separating the vertex set into two different sets might help, but I am not even sure if this is the right way to go about it.
-Many thanks in advance for your time. Any help is greatly appreciated.
-
-REPLY [2 votes]: If $n=2k+1$, the graph is the edge-disjoint union of $k$ Hamiltonian cycles, and from there you know what to do.
-If $n=2k$, use the previous construction for $n-1$ with the additional constraint that exactly two paths start at each vertex except one, and then extend each path with an extra edge.<|endoftext|>
-TITLE: Pattern in twin primes
-QUESTION [9 upvotes]: I recently noticed a pattern in twin primes. My questions are: does this pattern continue to hold indefinitely, and how would I prove it? 
Here's the pattern:
-For the $n$th prime, there exist exactly $n-2$ twin prime pairs that can be created as follows:
-$p_n$ is the $n$th prime,
-$P_p=\prod_{1