
Sunday, June 27, 2021

Beta function

dirichlet beta

Derivative of Dirichlet Beta function


We prove the following infinite product, which turns out to be a modified version of the derivative of the Dirichlet beta function: \frac{\sqrt[5]{5}\cdot\sqrt[9]{9}\cdot\sqrt[13]{13}\cdots}{\sqrt[3]{3}\cdot\sqrt[7]{7}\cdot\sqrt[11]{11}\cdot\sqrt[15]{15}\cdots}=\left(\frac{\pi e^{-\gamma}}{\Gamma\left(\frac{3}{4}\right)^{4}}\right)^{\frac{\pi}{4}} Proof

Take logarithms on both sides: -\frac{\ln(3)}{3}+\frac{\ln(5)}{5}-\frac{\ln(7)}{7}+\frac{\ln(9)}{9}-\frac{\ln(11)}{11}+\frac{\ln(13)}{13}-\cdots=\frac{\pi}{4}\left[\ln(\pi)-\gamma-4\ln\Gamma\left(\frac{3}{4}\right)\right] Then, we want to prove \sum_{n=1}^{\infty}\frac{(-1)^{n+1}\ln(2n-1)}{2n-1}=\frac{\pi}{4}\left[\ln(\pi)-\gamma-4\ln\Gamma\left(\frac{3}{4}\right)\right] Consider the Dirichlet beta function \beta(v)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{(2n-1)^{v}} Then \beta'(v)=\sum_{n=1}^{\infty}\frac{(-1)^{n}\ln(2n-1)}{(2n-1)^{v}} Therefore we want to find -\beta'(1)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}\ln(2n-1)}{2n-1} There are various ways to find the derivative; one of the most straightforward uses Kummer's Fourier series for the log-gamma function: \ln\Gamma(t)=\frac{1}{2}\ln\left(\frac{\pi}{\sin \pi t}\right)+\left[\gamma+\ln(2\pi)\right]\left(\frac{1}{2}-t\right)+\frac{1}{\pi}\sum_{n=1}^{\infty}\frac{\ln n}{n}\sin(2\pi n t) It turns out that if t=\frac{1}{4} then \sin\left(\frac{\pi n}{2}\right)=\begin{cases} \sin\left(\frac{\pi}{2}\right)=1 & \textrm{ if } n=1\\ \sin\left(\pi\right)=0 & \textrm{ if } n=2\\ \sin\left(\frac{3\pi}{2}\right)=-1 & \textrm{ if } n=3\\ \sin\left(2\pi\right)=0 & \textrm{ if } n=4 \end{cases} and this pattern repeats with period 4. Then \sum_{n=1}^{\infty}\frac{\ln n}{n}\sin\left(\frac{\pi n}{2}\right)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}\ln(2n-1)}{2n-1} Therefore, using the reflection formula \Gamma\left(\frac{1}{4}\right)=\frac{\pi}{\sin\left(\frac{\pi}{4}\right)\Gamma\left(\frac{3}{4}\right)}=\frac{\pi\sqrt{2}}{\Gamma\left(\frac{3}{4}\right)}, \begin{align*}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}\ln(2n-1)}{2n-1}=&\pi\left[\ln\Gamma\left(\frac{1}{4}\right)-\frac{1}{2}\ln\left(\frac{\pi}{\sin\frac{\pi}{4}}\right)-\left[\gamma+\ln(2\pi)\right]\left(\frac{1}{4}\right)\right]\\=&\pi\left[\ln(\pi)+\frac{1}{2}\ln(2)-\ln\Gamma\left(\frac{3}{4}\right)-\frac{1}{2}\ln(\pi)-\frac{1}{4}\ln(2)-\frac{1}{4}\gamma-\frac{\ln(2\pi)}{4}\right]\\=&\frac{\pi}{4}\left[\ln(\pi)-\gamma-4\ln\Gamma\left(\frac{3}{4}\right)\right]\end{align*} Therefore \boxed{\frac{\sqrt[5]{5}\cdot\sqrt[9]{9}\cdot\sqrt[13]{13}\cdots}{\sqrt[3]{3}\cdot\sqrt[7]{7}\cdot\sqrt[11]{11}\cdots}=\left(\frac{\pi e^{-\gamma}}{\Gamma\left(\frac{3}{4}\right)^{4}}\right)^{\frac{\pi}{4}}}
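As a quick numerical sanity check (a sketch, not part of the proof), we can sum the alternating series directly and compare it with the closed form; averaging two consecutive partial sums accelerates the convergence of an alternating series. The value of the Euler-Mascheroni constant is hard-coded below.

```python
import math

gamma = 0.5772156649015329  # Euler-Mascheroni constant

# Partial sums of  sum_{n>=1} (-1)^{n+1} ln(2n-1)/(2n-1)
N = 10**5
s = 0.0
for n in range(1, N + 1):
    s += (-1) ** (n + 1) * math.log(2 * n - 1) / (2 * n - 1)
# next partial sum; note (-1)^{(N+1)+1} = (-1)^N
s_next = s + (-1) ** N * math.log(2 * N + 1) / (2 * N + 1)
series = (s + s_next) / 2  # average of S_N and S_{N+1}

closed = (math.pi / 4) * (math.log(math.pi) - gamma - 4 * math.lgamma(0.75))
print(series, closed)
```

Both numbers agree to many decimal places, which is reassuring given how slowly the raw series converges.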

Thursday, June 24, 2021

Application of Ramanujan's master theorem

Bessel function

Mellin transform of the Bessel function

We show the following result \int_{0}^{\infty}\left(\frac{x}{2}\right)^{s-1}J_{s}(x)\,dx=\Gamma(s) Proof
We will use Ramanujan's master theorem to prove this result: assume that the function f has an expansion of the form f(x)=\sum_{k=0}^{\infty}\frac{\phi(k)}{k!}(-x)^{k} for some analytic function \phi(k); then the Mellin transform of f(x) is given by \{\mathscr{M}f(x)\}(s)=\int_{0}^{\infty}x^{s-1}f(x)\,dx=\Gamma(s)\phi(-s) Back to our problem, recall the series expansion of the Bessel function: J_{s}(x)=\left(\frac{x}{2}\right)^{s}\sum_{j=0}^{\infty}\frac{(-x^{2}/4)^{j}}{j!\,\Gamma(1+j+s)} Then \int_{0}^{\infty}\left(\frac{x}{2}\right)^{s-1}J_{s}(x)\,dx=\int_{0}^{\infty}\left(\frac{x}{2}\right)^{2s-1}\sum_{j=0}^{\infty}\frac{(-x^{2}/4)^{j}}{j!\,\Gamma(1+j+s)}\,dx Let w=\frac{x^{2}}{4}; then dw=\frac{x}{2}\,dx, therefore \int_{0}^{\infty}\left(\frac{x}{2}\right)^{2s-1}\sum_{j=0}^{\infty}\frac{(-x^{2}/4)^{j}}{j!\,\Gamma(1+j+s)}\,dx=\int_{0}^{\infty}w^{s-1}\sum_{j=0}^{\infty}\frac{(-w)^{j}}{j!\,\Gamma(1+j+s)}\,dw Clearly f(w)=\sum_{j=0}^{\infty}\frac{(-w)^{j}}{j!\,\Gamma(1+j+s)} \quad \textrm{and} \quad \phi(j)=\frac{1}{\Gamma(1+j+s)} By Ramanujan's master theorem: \int_{0}^{\infty}w^{s-1}\sum_{j=0}^{\infty}\frac{(-w)^{j}}{j!\,\Gamma(1+j+s)}\,dw=\Gamma(s)\phi(-s)=\frac{\Gamma(s)}{\Gamma(1)}=\Gamma(s) Then \boxed{\int_{0}^{\infty}\left(\frac{x}{2}\right)^{s-1}J_{s}(x)\,dx=\Gamma(s)}
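For a numerical sanity check (not part of the proof) we can take s=\frac{1}{2}, where the Bessel function has the elementary form J_{1/2}(x)=\sqrt{\frac{2}{\pi x}}\sin x, so the integrand reduces to \frac{2\sin x}{\sqrt{\pi}\,x} and the claim becomes the Dirichlet integral. The oscillatory integral is summed period by period, with the last two partial sums averaged:

```python
import math

def integrand(x):
    # (x/2)^{s-1} J_s(x) at s = 1/2, i.e. 2*sin(x)/(sqrt(pi)*x)
    if x == 0.0:
        return 2.0 / math.sqrt(math.pi)  # limiting value at x = 0
    return 2.0 * math.sin(x) / (math.sqrt(math.pi) * x)

def simpson(f, a, b, m=200):
    h = (b - a) / m
    total = f(a) + f(b)
    for i in range(1, m):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

# contributions over [k*pi, (k+1)*pi] alternate in sign,
# so averaging consecutive partial sums accelerates convergence
partial, prev = 0.0, 0.0
for k in range(400):
    prev = partial
    partial += simpson(integrand, k * math.pi, (k + 1) * math.pi)
value = (partial + prev) / 2

exact = math.gamma(0.5)  # sqrt(pi)
print(value, exact)
```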

Integral of the day

Integral involving the composition of the sin function with itself

We prove the following integral: \int_{0}^{\frac{\pi}{2}}\sin(\sin x)\,dx=1-\frac{1}{3!!^{2}}+\frac{1}{5!!^{2}}-\frac{1}{7!!^{2}}+\cdots Proof
Recall the series expansion of \sin(x): \sin x=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)!}x^{2n+1} \quad x\in\mathbb{R} Therefore \begin{align*}\int_{0}^{\frac{\pi}{2}}\sin(\sin x)\,dx=&\int_{0}^{\frac{\pi}{2}}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)!}\sin^{2n+1}(x)\,dx\\=&\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)!}\int_{0}^{\frac{\pi}{2}}\sin^{2n+1}(x)\,dx\\=&\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)!}\int_{0}^{1}w^{2n+1}(1-w^{2})^{-\frac{1}{2}}\,dw \quad (w\mapsto\sin x)\\=&\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)!}\int_{0}^{1}\frac{t^{n}(1-t)^{-\frac{1}{2}}}{2}\,dt \quad (t\mapsto w^{2})\\=&\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)!}\frac{B\left(n+1,\frac{1}{2}\right)}{2}\\=&\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)!}\frac{\Gamma(n+1)\Gamma\left(\frac{1}{2}\right)}{2\Gamma\left(n+1+\frac{1}{2}\right)}\\=&\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)!}\frac{n!\sqrt{\pi}}{2\Gamma\left(n+1+\frac{1}{2}\right)} \quad \left(\Gamma\left(\tfrac{1}{2}\right)=\sqrt{\pi}\right)\end{align*} Now recall the Legendre duplication formula \sqrt{\pi}\,\Gamma(2z)=2^{2z-1}\Gamma(z)\Gamma\left(z+\frac{1}{2}\right) Applying the formula to \Gamma\left(n+1+\frac{1}{2}\right) (with z=n+1): \begin{align*}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)!}\frac{n!\sqrt{\pi}}{2\Gamma\left(n+1+\frac{1}{2}\right)}=&\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)!}\frac{2^{2n}n!\,\Gamma(n+1)}{\Gamma(2n+2)}\\=&\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)!}\frac{2^{2n}n!^{2}}{(2n+1)!}\\=&\sum_{n=0}^{\infty}(-1)^{n}\frac{2^{2n}n!^{2}}{(2n+1)!^{2}}\\=&\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)!!^{2}}\end{align*} where the last step uses (2n+1)!=(2n+1)!!\,2^{n}n!. \boxed{\int_{0}^{\frac{\pi}{2}}\sin(\sin x)\,dx=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)!!^{2}}}
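Both sides are easy to evaluate numerically, so a quick check (a sketch, not part of the proof) compares Simpson quadrature of the integral against the rapidly convergent double-factorial series:

```python
import math

def simpson(f, a, b, m=1000):
    h = (b - a) / m
    total = f(a) + f(b)
    for i in range(1, m):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

integral = simpson(lambda x: math.sin(math.sin(x)), 0.0, math.pi / 2)

series, dfact = 0.0, 1.0
for n in range(12):
    dfact *= 2 * n + 1              # dfact = (2n+1)!!
    series += (-1) ** n / dfact ** 2
print(integral, series)
```

Twelve terms of the series already exceed double precision, since (2n+1)!!^2 grows factorially.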

Wednesday, June 23, 2021

Feynman's trick II

log integral

Formula involving integrals of logarithm function

We prove the following formula involving the integral of powers of the function \ln(1-x): \int_{0}^{1}\frac{\ln^{n}(1-x)\ln(x)}{x}dx=(-1)^{n+1}n!\left[\frac{n+1}{2}\zeta(n+2)-\frac{1}{2}\sum_{k=1}^{n-1}\zeta(k+1)\zeta(n+1-k)\right] Some examples for the first naturals: \begin{align*} n=1&\quad \int_{0}^{1}\frac{\ln(1-x)\ln(x)}{x}dx=\zeta(3)\\ n=2&\quad \int_{0}^{1}\frac{\ln^{2}(1-x)\ln(x)}{x}dx=-3\zeta(4)+\zeta(2)\zeta(2)=-\frac{\pi^{4}}{180}\\ n=3&\quad \int_{0}^{1}\frac{\ln^{3}(1-x)\ln(x)}{x}dx=12\zeta(5)-6\zeta(2)\zeta(3)=12\zeta(5)-\pi^{2}\zeta(3)\\ n=4&\quad \int_{0}^{1}\frac{\ln^{4}(1-x)\ln(x)}{x}dx=-60\zeta(6)+24\zeta(2)\zeta(4)+12\zeta(3)\zeta(3)=12\zeta^{2}(3)-\frac{2\pi^{6}}{105}\\ n=5&\quad \int_{0}^{1}\frac{\ln^{5}(1-x)\ln(x)}{x}dx=360\zeta(7)-120\zeta(2)\zeta(5)-120\zeta(3)\zeta(4)=-\frac{4\pi^{4}\zeta(3)}{3}-20\pi^{2}\zeta(5)+360\zeta(7) \end{align*} Proof \begin{align*} \int_{0}^{1}\frac{\ln^{n}(1-x)\ln(x)}{x}dx=&\int_{0}^{1}\frac{\left[\frac{d^{n}}{dt^{n}}\Big|_{t=0^{+}}(1-x)^{t}\right]\left[\frac{d}{ds}\Big|_{s=0^{+}}x^{s}\right]}{x}dx\\=&\frac{d^{n}}{dt^{n}}\Big|_{t=0^{+}}\frac{d}{ds}\Big|_{s=0^{+}}\int_{0}^{1}(1-x)^{t}x^{s-1}dx\\=&\frac{d^{n}}{dt^{n}}\Big|_{t=0^{+}}\frac{d}{ds}\Big|_{s=0^{+}}B(t+1,s)\end{align*} Now, recall the series expansion of the beta function (proof in Appendix 1): B(x,y)=\sum_{j=0}^{\infty}\frac{(1-y)_{j}}{j!(x+j)} \quad y>0 Therefore \begin{align*}\frac{d^{n}}{dt^{n}}\Big|_{t=0^{+}}\frac{d}{ds}\Big|_{s=0^{+}}B(t+1,s)=&\frac{d^{n}}{dt^{n}}\Big|_{t=0^{+}}\left[\lim_{s\to 0^{+}}\frac{d}{ds}\sum_{j=0}^{\infty}\frac{(1-s)_{j}}{j!(t+1+j)}\right]\\=&\frac{d^{n}}{dt^{n}}\Big|_{t=0^{+}}\left[\lim_{s\to 0^{+}}\sum_{j=0}^{\infty}\frac{(1-s)_{j}\left(\psi^{(0)}(1-s)-\psi^{(0)}(1-s+j)\right)}{j!(t+1+j)}\right]\\=&\frac{d^{n}}{dt^{n}}\Big|_{t=0^{+}}\left[\sum_{j=0}^{\infty}\frac{\Gamma(j+1)\left(\psi^{(0)}(1)-\psi^{(0)}(1+j)\right)}{j!(t+1+j)}\right]\\=&\frac{d^{n}}{dt^{n}}\Big|_{t=0^{+}}\left[-\sum_{j=0}^{\infty}\frac{\Gamma(j+1)H_{j}}{j!(t+1+j)}\right] \quad \left(\psi^{(0)}(1+j)=H_{j}-\gamma\right)\\=&\frac{d^{n}}{dt^{n}}\Big|_{t=0^{+}}\left[-\sum_{j=0}^{\infty}\frac{H_{j}}{t+1+j}\right]\\=&\lim_{t\to 0^{+}}\sum_{j=0}^{\infty}\frac{(-1)^{n+1}n!\,H_{j}}{(t+1+j)^{n+1}}\\=&\sum_{j=0}^{\infty}\frac{(-1)^{n+1}n!\,H_{j}}{(1+j)^{n+1}}\end{align*} Now we know that H_{j+1}=\frac{1}{j+1}+H_{j}, therefore \begin{align*}\sum_{j=0}^{\infty}\frac{(-1)^{n+1}n!\,H_{j}}{(1+j)^{n+1}}=&\sum_{j=0}^{\infty}\frac{(-1)^{n+1}n!\,H_{j+1}}{(1+j)^{n+1}}-\sum_{j=0}^{\infty}\frac{(-1)^{n+1}n!}{(1+j)^{n+2}}\\=&(-1)^{n+1}n!\left[\sum_{j=0}^{\infty}\frac{H_{j+1}}{(1+j)^{n+1}}-\sum_{j=0}^{\infty}\frac{1}{(1+j)^{n+2}}\right]\end{align*} If m=j+1 then \begin{align*}(-1)^{n+1}n!\left[\sum_{j=0}^{\infty}\frac{H_{j+1}}{(1+j)^{n+1}}-\sum_{j=0}^{\infty}\frac{1}{(1+j)^{n+2}}\right]=&(-1)^{n+1}n!\left[\sum_{m=1}^{\infty}\frac{H_{m}}{m^{n+1}}-\sum_{m=1}^{\infty}\frac{1}{m^{n+2}}\right]\\=&(-1)^{n+1}n!\left[\sum_{m=1}^{\infty}\frac{H_{m}}{m^{n+1}}-\zeta(n+2)\right]\end{align*} Now, recall this beautiful result proven by Euler (proof in Appendix 2): \sum_{n=1}^{\infty}\frac{H_{n}}{n^{r}}=\left(1+\frac{r}{2}\right)\zeta(r+1)-\frac{1}{2}\sum_{k=1}^{r-2}\zeta(k+1)\zeta(r-k) Then \begin{align*}(-1)^{n+1}n!\left[\sum_{m=1}^{\infty}\frac{H_{m}}{m^{n+1}}-\zeta(n+2)\right]=&(-1)^{n+1}n!\left[\left(1+\frac{n+1}{2}\right)\zeta(n+2)-\frac{1}{2}\sum_{k=1}^{n-1}\zeta(k+1)\zeta(n+1-k)-\zeta(n+2)\right]\\=&(-1)^{n+1}n!\left[\frac{n+1}{2}\zeta(n+2)-\frac{1}{2}\sum_{k=1}^{n-1}\zeta(k+1)\zeta(n+1-k)\right]\end{align*} Therefore \boxed{\int_{0}^{1}\frac{\ln^{n}(1-x)\ln(x)}{x}dx=(-1)^{n+1}n!\left[\frac{n+1}{2}\zeta(n+2)-\frac{1}{2}\sum_{k=1}^{n-1}\zeta(k+1)\zeta(n+1-k)\right]}
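The proof reduces the integral to (-1)^{n+1}n!\left[\sum_{m} H_{m}/m^{n+1}-\zeta(n+2)\right], which is easy to check numerically. As a sanity check of the n=2 example (a sketch, not part of the proof), we compare partial sums against -\pi^{4}/180:

```python
import math

# partial sums of  Sigma H_m / m^3  and of zeta(4)
M = 200000
H, euler_sum, zeta4 = 0.0, 0.0, 0.0
for m in range(1, M + 1):
    H += 1.0 / m
    euler_sum += H / m**3
    zeta4 += 1.0 / m**4

# (-1)^{n+1} n! [ Sigma H_m/m^{n+1} - zeta(n+2) ]  with n = 2
value = (-1) ** 3 * math.factorial(2) * (euler_sum - zeta4)
exact = -math.pi**4 / 180
print(value, exact)
```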

Appendix 1: Series expansion for the beta function

From the definition of the beta function: B(x,y) = \int_{0}^{1} t^{x-1}(1-t)^{y-1}dt Recall the binomial series of (1-t)^{y-1}: (1-t)^{y-1} = \sum_{j=0}^{\infty} (-1)^{j} \binom{y-1}{j}t^{j} \begin{align*} \int_{0}^{1} t^{x-1}(1-t)^{y-1}dt =& \int_{0}^{1} t^{x-1}\sum_{j=0}^{\infty} (-1)^{j} \binom{y-1}{j}t^{j}dt\\ =& \sum_{j=0}^{\infty} (-1)^{j} \binom{y-1}{j} \int_{0}^{1} t^{x+j-1}dt\\ =& \sum_{j=0}^{\infty} (-1)^{j} \binom{y-1}{j} \frac{1}{(x+j)} \end{align*} Then we have \begin{align*} \sum_{j=0}^{\infty} (-1)^{j} \binom{y-1}{j}\frac{1}{(x+j)} =& \sum_{j=0}^{\infty}\binom{j-y}{j}\frac{1}{(x+j)}\\ =& \sum_{j=0}^{\infty}\frac{(j-y)!}{j!(-y)!}\frac{1}{(x+j)}\\ =& \sum_{j=0}^{\infty}\frac{\Gamma(j-y+1)}{j!\Gamma(1-y)} \frac{1}{(x+j)}\\ =& \sum_{j=0}^{\infty}\frac{(1-y)_{j}}{j!(x+j)} \end{align*} where the last equality is the definition of the Pochhammer polynomials. Therefore \boxed{B(x,y) = \sum_{j=0}^{\infty}\frac{(1-y)_{j}}{j!(x+j)}}
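A quick numerical check of the boxed series (a sketch, with sample values x=2.5, y=1.7 chosen arbitrarily): building the coefficient (1-y)_{j}/j! by its term ratio avoids overflowing factorials.

```python
import math

x, y = 2.5, 1.7
beta_exact = math.gamma(x) * math.gamma(y) / math.gamma(x + y)

s, coeff = 0.0, 1.0                  # coeff = (1-y)_j / j!, starting at j = 0
for j in range(200000):
    s += coeff / (x + j)
    coeff *= (1 - y + j) / (j + 1)   # ratio of consecutive coefficients
print(s, beta_exact)
```

The terms decay like j^{-y-1}, so the series converges only for y>0, matching the condition in the boxed formula.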

Appendix 2: Proof of Euler's theorem for harmonic series

I found a beautiful proof of this result which relies on the residue theorem. I really liked this proof because it shows that the famous result due to Euler is just a residue of a certain kernel function. I present here the original proof in the spirit of Flajolet et al. [1], but I fill in the steps omitted by the authors in the original paper.

We need the following Lemma due to Cauchy and Lindelöf.

Lemma. Let \xi be a kernel function and let r(s) be a rational function with \mathscr{O}(s^{-2}) at infinity. Then \underbrace{\sum_{\alpha } \left\{\operatorname{Res}\left(r(s)\xi(s), \alpha\right) \Big| \alpha \textrm{ is a pole of } \xi \textrm{ that is not a pole of } r(s) \right\}}_{\textrm{infinite series}} = -\underbrace{\sum_{\beta } \left\{\operatorname{Res}\left(r(s)\xi(s), \beta \right) \Big| \beta \textrm{ is a pole of } r(s) \right\}}_{\textrm{finite series}}

We will apply the lemma to the kernel \displaystyle \xi(s) = (\psi(-s)+\gamma)^2

Proposition. If \displaystyle \xi(s) = (\psi(-s)+\gamma)^2 then
\sum_{\alpha } \left\{\operatorname{Res}\left(r(s)\xi(s), \alpha\right) \Big| \alpha \textrm{ is a pole of } \xi \textrm{ that is not a pole of } r(s) \right\} = 2\sum_{n=1}^{\infty}H_{n}r(n)+ \sum_{n=1}^{\infty}r'(n)
Proof

The kernel \xi(s) has countably many poles, located at the positive integers.
Let \displaystyle n\in \mathbb{N}; then the function \displaystyle (\psi(-s)+\gamma) has the following expansion as s \to n.

\psi(-s)+\gamma = \frac{1}{(s-n)}+ H_{n} + \sum_{k=1}^{\infty}\left[(-1)^{k} H_{n}^{(k+1)} - \zeta(k+1)\right](s-n)^{k} Therefore
\begin{align*} (\psi(-s)+\gamma)^2 =& \left[\frac{1}{(s-n)}+ H_{n} + \sum_{k=1}^{\infty}\underbrace{\left[(-1)^{k} H_{n}^{(k+1)} - \zeta(k+1)\right]}_{A_{k}}(s-n)^{k}\right]^2\\ =& \left(\frac{1}{(s-n)}+ H_{n}\right)^2 + 2\left(\frac{1}{(s-n)}+ H_{n}\right)\sum_{k=1}^{\infty}A_{k}(s-n)^{k} + \left(\sum_{k=1}^{\infty}A_{k}(s-n)^{k}\right)^2\\ =& \frac{1}{(s-n)^2}+\frac{2H_{n}}{(s-n)} + H^2_{n} + 2\left(\frac{1}{(s-n)}+ H_{n}\right)\sum_{k=1}^{\infty}A_{k}(s-n)^{k} + \left(\sum_{k=1}^{\infty}A_{k}(s-n)^{k}\right)^2\\ =& \frac{1}{(s-n)^2}+\frac{2H_{n}}{(s-n)} + H^2_{n} + 2A_{1} + \mathscr{O}(s-n) \end{align*} Now, suppose r(s) is a rational function that can be expanded with its power series (in reality, we have assumed something weaker: that r(s) is \mathscr{O}(s^{-2})). Then for s \to n: r(s) = r(n)+ r'(n)(s-n)+\frac{r''(n)}{2!}(s-n)^2+\frac{r'''(n)}{3!}(s-n)^3+... Therefore \begin{align*} r(s)(\psi(-s)+\gamma)^2 =& r(n)(\psi(-s)+\gamma)^2+ r'(n)(s-n)(\psi(-s)+\gamma)^2+\frac{r''(n)}{2!}(s-n)^2(\psi(-s)+\gamma)^2+...\\ =& \frac{r(n)}{(s-n)^2}+\frac{2H_{n}r(n)}{(s-n)} + H^2_{n}r(n) + 2A_{1}r(n) + \mathscr{O}(s-n)r(n) + \frac{r'(n)}{(s-n)}+2H_{n}r'(n) + H^2_{n}r'(n)(s-n) + 2A_{1}r'(n)(s-n) + \mathscr{O}(s-n)r'(n)(s-n) + ... \\ =& H^2_{n}r(n)+ 2A_{1}r(n)+2H_{n}r'(n)+\frac{r''(n)}{2!}+\frac{1}{s-n}\underbrace{\left[2H_{n}r(n)+r'(n)\right]}_{\textrm{residue at } n} +\frac{r(n)}{(s-n)^2} + \mathscr{O}(s-n) \end{align*} \therefore \operatorname{Res}\left(r(s)(\psi(-s)+\gamma)^2, n\right) = 2H_{n}r(n)+r'(n) Summing over each n \in \mathbb{N}\setminus\left\{0\right\} \boxed{\sum_{n=1}^{\infty} \operatorname{Res}\left(r(s)(\psi(-s)+\gamma)^2, n\right) = 2\sum_{n=1}^{\infty}H_{n}r(n)+ \sum_{n=1}^{\infty}r'(n)} or, equivalently \boxed{\sum_{\alpha } \left\{\operatorname{Res}\left(r(s)\xi(s), \alpha\right) \Big| \alpha \textrm{ is a pole of } \xi \textrm{ that is not a pole of } r(s) \right\} = 2\sum_{n=1}^{\infty}H_{n}r(n)+ \sum_{n=1}^{\infty}r'(n)}

We are ready to prove Euler's theorem

Theorem (Euler). For integer q \geq 2, \sum_{n=1}^{\infty} \frac{H_{n}}{n^q} = \left(1+\frac{q}{2}\right)\zeta(q+1) -\frac{1}{2}\sum_{k=1}^{q-2} \zeta(k+1)\zeta(q-k) Proof If we apply the last result to the function \displaystyle r(s)= \frac{1}{s^q} we have \sum_{n=1}^{\infty} \operatorname{Res}\left(\frac{(\psi(-s)+\gamma)^2}{s^q}, n\right) = 2\sum_{n=1}^{\infty}\frac{H_{n}}{n^q} -q\zeta(q+1) By the first lemma, \displaystyle -\sum_{\beta} \left\{\operatorname{Res}\left(\frac{(\psi(-s)+\gamma)^2}{s^q}, \beta \right) \Big| \beta \textrm{ is a pole of } \frac{1}{s^q} \right\} = 2\sum_{n=1}^{\infty}\frac{H_{n}}{n^q} -q\zeta(q+1) Given that \displaystyle r(s)= \frac{1}{s^q} has a single pole at s=0 of order q we have: -\operatorname{Res}\left(\frac{(\psi(-s)+\gamma)^2}{s^q}, 0\right) =2\sum_{n=1}^{\infty}\frac{H_{n}}{n^q} -q\zeta(q+1)

To find the residue we can expand \displaystyle \frac{(\psi(-s)+\gamma)^2}{s^q}:
First, recall the series expansion of \psi(-s)+\gamma at s=0:
\psi(-s)+\gamma = \frac{1}{s} - \sum_{k=1}^{\infty} \zeta(k+1)s^{k} Therefore \begin{align*} (\psi(-s)+\gamma)^2 =& \left[\frac{1}{s} -\sum_{k=1}^{\infty} \zeta(k+1)s^{k}\right]^2 \\ =& \frac{1}{s^2} -2\sum_{k=1}^{\infty} \zeta(k+1)s^{k-1} + \left(\sum_{k=1}^{\infty} \zeta(k+1)s^{k}\right)^2\\ =& \frac{1}{s^2} -2\sum_{k=1}^{\infty} \zeta(k+1)s^{k-1} + \sum_{k=1}^{\infty}\sum_{j=1}^{k} \zeta(j+1)\zeta(k-j+2)s^{k+1}\\ \end{align*} Then \begin{align*} \frac{(\psi(-s)+\gamma)^2}{s^q} = \frac{1}{s^{2+q}} -2\underbrace{\sum_{k=1}^{\infty} \zeta(k+1)s^{k-1-q}}_{A} + \underbrace{\sum_{k=1}^{\infty}\sum_{j=1}^{k} \zeta(j+1)\zeta(k-j+2)s^{k+1-q}}_{B} \end{align*} From here it is easy to see that A contributes to the residue through the term with \displaystyle s^{k-1-q}=s^{-1} \Longrightarrow k-1-q=-1 \Longrightarrow k=q, and B through the term with \displaystyle s^{k+1-q}=s^{-1} \Longrightarrow k+1-q=-1 \Longrightarrow k=q-2 Therefore \operatorname{Res}\left(\frac{(\psi(-s)+\gamma)^2}{s^q},0\right) = -2\zeta(q+1) + \sum_{j=1}^{q-2} \zeta(j+1)\zeta(q-j) Then 2\zeta(q+1) - \sum_{j=1}^{q-2} \zeta(j+1)\zeta(q-j) = 2\sum_{n=1}^{\infty}\frac{H_{n}}{n^q} -q\zeta(q+1) \boxed{\therefore\sum_{n=1}^{\infty} \frac{H_{n}}{n^q} = \left(1+\frac{q}{2}\right)\zeta(q+1) -\frac{1}{2}\sum_{k=1}^{q-2} \zeta(k+1)\zeta(q-k)}
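Euler's theorem is easy to check numerically; as a sanity check (a sketch, not part of the proof), here are the cases q=2, where the theorem gives 2\zeta(3), and q=3, where it gives \frac{5}{2}\zeta(4)-\frac{1}{2}\zeta(2)^2:

```python
import math

# partial sums of the Euler sums and the needed zeta values
N = 10**6
H, s2, s3, z3, z4 = 0.0, 0.0, 0.0, 0.0, 0.0
for n in range(1, N + 1):
    H += 1.0 / n
    s2 += H / n**2
    s3 += H / n**3
    z3 += 1.0 / n**3
    z4 += 1.0 / n**4
zeta2 = math.pi**2 / 6

print(s2, 2 * z3)                     # q = 2
print(s3, 2.5 * z4 - 0.5 * zeta2**2)  # q = 3
```

The q=2 partial sum still carries a tail of order \ln N/N, so the agreement is only to four or five digits there, while q=3 matches essentially to machine precision.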

[1] Philippe Flajolet, Bruno Salvy. Euler Sums and Contour Integral Representations. Research Report RR-2917, INRIA, 1996. inria-00073780

Friday, June 18, 2021

Cauchy integral formula

triple exponential

Triple exponential integral

We show the proof of the following integral: \int_{0}^{\pi} e^{e^{\cos x}\cos\sin x}\cos(e^{\cos x} \sin \sin x) dx = \pi e On the way we will find this beautiful result: \int_{0}^{\pi } \left(e^{e^{e^{ix}}} + e^{e^{e^{-ix}}}\right)dx =2\pi e Proof \begin{align*} \int_{0}^{\pi} e^{e^{\cos x}\cos\sin x}\cos(e^{\cos x} \sin \sin x) dx = &\int_{0}^{\pi} e^{e^{\cos x}\cos\sin x} \left[ \frac{ e^{ie^{\cos x}\sin\sin x} + e^{-ie^{\cos x}\sin\sin x}}{2}\right]dx \\ =& \frac{1}{2}\int_{0}^{\pi } e^{e^{\cos x}\cos\sin x+ie^{\cos x}\sin\sin x} + e^{e^{\cos x}\cos\sin x-ie^{\cos x}\sin\sin x}dx\\ =& \frac{1}{2}\int_{0}^{\pi } e^{e^{\cos x}(\cos\sin x+i\sin\sin x)} + e^{e^{\cos x}(\cos\sin x-i\sin\sin x)}dx\\ =& \frac{1}{2}\int_{0}^{\pi } e^{e^{\cos x}e^{i\sin x}} + e^{e^{\cos x}e^{-i\sin x}}dx\\ =& \frac{1}{2}\int_{0}^{\pi } e^{e^{\cos x + i\sin x}} + e^{e^{\cos x-i\sin x}}dx\\ =& \frac{1}{2}\int_{0}^{\pi } e^{e^{e^{ix}}} + e^{e^{e^{-ix}}}dx\\ =& \frac{1}{2}\int_{0}^{\pi } \left[ \sum_{n=0}^{\infty} \frac{(e^{e^{ix}})^n}{n!} + \sum_{n=0}^{\infty} \frac{(e^{e^{-ix}})^{n}}{n!} \right]dx\\ =& \frac{1}{2}\int_{0}^{\pi } \sum_{n=0}^{\infty} \frac{(e^{e^{ix}})^n+(e^{e^{-ix}})^{n}}{n!} dx\\ =& \frac{1}{2}\sum_{n=0}^{\infty} \frac{1}{n!}\int_{0}^{\pi } (e^{e^{ix}})^n+(e^{e^{-ix}})^{n}dx\\ =& \frac{1}{2}\sum_{n=0}^{\infty} \frac{1}{n!} \left[\int_{0}^{\pi } (e^{e^{ix}})^ndx+ \int_{0}^{\pi }(e^{e^{-ix}})^{n}dx\right]\\ =& \frac{1}{2}\sum_{n=0}^{\infty} \frac{1}{n!}\left[\int_{0}^{\pi } (e^{e^{ix}})^ndx- \int_{\pi}^{0 }(e^{e^{-ix}})^{n}dx\right]\\ =& \frac{1}{2}\sum_{n=0}^{\infty} \frac{1}{n!}\left[\int_{0}^{\pi } (e^{e^{ix}})^ndx+ \int_{-\pi}^{0 }(e^{e^{ix}})^{n}dx\right]\\ =& \frac{1}{2}\sum_{n=0}^{\infty} \frac{1}{n!}\int_{-\pi}^{\pi } (e^{e^{ix}})^ndx \\ =& \frac{1}{2}\sum_{n=0}^{\infty} \frac{1}{n!}\oint_{\gamma} \frac{e^{zn}}{zi} dz \quad \left( \textrm{where } \gamma(x) = e^{ix} \quad x\in [-\pi,\pi]\right)\\ =& \frac{1}{2}\sum_{n=0}^{\infty} \frac{1}{n!} 2\pi \quad (\textrm{Cauchy integral formula}) \\ =& \pi e \end{align*} \boxed{\int_{0}^{\pi} e^{e^{\cos x}\cos\sin x}\cos(e^{\cos x} \sin \sin x) dx = \pi e } Corollary \boxed{ \int_{0}^{\pi } \left(e^{e^{e^{ix}}} + e^{e^{e^{-ix}}}\right)dx =2\pi e}
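Since the integrand is exactly \operatorname{Re}\,e^{e^{e^{ix}}} and extends to a smooth periodic function, the trapezoidal rule converges extremely fast here, so a quick numerical check (a sketch, not part of the proof) is short:

```python
import cmath
import math

def f(x):
    # the integrand rewritten as the real part of the triple exponential
    return cmath.exp(cmath.exp(cmath.exp(1j * x))).real

# trapezoidal rule on [0, pi]; f is even about both endpoints,
# so this inherits the spectral accuracy of the full-period rule
m = 400
h = math.pi / m
integral = h * ((f(0.0) + f(math.pi)) / 2 + sum(f(k * h) for k in range(1, m)))

exact = math.pi * math.e
print(integral, exact)
```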

Thursday, June 17, 2021

Riemann zeta function

zeta series

Series involving the zeta function, e and pi

We show the proof of this beautiful series posted by @infseriesbot. \exp\left(\frac{\zeta(2)}{2}-\frac{\zeta(2)}{3}+\frac{\zeta(4)}{4}-\frac{\zeta(4)}{5}+\frac{\zeta(6)}{6}-...\right)=\sqrt{\frac{2\pi}e} To prove this result we use two concepts:
  1. The Taylor series expansion of the log-Gamma function \ln \Gamma(x)
  2. The Euler's reflection formula for the Gamma function \Gamma(x)
Let's start with the left-hand side \begin{align*} \frac{\zeta(2)}{2}-\frac{\zeta(2)}{3}+\frac{\zeta(4)}{4}-\frac{\zeta(4)}{5}+\frac{\zeta(6)}{6}-...=&\sum_{n=1}^{\infty} \left(\frac{1}{2n}-\frac{1}{2n+1}\right)\zeta(2n)\\ =&\sum_{n=1}^{\infty}\frac{\zeta(2n)}{(2n)(2n+1)} \end{align*} What is the series of the right hand side? This series is related to the expansion series of the log-Gamma function: \ln(\Gamma(z))=-\gamma(z-1)+\sum_{k=2}^{\infty}\frac{(-1)^{k}\zeta(k)}{k} (z-1)^{k} \quad |z-1|<1 where \gamma is the Euler-Mascheroni constant. If we let z=1+t and then z=1-t in the expansion series we obtain two equations: \begin{align*} \ln(\Gamma(t+1))&=-\gamma t+\sum_{k=2}^{\infty}\frac{(-1)^{k}\zeta(k)}{k} t^{k} \\ \ln(\Gamma(1-t))&=\gamma t+\sum_{k=2}^{\infty}\frac{\zeta(k)}{k} t^{k} \end{align*} Adding both equations (the odd-order terms cancel): \begin{align*} \ln(\Gamma(t+1))+\ln(\Gamma(1-t))&=\sum_{k=1}^{\infty}\frac{\zeta(2k)}{k} t^{2k} \end{align*} Integrating both sides: \begin{align*} \int_{0}^{1}\left(\ln(\Gamma(t+1))+\ln(\Gamma(1-t))\right)dt &=\sum_{k=1}^{\infty}\frac{\zeta(2k)}{k} \int_{0}^{1}t^{2k} dt \\ \Longrightarrow \frac{1}{2}\int_{0}^{1}\left(\ln(\Gamma(t+1))+\ln(\Gamma(1-t))\right)dt & = \sum_{k=1}^{\infty}\frac{\zeta(2k)}{2k(2k+1)} \tag{1} \end{align*} This is the series we were looking for, expressed as a definite integral. Hopefully we can transform it into a manageable one.
Here is where Euler's reflection formula for the Gamma function is useful: \begin{align*} \Gamma(t)\Gamma(1-t)=\frac{\pi}{\sin(\pi t)} \quad t\notin \mathbb{Z} \end{align*} Therefore \begin{align*} \frac{1}{2}\int_{0}^{1}\left(\ln(\Gamma(t+1))+\ln(\Gamma(1-t))\right)dt & = \frac{1}{2}\int_{0}^{1}\ln\left(\Gamma(t+1)\Gamma(1-t)\right)dt\\ & =\frac{1}{2}\int_{0}^{1}\ln\left(t\,\Gamma(t)\Gamma(1-t)\right)dt\\ & =\frac{1}{2}\int_{0}^{1}\ln\left( \frac{t \pi}{\sin(\pi t)}\right)dt\\ & = \frac{1}{2}\ln(\pi)-\frac{1}{2}-\frac{1}{2}\int_{0}^{1}\ln(\sin(\pi t))dt \tag{2} \end{align*} where we used \int_{0}^{1}\ln(t)dt=-1. We just have to prove that \int_{0}^{1}\ln(\sin(\pi t))dt=-\ln(2). Write I for this integral. Then \begin{align*} I & = 2\int_{0}^{\frac{1}{2}}\ln(\sin(2\pi u))du \quad (t=2u)\\ & =2\int_{0}^{\frac{1}{2}}\left[\ln(2)+\ln(\sin(\pi u))+\ln(\cos(\pi u))\right]du \quad (\sin(2\theta)=2\sin(\theta)\cos(\theta)) \\ & = \ln(2)+ 2\int_{0}^{\frac{1}{2}}\ln(\sin(\pi u))du+2\int_{0}^{\frac{1}{2}}\ln(\sin(\pi w))dw \quad \text{($w=\frac{1}{2}-u$)}\\ & = \ln(2)+ I + I \end{align*} where the last step uses the symmetry of \sin(\pi u) about u=\frac{1}{2}, so that 2\int_{0}^{\frac{1}{2}}\ln(\sin(\pi u))du=I. \begin{align*} \Longrightarrow I=\int_{0}^{1}\ln(\sin(\pi t))dt & = -\ln(2) \tag{3} \end{align*} We conclude from results (1), (2) and (3) our claim: \begin{align*} \frac{\zeta(2)}{2}-\frac{\zeta(2)}{3}+\frac{\zeta(4)}{4}-\frac{\zeta(4)}{5}+\frac{\zeta(6)}{6}-\cdots&= \frac{1}{2}\ln(\pi)-\frac{1}{2}-\frac{1}{2}\int_{0}^{1}\ln(\sin(\pi t))dt\\ =&\frac{1}{2}\ln(\pi)-\frac{1}{2}+\frac{1}{2}\ln(2) \end{align*} \boxed{ \exp\left(\frac{\zeta(2)}{2}-\frac{\zeta(2)}{3}+\frac{\zeta(4)}{4}-\frac{\zeta(4)}{5}+\frac{\zeta(6)}{6}-\cdots\right)=\sqrt{\frac{2\pi}{e}}}

Wednesday, June 16, 2021

Feynman's trick

Feynman

Integral involving the Feynman's integral trick

Today we show the proof of this result which involves the famous Feynman's integral trick twice. The post of @infseriesbot has a typo (the sign is wrong). First, we differentiate the parameter a under the integral sign: \begin{align*} \frac{d}{da} \int_{0}^{\infty} \frac{(1-\cos a x)\ln(x)}{x^2}dx =& \int_{0}^{\infty} \frac{d}{da} \frac{(1-\cos a x)\ln(x)}{x^2}dx\\ =&\int_{0}^{\infty} \frac{ \sin a x \ln (x)}{x} dx\\ =&\int_{0}^{\infty} \frac{ \sin w \ln (\frac{w}{a})}{w} dw \quad (w \mapsto ax)\\ =&\int_{0}^{\infty} \frac{ \sin w \ln (w)}{w} dw-\ln (a)\int_{0}^{\infty} \frac{ \sin w }{w} dw \\ =&\int_{0}^{\infty} \frac{\sin w \ln (w)}{w} dw-\frac{\pi \ln(a)}{2}\\ =& \int_{0}^{\infty} \frac{ \sin w \frac{d}{dt} w^{t} \big|_{t=0+}}{w} dw-\frac{\pi \ln(a)}{2}\\ =& \frac{d}{dt}\Big|_{t=0+}\int_{0}^{\infty} w^{t-1} \sin w dw -\frac{\pi \ln(a)}{2} \end{align*} Here we are using the fact that \displaystyle \frac{d}{dt} w^{t} \big|_{t=0+} =\ln(w) and the fact that \displaystyle \int_{0}^{\infty} \frac{\sin(w)}{w}dw = \frac{\pi}{2} (a famous result obtained through complex analysis). Note that the first integral is the Mellin transform of \sin w, so we could use the following expansion series of \sin (a consequence of the Euler's formula and the de Moivre theorem) in order to apply the Ramanujan's master theorem: \sin w = \sum_{n=0}^{\infty}\frac{(-w)^n[-\sin\left(\frac{n\pi}{2}\right)]}{n!} Then, by Ramanujan's master theorem: \begin{align*} \frac{d}{dt}\Big|_{t=0+}\int_{0}^{\infty} w^{t-1} \sin w dw= &\frac{d}{dt}\Big|_{t=0+} \left[\Gamma(t) \sin\left(\frac{t\pi}{2}\right)\right] \\ =& \lim_{t \to 0+}\Gamma(t)\left[\frac{\pi}{2}\cos\left(\frac{t\pi}{2}\right) +\psi^0(t)\sin\left(\frac{\pi t}{2} \right) \right]\\ =& \lim_{t \to 0+}\Gamma(t+1)\left[\frac{\pi}{2}\frac{\cos\left(\frac{t\pi}{2}\right)}{t} +\psi^0(t)\frac{\sin\left(\frac{\pi t}{2} \right)}{t} \right] \end{align*} Update: I found a cool way to calculate this limit:
Note that \begin{align*} \frac{\pi}{2}\frac{\cos\left(\frac{t\pi}{2}\right)}{t} = &\frac{\pi}{2}\frac{1}{t}-\frac{\pi^3}{2^3}\frac{t}{2!}+\frac{\pi^5}{2^5}\frac{t^3}{4!}-\frac{\pi^7}{2^7}\frac{t^5}{6!}+... \\ =& \frac{\pi}{2}\frac{1}{t}+\mathcal{O}(t) \end{align*} Also note that \begin{align*}\psi^0(t)\frac{\sin\left(\frac{\pi t}{2} \right)}{t} =&\psi^0(t)\frac{\pi}{2}-\psi^0(t)\frac{\pi^3}{2^3}\frac{t^2}{3!}+\psi^0(t)\frac{\pi^5}{2^5}\frac{t^4}{5!}-\psi^0(t)\frac{\pi^7}{2^7}\frac{t^6}{7!}+...\\ =& \psi^0(t)\frac{\pi}{2}-t\psi^0(t)\left(\frac{\pi^3}{2^3}\frac{t}{3!}-\frac{\pi^5}{2^5}\frac{t^3}{5!}+\frac{\pi^7}{2^7}\frac{t^5}{7!}-...\right)\\ =& \psi^0(t)\frac{\pi}{2}-t\psi^0(t)\mathcal{O}(t) \end{align*} Therefore \begin{align*}\frac{\pi}{2}\frac{\cos\left(\frac{t\pi}{2}\right)}{t} +\psi^0(t)\frac{\sin\left(\frac{\pi t}{2} \right)}{t} =& \frac{\pi}{2}\frac{1}{t}+\psi^0(t)\frac{\pi}{2}+\mathcal{O}(t)-t\psi^0(t)\mathcal{O}(t) \\ =& \frac{\pi}{2}\left(\frac{1}{t}+ \psi^0(t)\right)+\mathcal{O}(t)-t\psi^0(t)\mathcal{O}(t)\\ =&\frac{\pi}{2}\psi^{0}(t+1)+\mathcal{O}(t)-t\psi^0(t)\mathcal{O}(t)\\ \end{align*} \lim_{t \to 0+}\Gamma(t+1)\left[\frac{\pi}{2}\frac{\cos\left(\frac{t\pi}{2}\right)}{t} +\psi^0(t)\frac{\sin\left(\frac{\pi t}{2} \right)}{t} \right] =\lim_{t \to 0+}\Gamma(t+1)\left[\frac{\pi}{2}\psi^{0}(t+1)+\mathcal{O}(t)-t\psi^0(t)\mathcal{O}(t) \right]=-\frac{\pi}{2}\gamma
where \displaystyle \lim_{t \to 0+} \psi^0(t+1) =-\gamma and \displaystyle \lim_{t \to 0+} t\psi^0(t) =-1

Therefore
\begin{align*} \frac{d}{da} \int_{0}^{\infty} \frac{(1-\cos a x)\ln(x)}{x^2}dx =& -\frac{\pi}{2} \gamma -\frac{\pi \ln(a)}{2}\\ \Longrightarrow \int_{0}^{\infty} \frac{(1-\cos a x)\ln(x)}{x^2}dx = &-\frac{\pi}{2} \int \left(\gamma +\ln(a)\right) da + C\\ =& -\frac{a\pi\gamma}{2} - \frac{a\pi \ln(a)}{2} + \frac{\pi a}{2} + C \end{align*} Note that letting a\to 0^{+} forces C=0, therefore \boxed{ \int_{0}^{\infty} \frac{(1-\cos a x)\ln(x)}{x^2}dx = -\frac{\pi a}{2}(\gamma +\ln(a)-1)}
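As a tiny sanity check of the last integration step (a sketch, not part of the argument; the name F below is just a label for the candidate antiderivative), a central finite difference of -\frac{\pi}{2}(a\gamma+a\ln a-a) should reproduce -\frac{\pi}{2}(\gamma+\ln a):

```python
import math

gamma = 0.5772156649015329  # Euler-Mascheroni constant

def F(a):
    # candidate antiderivative: -(pi/2)(a*gamma + a*ln(a) - a)
    return -(math.pi / 2) * (a * gamma + a * math.log(a) - a)

a, h = 2.0, 1e-5
numeric = (F(a + h) - F(a - h)) / (2 * h)   # central difference for F'(a)
exact = -(math.pi / 2) * (gamma + math.log(a))
print(numeric, exact)
```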

Monday, June 14, 2021

Continued fractions

coth

Continued fraction expansion for \coth\left(z\right)

We will show the proof for this continued fraction posted by @infseriesbot. \frac{e+1}{e-1} = \coth\left(\frac{1}{2}\right) = 2+\cfrac{1}{6+\cfrac{1}{10+\cfrac{1}{14+\cdots}}} Update We add the proof of this result, which is basically the same function evaluated at another point: 0 = \coth\left(-\frac{i\pi}{2}\right) = 2-\cfrac{\pi^2}{6-\cfrac{\pi^2}{10-\cfrac{\pi^2}{14-\cdots}}}
Recall the continued fraction representation for ratios of confluent hypergeometric functions of the type _{0}F_{1}(c,z): Let (a_{n}) be a sequence of complex numbers defined by a_{n} = \frac{1}{(c+n-1)(c+n)} where c is a complex constant such that c\notin \mathbb{Z^{-}}\cup \{0\} . Then 1+\mathcal{K}_{n=1}^{\infty} \left(\frac{a_{n}z}{1}\right) = \frac{_{0}F_{1} (c,z)}{_{0}F_{1} (c+1,z)}
Now, for this particular case, we will use the hypergeometric representations of \sin(z) and \cos(z): \sin(z)= z \, {_{0}F_{1}}\left(\frac{3}{2},-\frac{z^2}{4}\right) \quad \cos(z)= {_{0}F_{1}}\left(\frac{1}{2},-\frac{z^2}{4}\right) Then \begin{align*} \tan(x) = &\frac{\sin(x)}{\cos(x)} = \frac{x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+...}{1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+...}\\ =& \frac{x(1-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+...)}{1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+...}\\ =& \frac{x\cdot {_{0}F_{1}}\left(\frac{3}{2},-\frac{x^2}{4}\right)}{ {_{0}F_{1}}\left(\frac{1}{2},-\frac{x^2}{4}\right)}\\ =& \frac{x}{1-\cfrac{x^2}{3-\cfrac{x^2}{5-\cfrac{x^2}{7-\cdots}}}} \end{align*} If we evaluate at x=\frac{iz}{2} \begin{align*} \tan\left(\frac{iz}{2}\right) = &\frac{iz}{2-\cfrac{-z^2}{6-\cfrac{-z^2}{10-\cfrac{-z^2}{14-\cdots}}}}\\ =&\frac{iz}{2+\cfrac{z^2}{6+\cfrac{z^2}{10+\cfrac{z^2}{14+\cdots}}}}\\ \Longrightarrow z\coth\left(\frac{z}{2}\right) = &\,iz\cot\left(\frac{iz}{2}\right) = 2+\cfrac{z^2}{6+\cfrac{z^2}{10+\cfrac{z^2}{14+\cdots}}} \end{align*} If we evaluate at z=1 \boxed{ \frac{e+1}{e-1} = \coth\left(\frac{1}{2}\right) = 2+\cfrac{1}{6+\cfrac{1}{10+\cfrac{1}{14+\cdots}}}} Since the result is valid on the whole complex plane, if we evaluate at z=-i\pi then \displaystyle \coth\left(-\frac{i\pi}{2}\right) =0 and \boxed{ 0 = \coth\left(-\frac{i\pi}{2}\right) = 2-\cfrac{\pi^2}{6-\cfrac{\pi^2}{10-\cfrac{\pi^2}{14-\cdots}}}}
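Continued fractions with growing partial denominators converge very fast, so a short bottom-up evaluation (a sketch, not part of the proof) already matches \coth\left(\frac{1}{2}\right) to machine precision:

```python
import math

def cf(depth):
    # evaluate 2 + 1/(6 + 1/(10 + ... + 1/(4*depth+2))) from the bottom up
    value = float(4 * depth + 2)
    for k in range(depth - 1, -1, -1):
        value = (4 * k + 2) + 1.0 / value
    return value

approx = cf(15)
exact = (math.e + 1) / (math.e - 1)  # coth(1/2)
print(approx, exact)
```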

Residue theorem

niceintegral

Integral involving the Residue theorem

Today we present here the proof of this nice integral. The solution involves Complex Analysis, geometric series, even functions and trigonometry: \begin{align*} \int_{0}^{\infty} \frac{\cos \sqrt{x} }{e^{2\pi \sqrt{x}}-1}dx = & \int_{0}^{\infty}\left[ \sum_{n=1}^{\infty} e^{-2\pi n \sqrt{x}} \cos \sqrt{x}\right] dx\\ =& \sum_{n=1}^{\infty} \int_{0}^{\infty}e^{-2\pi n \sqrt{x}} \cos \sqrt{x} dx \\ =& \sum_{n=1}^{\infty} \int_{0}^{\infty}e^{-2\pi n \sqrt{x}} \left(\frac{e^{i\sqrt{x}}+e^{-i\sqrt{x}}}{2}\right) dx \\ =& \sum_{n=1}^{\infty} \int_{0}^{\infty}\frac{e^{\sqrt{x}(i-2\pi n)}+e^{\sqrt{x}(-i-2\pi n)}}{2} dx \\ =& \frac{1}{2}\sum_{n=1}^{\infty}\int_{0}^{\infty}e^{\sqrt{x}(i-2\pi n)}dx+\frac{1}{2}\sum_{n=1}^{\infty}\int_{0}^{\infty}e^{\sqrt{x}(-i-2\pi n)}dx\\ =& \frac{1}{2}\sum_{n=1}^{\infty} \tfrac{2e^{\sqrt{x}(i-2\pi n)}(-1+(i-2\pi n) \sqrt{x})}{(i-2\pi n)^2} \Big|_0^\infty+\frac{1}{2}\sum_{n=1}^{\infty} \tfrac{2e^{\sqrt{x}(-i-2\pi n)}(-1+(-i-2\pi n) \sqrt{x})}{(-i-2\pi n)^2}\Big|_{0}^{\infty}\\ =& \sum_{n=1}^{\infty} \frac{1}{(i-2\pi n)^2}+ \sum_{n=1}^{\infty} \frac{1}{(-i-2\pi n)^2}\\ =& \sum_{n=1}^{\infty} \left[\frac{1}{(i-2\pi n)^2}+\frac{1}{(-i-2\pi n)^2}\right]\\ =& 2+\sum_{n=0}^{\infty} \left[\frac{1}{(i-2\pi n)^2}+\frac{1}{(-i-2\pi n)^2}\right]\\ =& 1+\frac{1}{2}\sum_{n=-\infty}^{\infty} \frac{1}{(i-2\pi n)^2} + \frac{1}{2}\sum_{n=-\infty}^{\infty} \frac{1}{(-i-2\pi n)^2}\\ =& 1- \frac{1}{2}\operatorname{Res}\left(\frac{\pi \cot \pi z}{(i-2\pi z)^2}, \frac{i}{2\pi} \right)- \frac{1}{2}\operatorname{Res}\left(\frac{\pi \cot \pi z}{(-i-2\pi z)^2}, -\frac{i}{2\pi} \right)\\ =&1- \frac{1}{2}\lim_{z\to \frac{i}{2\pi}} \left[\frac{d}{dz} \frac{(z-\frac{i}{2\pi})^2(\pi \cot \pi z)}{(i-2\pi z)^2}\right]- \frac{1}{2}\lim_{z\to -\frac{i}{2\pi}} \left[\frac{d}{dz} \frac{(z+\frac{i}{2\pi})^2(\pi \cot \pi z)}{(-i-2\pi z)^2}\right]\\ =&1- \frac{1}{8\pi}\lim_{z\to \frac{i}{2\pi}} \frac{d}{dz}\cot \pi z- \frac{1}{8\pi}\lim_{z\to -\frac{i}{2\pi}} \frac{d}{dz}\cot \pi z\\ 
=&1+ \frac{1}{8}\csc^2 \left(\frac{i}{2}\right) + \frac{1}{8}\csc^2\left( -\frac{i}{2}\right)\\ =& 1+\frac{1}{8}\left( \frac{2i}{e^{-\frac{1}{2}}-e^{\frac{1}{2}}}\right)^2 +\frac{1}{8}\left( \frac{2i}{e^{\frac{1}{2}}-e^{-\frac{1}{2}}}\right)^2 \\ =& 1- \frac{e}{( e-1)^2} \end{align*} \boxed{\int_{0}^{\infty} \frac{\cos \sqrt{x} }{e^{2\pi \sqrt{x}}-1}dx = 1- \frac{e}{(e-1)^2}}
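A numerical sanity check (a sketch, not part of the proof): after the substitution u=\sqrt{x} the integral becomes 2\int_{0}^{\infty}\frac{u\cos u}{e^{2\pi u}-1}du, whose integrand is smooth, has the finite limit \frac{1}{\pi} at u=0, and decays like e^{-2\pi u}, so plain Simpson quadrature on a finite window suffices:

```python
import math

def g(u):
    # 2 u cos(u) / (e^{2 pi u} - 1), with its limiting value at u = 0
    if u == 0.0:
        return 1.0 / math.pi
    return 2.0 * u * math.cos(u) / math.expm1(2.0 * math.pi * u)

# Simpson's rule on [0, 8]; the tail beyond 8 is below 1e-20
m, a, b = 4000, 0.0, 8.0
h = (b - a) / m
total = g(a) + g(b)
for i in range(1, m):
    total += g(a + i * h) * (4 if i % 2 else 2)
integral = total * h / 3

exact = 1 - math.e / (math.e - 1) ** 2
print(integral, exact)
```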

Thursday, June 10, 2021

Gaussian integral

Mathedemo

Generalization of Gaussian integral

We are going to prove this result posted by @infseriesbot, which is a generalization of the Gaussian integral (the case s=\frac{1}{2}). \int_{-\infty}^{\infty} \sum_{n=0}^{\infty } \frac{(-x^2)^n}{n!^{2s}} dx = \pi^{1-s} \quad \operatorname{Re}(s)>0 It was discussed on Twitter a few months ago, and it turned out that the proof is a consequence of Ramanujan's master theorem. However, other proofs were provided using the brackets method, the log-gamma function and contour integration. In the proof we also find the values of s that guarantee the convergence of the integral (\operatorname{Re}(s)>0).

Proof. Recall the Ramanujan's master theorem: If a function f(x) has an expansion of the form f(x) = \sum_{k=0}^{\infty} \frac{\phi(k)}{k!} (-x)^k for some function (say analytic or integrable) \phi(k), then the Mellin transform of f(x) is given by \{\mathscr{M}f(x)\}(t) = \int_{0}^{\infty} x^{t-1} f(x) dx = \Gamma(t)\phi(-t) In this particular case \begin{align*} \int_{-\infty}^{\infty} \sum_{n=0}^{\infty } \frac{(-x^2)^n}{n!^{2s}} dx =& 2\int_{0}^{\infty} \sum_{n=0}^{\infty } \frac{(-x^2)^n}{n!^{2s}} dx \\ =& \int_{0}^{\infty}\sum_{n=0}^{\infty} w^{\frac{1}{2}-1} \frac{(-w)^n}{n!^{2s}} dw \quad (w \mapsto x^2)\\ =& \int_{0}^{\infty} w^{\frac{1}{2}-1} \sum_{n=0}^{\infty } \frac{(-w)^n}{n!^{2s}} dw \\ =& \int_{0}^{\infty} w^{\frac{1}{2}-1} \sum_{n=0}^{\infty } (n!)^{1-2s} \frac{(-w)^n}{n!} dw \end{align*} this is a Mellin transform \displaystyle \{\mathscr{M}f(w)\}\left(\frac{1}{2}\right) = \int_{0}^{\infty} w^{\frac{1}{2}-1} f(w)dw \textrm{ where } f(w) = \sum_{n=0}^{\infty } (n!)^{1-2s}\frac{(-w)^n}{n!} \quad \textrm{and } \; \phi(-t) = (-t)!^{1-2s} = \left[\Gamma(1-t)\right]^{1-2s} By the Ramanujan's master theorem \int_{0}^{\infty} w^{\frac{1}{2}-1} \sum_{n=0}^{\infty } (n!)^{1-2s}\frac{(-w)^n}{n!} dw = \Gamma\left(\frac{1}{2}\right) \left[\Gamma\left(\frac{1}{2}\right)\right]^{1-2s} = \pi^{1-s} The result is valid for \operatorname{Re}(s)>0. To show this, let's review the convergence of \displaystyle \sum_{n=0}^{\infty } \frac{(-x^2)^n}{n!^{2s}}. 
We can use the ratio test for the sequence \displaystyle \left(\frac{(-x^2)^n}{n!^{2s}}\right)_{n} \begin{align*} \lim_{n\to \infty}\left| \frac{\frac{(-x^2)^{n+1}}{(n+1)!^{2s}}}{\frac{(-x^2)^{n}}{n!^{2s}}}\right| =& \lim_{n\to \infty} \left| \frac{n!^{2s}}{(n+1)!^{2s}}\right||x^2| \\ =& \lim_{n \to \infty } \exp\left(2\operatorname{Re}(s)\ln\left(\frac{n!}{(n+1)!}\right)\right) |x^2|\\ =& \lim_{n \to \infty } \exp\left(-2\operatorname{Re}(s)\ln(n+1)\right) |x^2| \end{align*} \lim_{n\to \infty}\left| \frac{\frac{(-x^2)^{n+1}}{(n+1)!^{2s}}}{\frac{(-x^2)^{n}}{n!^{2s}}}\right| = \begin{cases} 0 \quad \textrm{ if } \operatorname{Re}(s) >0 \\ |x^2| \quad \textrm{ if } \operatorname{Re}(s) =0\\ \infty \quad \textrm{ if } \operatorname{Re}(s) <0 \end{cases} Therefore \sum_{n=0}^{\infty } \frac{(-x^2)^n}{n!^{2s}} <\infty \quad \textrm{if} \begin{cases} \operatorname{Re}(s)>0\\ \operatorname{Re}(s)=0 \wedge |x|<1 \end{cases} Then, we have obtained the desired result: \boxed{ \int_{-\infty}^{\infty} \sum_{n=0}^{\infty } \frac{(-x^2)^n}{n!^{2s}} dx = \pi^{1-s} \quad \operatorname{Re}(s)>0}
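As a numerical sanity check (a sketch, not part of the proof) we can take s=\frac{1}{2}, where the summand is \frac{(-x^2)^n}{n!}, i.e. the series for e^{-x^2}, and the claim reduces to the classical Gaussian integral \sqrt{\pi}=\pi^{1-\frac{1}{2}}. Summing the series directly (instead of calling exp) keeps the check honest; the window [-4,4] is chosen so that both the Gaussian tail and the floating-point cancellation in the series stay tiny:

```python
import math

def f(x, terms=60):
    # partial sum of the series  sum (-x^2)^n / n!^{2s}  at s = 1/2
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= -x * x / (n + 1)   # next term of the exponential series
    return s

# Simpson's rule on [-4, 4]
m, a, b = 2000, -4.0, 4.0
h = (b - a) / m
total = f(a) + f(b)
for i in range(1, m):
    total += f(a + i * h) * (4 if i % 2 else 2)
integral = total * h / 3

exact = math.pi ** 0.5  # pi^{1-s} with s = 1/2
print(integral, exact)
```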
