Calculating the Golden Ratio

Here I’ll exhibit a method for approximating another well-known irrational number. The Golden Ratio, often denoted \varphi, has fascinated people for many years, some going as far as to attribute mystical connotations to it.

\varphi crops up a lot in nature and it’s always a pleasure to see it. It’s most easily understood as the positive root of the quadratic x^2-x-1, making it equal to \frac{1+\sqrt{5}}{2}. Note that this means it has the property \varphi^2 = \varphi + 1. This time we’ll construct a sequence converging to \varphi using the function f(x) = 1 + \frac{1}{x}.

Let x_0 = 1, and for all n\in\mathbb{N} let x_{n+1} = f(x_n). Then the first few numbers in this sequence are

1,\ 2,\ \frac{3}{2},\ \frac{5}{3},\ \frac{8}{5},\ \frac{13}{8},\ \ldots
One might begin to suspect at this point that x_n = \frac{F_{n+1}}{F_n}, where \{F_n\} is the Fibonacci sequence (with F_0 = F_1 = 1). We can prove this is the case by induction; we’ve already shown the base case by listing the first few values above. Now, if we assume that for some k\in\mathbb{N} we have x_k = \frac{F_{k+1}}{F_k}, then

\begin{array}{ccl} x_{k+1} &=& f(x_k)\\&=& 1+\frac{1}{x_k}\\ &=& \frac{F_{k+1} + F_k}{F_{k+1}}\\&=& \frac{F_{k+2}}{F_{k+1}}\end{array} .

This sequence is known to converge to \varphi, but I’ve never explicitly convinced myself of this before, so I’ll do that now. First we’ll define a new sequence \{y_n\} by y_n = x_{n+1}. In doing so we’ve ditched the first element of the sequence, and so now for all n\in\mathbb{N}, 1.5\leq y_n \leq 2. This is because y_0 = x_1 = 2, and for all x we have that 1.5\leq x\leq 2 implies 1.5\leq f(x)\leq 2 (f is decreasing on this interval, with f(1.5) = \frac{5}{3} and f(2) = 1.5). Now we note that for all 1.5\leq x,y \leq 2 we have

\begin{array}{ccl} |f(x)-f(y)| &=& |\frac{1}{x}-\frac{1}{y}|\\&=& \frac{1}{xy}|x-y| \\ &\leq& \frac{4}{9}|x-y|\end{array} ,

since \frac{1}{xy} achieves a maximum of \alpha = \frac{4}{9} on the square [1.5,2]\times [1.5,2].

(This shows that f(x) is a contraction mapping on the interval 1.5\leq x\leq 2. At this point we could appeal to the Banach fixed-point theorem and we’d be done. Instead, we directly prove convergence of our sequence, taking some inspiration from that theorem’s proof.)

The next thing we can do is derive the following inequality (from here on we assume that all numbers we consider are between 1.5 and 2):

\begin{array}{ccl} |x-y| &\leq& |x-f(x)| + |f(x)-f(y)| + |y-f(y)| \\ &\leq& |x-f(x)| + \alpha|x-y| + |y-f(y)| \\ \Rightarrow\ (1-\alpha)|x-y| &\leq& |x-f(x)| + |y-f(y)| \\ \Rightarrow\ |x-y| &\leq& \frac{1}{1-\alpha}\left(|x-f(x)|+ |y-f(y)|\right)\end{array}


Next we show by induction that |y_{n+1}-y_n| \leq \alpha^n|y_1-y_0| for all n. The base case is immediate:

|y_1-y_0| \leq \alpha^0|y_1 - y_0|.

Now if we assume that for some k, |y_{k+1} - y_k| \leq \alpha^k|y_1-y_0| then

|y_{k+2} - y_{k+1}| = |f(y_{k+1} )- f(y_k)| \leq \alpha|y_{k+1}-y_k| \leq \alpha^{k+1}|y_1-y_0|.

Thus by induction |y_{n+1}-y_n| \leq \alpha^n|y_1-y_0| for all n.
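As a sanity check, here’s a quick Python sketch (my own, not part of the proof) verifying this geometric bound numerically for the first twenty differences:

```python
# Sanity check: |y_{n+1} - y_n| <= (4/9)^n |y_1 - y_0|, where y_n = x_{n+1}
# and x_{n+1} = 1 + 1/x_n with x_0 = 1 (so y_0 = 2).
def f(x):
    return 1 + 1 / x

ys = [f(1.0)]                     # y_0 = x_1 = 2
for _ in range(20):
    ys.append(f(ys[-1]))

alpha = 4 / 9
c = abs(ys[1] - ys[0])            # |y_1 - y_0| = 0.5
ok = all(abs(ys[n + 1] - ys[n]) <= alpha ** n * c + 1e-15 for n in range(20))
print(ok)  # True
```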

Now we show this sequence is Cauchy by noting that for all m,n (applying the inequality above with x = y_n and y = y_m)

|y_n - y_m| \leq\frac{1}{1-\alpha}(|y_{n+1}-y_n|+|y_{m+1}-y_m|)\leq \frac{\alpha^n + \alpha^m}{1-\alpha}|y_1 - y_0|

which clearly approaches 0 as m and n go to \infty. So \{y_n\} (and thus \{x_n\}) is Cauchy, and therefore convergent since \mathbb{R} is complete. To see that it converges to \varphi, we’ll use the continuity of f(x):

\begin{array}{ccl} \lim_{n\rightarrow \infty}x_{n}&=& \lim_{n\rightarrow \infty}x_{n+1} \\&=& \lim_{n\rightarrow\infty}f(x_n)\\&=& f(\lim_{n\rightarrow\infty}x_n)\end{array}

Since the equation f(x) = x has a unique solution for 1.5\leq x\leq 2, it must be that \lim_{n\rightarrow \infty}x_n = \varphi.

Thus by iterated application of the function f(x) = 1+\frac{1}{x} we may approximate \varphi. Continued iteration yields ever more accurate approximations.
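For the curious, here’s a minimal Python sketch of the iteration (the function name and iteration count are my own choices):

```python
# Approximate the golden ratio by iterating f(x) = 1 + 1/x from x_0 = 1.
def f(x):
    return 1 + 1 / x

x = 1.0
for _ in range(40):               # each step shrinks the error by roughly 1/phi^2
    x = f(x)

phi = (1 + 5 ** 0.5) / 2          # the exact value (1 + sqrt(5)) / 2
print(x, abs(x - phi))
```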

Let me know if I’ve made any mistakes, I haven’t given myself much proof-reading time before publishing (=




Calculating Square Root of 2

I had so much fun deriving that series which converges to pi the other day that I thought it’d be fun to occasionally whip up such series for other irrational numbers. Today I’ll do \sqrt{2}.

There are many different series that would do the job; the one I’ll derive is found by expanding \sqrt{x} around x=1 as a power series.

Indeed, for x satisfying |x-1| < 1 we have

\begin{array}{ccl} \sqrt{x} & = & 1 + \frac{1}{2}(x-1) - \frac{1}{8}(x-1)^2 + \frac{1}{16}(x-1)^3-\frac{5}{128}(x-1)^4+ ... \\ & = & 1+ \frac{1}{2}\sum_{n=0}^\infty (-\frac{1}{4})^n\frac{(2n)!}{(n!)^2(n+1)}(x-1)^{n+1}\end{array}.

This series is guaranteed to converge for all 0<x<2, and to diverge for all x<0 and x>2. The number 2 itself sits right on the boundary of the circle of convergence, so although we would like to say

\sqrt{2} = 1 + \frac{1}{2}\sum_{n=0}^\infty (-\frac{1}{4})^n\frac{(2n)!}{(n!)^2(n+1)},

we first need to show that the series is convergent. Consider the sequence a_n = \frac{(2n)!}{4^n(n!)^2(n+1)}. This sequence is monotonically decreasing (its term ratio is \frac{a_{k+1}}{a_k} = \frac{(2k+1)(2k+2)}{4(k+1)(k+2)} < 1), and we will show that for all n>0 we have a_n < \frac{1}{n}. Firstly,

a_1 = \frac{2!}{4\cdot 2} = \frac{1}{4} < 1.

We now assume that a_k < \frac{1}{k} for some integer k and show that a_{k+1} < \frac{1}{k+1}. This is true, since

\begin{array}{ccl}\frac{(2k+2)!}{4^{k+1}((k+1)!)^2(k+2)} & = & \frac{(2k+1)(2k+2)}{4(k+1)(k+2)}\frac{(2k)!}{4^k(k!)^2(k+1)} \\ &=& \frac{(2k+1)(2k+2)}{4(k+1)(k+2)}a_k\\& < & \frac{(2k+1)(2k+2)}{4k(k+1)(k+2)}\\ & = & \frac{1}{4}(\frac{1}{k}+\frac{3}{k+2}) \\ & \leq & \frac{1}{k+1}\end{array},

where the last inequality holds for k\geq1. Thus a_{k} < \frac{1}{k} implies a_{k+1}<\frac{1}{k+1}, and so by induction a_n < \frac{1}{n} for all n.
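Here’s a small Python check of the bound a_n < \frac{1}{n} (my own sketch; it steps between consecutive terms with the exact ratio above, using rational arithmetic to avoid huge factorials):

```python
# Check a_n = (2n)! / (4^n (n!)^2 (n+1)) < 1/n for n = 1..199, stepping with
# the exact ratio a_k / a_{k-1} = (2k-1)(2k) / (4k(k+1)) in rational arithmetic.
from fractions import Fraction

a = Fraction(1)                   # a_0 = 1
for k in range(1, 200):
    a *= Fraction((2 * k - 1) * (2 * k), 4 * k * (k + 1))
    assert a < Fraction(1, k)
print("a_n < 1/n holds for n = 1..199")
```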

To show convergence of our series, we note that we have now shown enough to invoke the alternating series test: the terms a_n are monotonically decreasing and, since 0 < a_n < \frac{1}{n}, they tend to 0. So it is true that

1 + \frac{1}{2}\sum_{n=0}^\infty (-\frac{1}{4})^n\frac{(2n)!}{(n!)^2(n+1)}

converges, and by Abel’s theorem (a power series is continuous at a boundary point where it converges) it must equal \sqrt{2}. Truncating the series at n=10,000,000 gives

\sqrt{2}\approx 1.4142135623775118

which is correct to 11 decimal places.
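Here’s a rough Python sketch of that computation (my own; it uses far fewer terms, generating them from the term ratio rather than computing factorials):

```python
# Sum sqrt(2) = 1 + (1/2) * sum_{n>=0} (-1/4)^n (2n)! / ((n!)^2 (n+1)),
# generating terms via t_{n+1} = t_n * -(2n+1)(2n+2) / (4(n+1)(n+2)).
term = 1.0                        # n = 0 term
total = 0.0
for n in range(100_000):
    total += term
    term *= -(2 * n + 1) * (2 * n + 2) / (4 * (n + 1) * (n + 2))

approx = 1 + 0.5 * total
print(approx)                     # agrees with sqrt(2) to about 8 decimal places
```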

Mathematical Proof that God EXISTS!!

Trigger Warning: This post takes the position that Santa Claus doesn’t exist.

I heard this one from a friend of a friend; it’s a good one. It goes like this:

Consider the logical connective associated with implication, sometimes called the material conditional. It captures the intuition behind statements of the form “if blah is true then bleep is true”. So if a and b are two truth values (either true or false) and we write the material conditional as “if a then b”, then we can write out the four possible outputs:

  1. “if false then false” is true (both are false, implication still makes sense).
  2. “if false then true” is true (remember a implies b, but b can be true whether or not a is).
  3. “if true then false” is false (a implies b, so this combination is not allowed).
  4. “if true then true” is true (a forces b to be true).

The material conditional evaluates to either true or false, depending on whether the truth values are consistent with the implication. Now we can consider a to represent the proposition “God exists”. Let b stand for “Santa Claus exists”. We can now ask ourselves, is a true? Well, clearly b is false. What about the implication “if a then b“? I think everyone would agree that the existence of God in no way implies the existence of Santa, so we can say “if God exists then Santa exists” is false. We now check out the above list and observe that case 3 uniquely matches our situation; the whole implication is false and b is false. From this we conclude that a must be true, and therefore God exists.
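For concreteness, the material conditional can be sketched as a Boolean function in Python (this snippet is mine, not part of the original argument):

```python
# The material conditional "if a then b" as a Boolean function: (not a) or b.
def implies(a, b):
    return (not a) or b

table = {(a, b): implies(a, b) for a in (False, True) for b in (False, True)}
for (a, b), val in sorted(table.items()):
    print(f"if {a} then {b}: {val}")
```

The only row that evaluates to false is a = True, b = False, which is exactly the row the argument above relies on.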

Half Angle Substitution

Here’s a neat substitution for integration that I never learned in calculus classes but found in a book. I also frequently forget the formulae, so whenever I want to use it I need to re-derive everything; hopefully writing it here will help me remember.

If x is our integration variable, then we make the substitution t = \tan(x/2). Using trig identities, we can now express the trigonometric functions of x in terms of t.

\sin x = 2\sin \frac{x}{2}\cos \frac{x}{2}
= 2\tan\frac{x}{2}\cos^2\frac{x}{2}
= 2\frac{t}{\sec^2(x/2)}
= 2t/(t^2+1).

\cos x = 1 - 2\sin^2\frac{x}{2}
= 1 - 2\tan^2\frac{x}{2}\cos^2\frac{x}{2}
= (t^2+1-2t^2)/(t^2+1)
= (1-t^2)/(1+t^2)

\tan x = \sin x /\cos x
= 2t/(1-t^2)

And of course dt = \frac{1}{2}\sec^2\frac{x}{2}dx so dx = 2dt/(1+t^2).
Now we can work out things like \int \csc x \, dx, which is given by
\int \frac{1+t^2}{2t}\cdot\frac{2\,dt}{1+t^2} = \int \frac{dt}{t} = \ln\left|\tan\frac{x}{2}\right| + c
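As a quick numerical sanity check of these identities (a Python sketch of my own):

```python
# Check t = tan(x/2) gives sin x = 2t/(1+t^2), cos x = (1-t^2)/(1+t^2),
# tan x = 2t/(1-t^2) at a few sample points (avoiding x = pi and x = pi/2).
import math

def identities_hold(x):
    t = math.tan(x / 2)
    return (math.isclose(math.sin(x), 2 * t / (1 + t ** 2))
            and math.isclose(math.cos(x), (1 - t ** 2) / (1 + t ** 2))
            and math.isclose(math.tan(x), 2 * t / (1 - t ** 2)))

all_ok = all(identities_hold(x) for x in (0.3, 1.0, 2.5))
print(all_ok)  # True
```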

Happy Belated Pi Day

I didn’t create this blog until after Pi day, so unfortunately I’m a bit late. I celebrate Pi day by deriving the same series expression for pi every year. I’m not sure who to attribute this particular method to, but I definitely learned it from some old calculus book once upon a time. It’s as simple as calculating \int_0^1\frac{1}{x^2+1}dx two different ways and equating the results. First up, we can make the substitution x = \tan\theta and simplify, yielding \int_0^{\pi/4}d\theta = \frac{\pi}{4}. On the other hand, if |x|<1 then \sum_{n=0}^\infty(-1)^nx^{2n} converges (uniformly on all intervals |x|\leq r < 1) to \frac{1}{x^2+1}, and so we may proceed with

\int_0^1\frac{1}{1+x^2}dx = \sum_{n=0}^\infty(-1)^n\int_0^1x^{2n}dx, where swapping the sum and the integral can be justified by Abel’s theorem, since the integrated series converges at x=1. Each \int_0^1x^{2n}dx = \frac{1}{2n+1}, and so putting it all together we’re left with

\pi = 4\sum_{n=0}^\infty \frac{(-1)^n}{2n+1} = 4(1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+...)

And there it is!
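A two-line Python sketch of the partial sums (mine, for illustration; the series converges painfully slowly):

```python
# Partial sum of the Leibniz series pi = 4 * sum_{n>=0} (-1)^n / (2n + 1).
# Convergence is slow: the error after N terms is on the order of 1/N.
approx = 4 * sum((-1) ** n / (2 * n + 1) for n in range(100_000))
print(approx)                     # about 1e-5 below pi
```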

Generating Function for Fibonacci Sequence

We seek a closed form for the power series \sum_{n=0}^\infty F_nz^n, where F_0 = 1, F_1 = 1 and F_{n+2} = F_{n+1} + F_n. First we note that the ratio test yields

\lim_{n\rightarrow\infty}\frac{F_{n+1}}{F_n}|z| = \varphi |z|, where \varphi = \frac{1+\sqrt{5}}{2} is the golden ratio. Thus the series converges on the region of the complex plane where |z| < \frac{1}{\varphi}. Now let the power series be denoted by F(z), and note that

F(z) - zF(z) - z^2F(z) = F_0 + (F_1 - F_0)z + \sum_{n=0}^\infty (F_{n+2}-F_{n+1}-F_n)z^{n+2} = 1 + 0 + 0 = 1

and so F(z) = \frac{1}{1-z-z^2}. It seems so unlikely, before you know better at least, that things like this can be done. Why should it be that the nth Fibonacci number is given by \frac{1}{2\pi i}\oint\frac{1}{z^{n+1}(1-z-z^2)}dz, the integral along a closed contour around the origin (inside the radius of convergence) of a function that otherwise seems unrelated? That’s maths!
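Here’s a short Python sketch (my own) recovering the Fibonacci numbers from \frac{1}{1-z-z^2} by the standard power-series reciprocal recurrence, i.e. matching coefficients in (1-z-z^2)F(z) = 1:

```python
# Expand F(z) = 1/(1 - z - z^2) by the power-series reciprocal recurrence:
# matching coefficients of z^n in (1 - z - z^2) F(z) = 1 gives
# c_0 = 1 and c_n = c_{n-1} + c_{n-2} -- the Fibonacci recurrence.
d = [1, -1, -1]                   # coefficients of 1 - z - z^2
c = [1]                           # c_0 = 1 / d_0
for n in range(1, 12):
    c.append(-sum(d[k] * c[n - k] for k in range(1, min(n, 2) + 1)))
print(c)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
```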

On Complex vs Real Differentiability

Once upon a time I was confused about what really makes a holomorphic function different from a function on \mathbb{R}^2 that is differentiable in the real sense. Obviously I knew how they were different, but I wanted to put it in a context that was more meaningful to me, and after some thinking I arrived at the following and was satisfied:

Let’s cover some definitions; it’s likely I’ll write these classic definitions in more detail in later posts. Let f:\mathbb{C}\rightarrow\mathbb{C} be a complex function and let z_0 be a complex number. f is called (complex) differentiable at z_0 if the limit

f'(z_0) = \lim_{z\rightarrow z_0}\frac{f(z) - f(z_0)}{z-z_0}

exists, where the limit must exist and agree along every path by which z approaches z_0. The function f is differentiable on a subset A\subset\mathbb{C} if it is differentiable at all z_0\in A. Now a complex number can be written as z = x+iy, where x,y\in\mathbb{R}, and a complex function can be written as f(z=x+iy) = u(x,y)+iv(x,y), where u and v are both functions from \mathbb{R}^2 to \mathbb{R}. As \mathbb{R}-vector spaces, \mathbb{C} and \mathbb{R}^2 are isomorphic and we can think of f as a function f:\mathbb{R}^2\rightarrow\mathbb{R}^2 given by

(x,y)\mapsto (u(x,y),v(x,y)).

Now it can be said that a complex function is complex differentiable if and only if it is differentiable in the real sense and u and v satisfy the Cauchy-Riemann equations (\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} and \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}). Here “differentiable in the real sense” means the map (x,y)\mapsto(u(x,y),v(x,y)) is differentiable as a function on \mathbb{R}^2, which holds in particular whenever all partial derivatives of u and v exist and are continuous.

This observation is well known and does a good job of contrasting the two senses of differentiability, but I wanted to know the answer to “why is the derivative of the real version of the function a linear transformation (given by the Jacobian matrix), while the derivative of the complex version is another complex function?” The answer is obvious in hindsight, but I was happy when I went from not knowing to knowing, so I thought I’d better recount it here. The trick is, of course, that we construct an injective ring homomorphism (which I’ll call \varphi; it is a field isomorphism onto its image) from \mathbb{C} into \mathrm{M}_2(\mathbb{R}) (the ring of 2 by 2 matrices with real entries) given by

\varphi(z) = \begin{pmatrix} \mathrm{Re}z & -\mathrm{Im}z \\ \mathrm{Im}z& \mathrm{Re}z \end{pmatrix}.

Now taking the derivative of f gives us a function \mathrm{D}f:\mathbb{R}^2\rightarrow\mathrm{M}_2(\mathbb{R}), but the Cauchy-Riemann equations mean that if f is complex differentiable, then \mathrm{D}f(\mathbb{R}^2) \subset \varphi(\mathbb{C})\subset\mathrm{M}_2(\mathbb{R}). So in that case we can think of \mathrm{D}f as a function from \mathbb{R}^2\cong\mathbb{C} to \varphi(\mathbb{C})\cong\mathbb{C}, and so my question is resolved! The real version of the function is complex differentiable precisely when, at each point, the Jacobian matrix can be identified with a complex number through the isomorphism \varphi.
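To make this concrete, here’s a small Python sketch (my own, with f(z) = z^2 as the example) checking numerically that the Jacobian has the \varphi(\mathbb{C}) structure:

```python
# For f(z) = z^2, so u = x^2 - y^2 and v = 2xy, estimate the Jacobian at a
# sample point by central differences and check the Cauchy-Riemann structure
# [[a, -b], [b, a]] with a + ib = f'(z) = 2z.
def u(x, y): return x * x - y * y
def v(x, y): return 2 * x * y

x0, y0, h = 0.7, -0.3, 1e-6
ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
vx = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
vy = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)

cr_holds = abs(ux - vy) < 1e-6 and abs(vx + uy) < 1e-6
print(cr_holds, ux, vx)           # ux + i*vx is close to 2*(0.7 - 0.3i)
```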