## Calculating the Golden Ratio

Here I’ll exhibit a method for approximating another well-known irrational number. The Golden Ratio, often denoted $\varphi$, has fascinated people for many years, with some going as far as to attribute mystical connotations to it.

$\varphi$ crops up a lot in nature and it’s always a pleasure to see it. It’s most easily understood as the positive root of the quadratic $x^2-x-1$, making it equal to $\frac{1+\sqrt{5}}{2}$. Note that this means it has the property $\varphi^2 = \varphi + 1$. This time we’ll construct a sequence converging to $\varphi$ using the function $f(x) = 1 + \frac{1}{x}$.

Let $x_0 = 1$, and for all $n\in\mathbb{N}$ let $x_{n+1} = f(x_n)$. Then the first few numbers in this sequence are

$1,2,\frac{3}{2},\frac{5}{3},\frac{8}{5},\frac{13}{8},...$

One might begin to suspect at this point that $x_n = \frac{F_{n+1}}{F_n}$, where $\{F_n\}$ is the Fibonacci sequence (with $F_0 = F_1 = 1$). We can prove this is the case by induction; we’ve already shown the base case by listing the first few values above. Now assume that $x_k = \frac{F_{k+1}}{F_k}$ for some $k\in\mathbb{N}$. Then

$\begin{array}{ccl} x_{k+1} &=& f(x_k)\\&=& 1+\frac{1}{x_k}\\ &=& 1+\frac{F_k}{F_{k+1}}\\ &=& \frac{F_{k+1} + F_k}{F_{k+1}}\\&=& \frac{F_{k+2}}{F_{k+1}}\end{array}$.

This sequence is known to converge to $\varphi$, but I’ve never explicitly convinced myself of this before, so I’ll do that now. First we’ll define a new sequence $\{y_n\}$ by $y_n = x_{n+1}$. In doing so we’ve ditched the first element of the sequence, and now for all $n\in\mathbb{N}$, $1.5\leq y_n \leq 2$. This is because $1.5\leq x\leq 2$ implies $1.5\leq f(x)\leq 2$ (since $f$ is decreasing on this interval, with $f(1.5) = \frac{5}{3}$ and $f(2) = 1.5$), and $y_0 = x_1 = 2$ lies in this interval. Now we note that for all $1.5\leq x,y \leq 2$ we have

$\begin{array}{ccl} |f(x)-f(y)| &=& |\frac{1}{x}-\frac{1}{y}|\\&=& \frac{1}{xy}|x-y| \\ &\leq& \frac{4}{9}|x-y|\end{array}$,

since $\frac{1}{xy}$ achieves a maximum of $\alpha = \frac{4}{9}$ on the square $[1.5,2]\times [1.5,2]$.

(This shows that $f(x)$ is a contraction mapping on the interval $1.5\leq x\leq 2$. At this point we could appeal to the Banach fixed-point theorem and we’d be done. Instead, we directly prove convergence of our sequence using some inspiration from that theorem.)

The next thing we can do is derive the following inequality (from here on in we’re assuming that all numbers we consider are between 1.5 and 2):

$\begin{array}{cccl} &|x-y| &\leq& |x-f(x)| + |f(x)-f(y)| + |f(y)-y| \\ &&\leq& |x-f(x)| + \alpha|x-y| + |y-f(y)| \\ \Rightarrow&(1-\alpha)|x-y| &\leq& |x-f(x)|+ |y-f(y)| \\ \Rightarrow&|x-y|& \leq & \frac{1}{1-\alpha}(|x-f(x)|+ |y-f(y)|)\end{array}$

Clearly

$|y_1-y_0| \leq \alpha^0|y_1 - y_0|$.

Now if we assume that for some $k$, $|y_{k+1} - y_k| \leq \alpha^k|y_1-y_0|$ then

$|y_{k+2} - y_{k+1}| = |f(y_{k+1} )- f(y_k)| \leq \alpha|y_{k+1} - y_k| \leq \alpha^{k+1}|y_1-y_0|$.

Thus by induction $|y_{n+1}-y_n| \leq \alpha^n|y_1-y_0|$ for all $n$.

Now we show this sequence is Cauchy by noting that for all $m,n$

$|y_n - y_m| \leq\frac{1}{1-\alpha}(|y_{n+1}-y_n|+|y_{m+1}-y_m|)\leq \frac{\alpha^n + \alpha^m}{1-\alpha}|y_1 - y_0|$

which clearly approaches $0$ as $m$ and $n$ go to $\infty$. So $\{y_n\}$ (and thus $\{x_n\}$) is a Cauchy sequence, and thus convergent since $\mathbb{R}$ is complete. To see that it converges to $\varphi$, we’ll use the continuity of $f(x)$:

$\begin{array}{ccl} \lim_{n\rightarrow \infty}x_{n}&=& \lim_{n\rightarrow \infty}x_{n+1} \\&=& \lim_{n\rightarrow\infty}f(x_n)\\&=& f(\lim_{n\rightarrow\infty}x_n)\end{array}$

Since the equation $f(x) = x$ (equivalently $x^2 = x + 1$) has a unique solution for $1.5\leq x\leq 2$, namely $\varphi$, it must be that $\lim_{n\rightarrow \infty}x_n = \varphi$.

Thus by iterated application of the function $f(x) = 1+\frac{1}{x}$ we may approximate $\varphi$. Continued iteration yields ever more accurate approximations.
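The iteration is easy to run as a numerical sanity check. A minimal sketch in Python (the iteration count and tolerance are my own choices):

```python
# Iterate f(x) = 1 + 1/x starting from x0 = 1; the iterates approach phi.
def f(x):
    return 1 + 1 / x

x = 1.0
for _ in range(50):  # the error shrinks by a factor of at most 4/9 per step
    x = f(x)

phi = (1 + 5 ** 0.5) / 2
print(x, abs(x - phi))  # the difference is at machine-precision level
```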

Let me know if I’ve made any mistakes; I haven’t given myself much proof-reading time before publishing (=

## Mathematical Proof that God EXISTS!!

Trigger Warning: This post takes the position that Santa Claus doesn’t exist.

I heard this one from a friend of a friend; it’s a good one. It goes like this:

Consider the logical connective associated with implication, sometimes called the material conditional. It captures the intuition behind statements of the form “if blah is true then bleep is true”. So if $a$ and $b$ are two truth values (either true or false) and we write the material conditional as “if $a$ then $b$“, then we can write out the four possible outputs:

1. “if false then false” is true (both are false, implication still makes sense).
2. “if false then true” is true (remember $a$ implies $b$, but $b$ can be true whether or not $a$ is).
3. “if true then false” is false ($a$ implies $b$, so this combination is not allowed).
4. “if true then true” is true ($a$ forces $b$ to be true).
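These four cases can be tabulated mechanically; a quick sketch in Python, with `implies` being my own helper name:

```python
from itertools import product

def implies(a, b):
    # Material conditional: false only in the case "if true then false".
    return (not a) or b

for a, b in product([False, True], repeat=2):
    print(f"if {a} then {b}: {implies(a, b)}")
```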

The material conditional evaluates to either true or false, depending on whether the truth values are consistent with the implication. Now we can consider $a$ to represent the proposition “God exists”. Let $b$ stand for “Santa Claus exists”. We can now ask ourselves, is $a$ true? Well, clearly $b$ is false. What about the implication “if $a$ then $b$“? I think everyone would agree that the existence of God in no way implies the existence of Santa, so we can say “if God exists then Santa exists” is false. We now check out the above list and observe that case 3 uniquely matches our situation; the whole implication is false and $b$ is false. From this we conclude that $a$ must be true, and therefore God exists.
QED

## Half Angle Substitution

Here’s a neat substitution for integration that I never learned in calculus subjects but found in a book. I also frequently forget the formulae, so whenever I want to use it I need to re-derive everything; hopefully writing it here will help me remember.

If $x$ is our integration variable, then we make the substitution $t = \tan(x/2)$. Using the trig identities, we can now get expressions for the trigonometric functions of $x$ in terms of $t$.

$\begin{array}{ccl}\sin x &=& 2\sin \frac{x}{2}\cos \frac{x}{2}\\ &=& 2\tan\frac{x}{2}\cos^2\frac{x}{2}\\ &=& \frac{2t}{\sec^2(x/2)}\\ &=& \frac{2t}{1+t^2}\end{array}$.

$\begin{array}{ccl}\cos x &=& 1 - 2\sin^2\frac{x}{2}\\ &=& 1 - 2\tan^2\frac{x}{2}\cos^2\frac{x}{2}\\ &=& \frac{1+t^2-2t^2}{1+t^2}\\ &=& \frac{1-t^2}{1+t^2}\end{array}$

$\begin{array}{ccl}\tan x &=& \sin x/\cos x\\ &=& \frac{2t}{1-t^2}\end{array}$

And of course $dt = \frac{1}{2}\sec^2\frac{x}{2}\,dx = \frac{1+t^2}{2}\,dx$, so $dx = \frac{2\,dt}{1+t^2}$.
Now we can work out things like $\int \csc x\, dx$, which is given by
$\int \csc x\, dx = \int \frac{1+t^2}{2t}\cdot\frac{2\,dt}{1+t^2} = \int \frac{dt}{t} = \ln\left|\tan\frac{x}{2}\right| + c$
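As a numerical sanity check on that antiderivative, we can compare a quadrature estimate of $\int_{\pi/6}^{\pi/2}\csc x\,dx$ with the difference of $\ln\left|\tan\frac{x}{2}\right|$ at the endpoints. A sketch in Python; the Simpson helper is my own:

```python
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule on [a, b] with n subintervals (n even).
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

a, b = math.pi / 6, math.pi / 2
numeric = simpson(lambda x: 1 / math.sin(x), a, b)
closed_form = math.log(math.tan(b / 2)) - math.log(math.tan(a / 2))
print(numeric, closed_form)  # the two values agree to many decimal places
```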

## Happy Belated Pi Day

I didn’t create this blog until after Pi day, so unfortunately I’m a bit late. I celebrate Pi day by deriving the same series expression for pi every year. I’m not sure who to attribute this particular method to, but I definitely learned it from some old calculus book once upon a time. It’s as simple as calculating $\int_0^1\frac{1}{x^2+1}dx$ two different ways and equating the results. First up, we can make the substitution $x = \tan\theta$ and simplify, yielding $\int_0^{\pi/4}d\theta = \frac{\pi}{4}$. On the other hand, if $|x|<1$ then $\sum_{n=0}^\infty(-1)^nx^{2n}$ converges (uniformly on all intervals $|x|\leq r < 1$) to $\frac{1}{x^2+1}$, and so we may proceed with

$\int_0^1\frac{1}{1+x^2}dx = \sum_{n=0}^\infty(-1)^n\int_0^1x^{2n}dx$ (justifying the interchange at the endpoint $x=1$ takes a little extra care, e.g. via Abel’s theorem). Each $\int_0^1x^{2n}dx = \frac{1}{2n+1}$, and so putting it all together we’re left with

$\pi = 4\sum_{n=0}^\infty \frac{(-1)^n}{2n+1} = 4(1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+...)$

And there it is!
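For the record, the partial sums converge quite slowly (the alternating-series error after $N$ terms is at most $\frac{4}{2N+1}$); a quick sketch:

```python
import math

def leibniz(n_terms):
    # Partial sum of 4 * (1 - 1/3 + 1/5 - 1/7 + ...)
    return 4 * sum((-1) ** n / (2 * n + 1) for n in range(n_terms))

for n in (10, 1000, 100000):
    print(n, leibniz(n), abs(leibniz(n) - math.pi))
```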

## Generating Function for Fibonacci Sequence

We seek a closed form for the power series $\sum_{n=0}^\infty F_nz^n$, where $F_0 = 1$, $F_1 = 1$ and $F_{n+2} = F_{n+1} + F_n$. First we note that the ratio test yields

$\lim_{n\rightarrow\infty}\frac{F_{n+1}}{F_n}|z| = \varphi |z|$, where $\varphi = \frac{1+\sqrt{5}}{2}$ is the golden ratio. Thus the series converges on the region of the complex plane where $|z| < \frac{1}{\varphi}$. Now let the power series be denoted by $F(z)$, and note that

$F(z) - zF(z) - z^2F(z) = F_0 + (F_1 - F_0)z + \sum_{n=0}^\infty (F_{n+2}-F_{n+1}-F_n)z^{n+2} = 1 + 0 + 0 = 1$

and so $F(z) = \frac{1}{1-z-z^2}$. It seems so unlikely, before you know better at least, that things like this can be done. Why should it be that the $n$th Fibonacci number is given by $\frac{1}{2\pi i}\oint\frac{1}{z^{n+1}(1-z-z^2)}dz$, the integral along some closed contour in the complex plane (around the origin, inside the disc of convergence) of a function that otherwise seems unrelated? That’s maths!
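We can also recover the Fibonacci numbers from $\frac{1}{1-z-z^2}$ by formal power-series division, i.e. solving $(1-z-z^2)\sum c_n z^n = 1$ for the coefficients one at a time. A sketch, with `series_coeffs` a name of my own:

```python
def series_coeffs(den, n_terms):
    # Coefficients of the power series 1/den(z), where den = [d0, d1, ...]
    # lists the denominator's polynomial coefficients (here d0 = 1).
    c = []
    for n in range(n_terms):
        s = 1 if n == 0 else 0
        s -= sum(den[k] * c[n - k] for k in range(1, min(n, len(den) - 1) + 1))
        c.append(s // den[0])
    return c

# 1/(1 - z - z^2) gives the Fibonacci numbers
print(series_coeffs([1, -1, -1], 10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```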

## On Complex vs Real Differentiability

Once upon a time I was confused about what really made a holomorphic function different from a function on $\mathbb{R}^2$ which is differentiable in the real sense. I mean obviously I knew how they were different but I wanted to put it in a context that was more meaningful for me, and after some thinking I thought the following and was satisfied:

Let’s cover some definitions; it’s likely I’ll write these classic definitions in more detail in later posts. Let $f:\mathbb{C}\rightarrow\mathbb{C}$ be a complex function and let $z_0$ be a complex number. $f$ is called (complex) differentiable at $z_0$ if the limit

$f'(z_0) = \lim_{z\rightarrow z_0}\frac{f(z) - f(z_0)}{z-z_0}$

exists, where the limit is taken along whatever path you like. The function $f$ is differentiable on a subset $A\subset\mathbb{C}$ if it is differentiable for all $z_0\in A$. Now a complex number can be written as $z = x+iy$, where $x,y\in\mathbb{R}$, and a complex function can be written as $f(z=x+iy) = u(x,y)+iv(x,y)$ where $u$ and $v$ are both functions from $\mathbb{R}^2$ to $\mathbb{R}$. As $\mathbb{R}$-vector spaces, $\mathbb{C}$ and $\mathbb{R}^2$ are isomorphic and we can think of $f$ as a function $f:\mathbb{R}^2\rightarrow\mathbb{R}^2$ given by

$(x,y)\mapsto (u(x,y),v(x,y))$.

Now it can be said that a complex function is complex differentiable if and only if it is differentiable in the real sense and $u$ and $v$ satisfy the Cauchy-Riemann equations ($\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}$ and $\frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}$). (A convenient sufficient condition for real differentiability is that all partial derivatives of $u$ and $v$ exist and are continuous.)

This observation is well known and does a good job of contrasting the two senses of differentiability, but I wanted to know the answer to “why is the derivative of the real version of the function a linear transformation (given by the Jacobian matrix) but the derivative of the complex version another complex function?” The answer to that is obvious in hindsight, but I was happy when I went from not knowing to knowing so I thought I better recount it here. The trick is, of course, that we construct a field isomorphism (which I’ll call $\varphi$) from $\mathbb{C}$ onto a subring of $\mathrm{M}_2(\mathbb{R})$ (the ring of $2\times 2$ matrices with real entries) given by

$\varphi(z) = \begin{pmatrix} \mathrm{Re}z & -\mathrm{Im}z \\ \mathrm{Im}z& \mathrm{Re}z \end{pmatrix}$.

Now taking the derivative of $f$ gives us a function $\mathrm{D}f:\mathbb{R}^2\rightarrow\mathrm{M}_2(\mathbb{R})$, but the Cauchy-Riemann equations mean that if $f$ is complex differentiable, then $\mathrm{D}f(\mathbb{R}^2) \subset \varphi(\mathbb{C})\subset\mathrm{M}_2(\mathbb{R})$. So in that case we can think of $\mathrm{D}f$ as a function from $\mathbb{R}^2\cong\mathbb{C}$ to $\varphi(\mathbb{C})\cong\mathbb{C}$, and so my question is resolved! The real version of the function is complex differentiable precisely when the Jacobian matrix can be identified with a complex function through the isomorphism $\varphi$.
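As a concrete check of this picture, take $f(z) = z^2$, so $u = x^2 - y^2$ and $v = 2xy$. A numerical sketch using finite differences (the `jacobian` helper is my own):

```python
def jacobian(u, v, x, y, h=1e-6):
    # Central finite differences for the 2x2 Jacobian of (u, v) at (x, y).
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    vx = (v(x + h, y) - v(x - h, y)) / (2 * h)
    vy = (v(x, y + h) - v(x, y - h)) / (2 * h)
    return ux, uy, vx, vy

# f(z) = z^2 has u = x^2 - y^2, v = 2xy, and f'(z) = 2z.
u = lambda x, y: x * x - y * y
v = lambda x, y: 2 * x * y
ux, uy, vx, vy = jacobian(u, v, 1.0, 2.0)
# Cauchy-Riemann: ux == vy and uy == -vx, so the Jacobian is
# phi(w) with w = f'(1 + 2i) = 2 + 4i.
print(ux, uy, vx, vy)
```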

## Group Therapy

The study of groups cleanses the soul. This post will define the concept of a group and show a few simple results, after this there will be several different directions to go in for future posts. Familiarity with sets and functions is assumed.

Definition of Group: A group is really a package of two things. It is a set, which we will denote by $G$, and a function mapping from $G\times G$ to $G$. For now we shall denote this function by $\alpha$ and call it the multiplication function. The set $G$ and the function $\alpha$ are together required to satisfy the so called Group axioms:

1. Associativity: For all $g,h,k\in G$, we have that $\alpha(g,\alpha(h,k)) = \alpha(\alpha(g,h),k)$.
2. Identity: There exists an element of $G$ called the identity, which we will usually but not always denote by $1_G$, with the property that for all $g\in G, \alpha(1_G,g) = \alpha(g,1_G) = g$.
3. Inverse: For each element $g\in G$, there exists another element called the inverse of $g$, denoted as $g^{-1}$, which satisfies $\alpha(g,g^{-1}) = \alpha(g^{-1},g) = 1_G$.

Sometimes one will encounter abelian groups. An abelian group satisfies the additional property that for all $g,h\in G, \alpha(g,h) = \alpha(h,g)$.

The first question to ask is, do any groups actually exist? Our first example of a group will be the infamous trivial group. Let our set $G$ contain only one element. The group axioms force us to choose this sole element to be the identity, i.e. $G = \{1_G\}$. Since $|G| = 1$ there is only one choice for $\alpha$, namely the function that maps $(1_G,1_G)$ to $1_G$. With these definitions the trivial group satisfies the group axioms.

A more complicated but still basic example is the unique group with two elements. Let $G$ be the set $\{0,1\}$ and let $\alpha$ be given by $\alpha(0,1) = \alpha(1,0) = 1$ and $\alpha(0,0) = \alpha(1,1) = 0$; this is just addition modulo $2$. In this case $0$ plays the role of the identity element.
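Since the set is finite, we can verify the group axioms for this example by brute force; a small sketch:

```python
from itertools import product

G = [0, 1]
# Multiplication table of the two-element group (addition modulo 2).
alpha = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

associative = all(alpha[g, alpha[h, k]] == alpha[alpha[g, h], k]
                  for g, h, k in product(G, repeat=3))
has_identity = all(alpha[0, g] == g and alpha[g, 0] == g for g in G)
has_inverses = all(any(alpha[g, h] == 0 and alpha[h, g] == 0 for h in G)
                   for g in G)
print(associative, has_identity, has_inverses)  # True True True
```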

In general, groups of all cardinalities exist. Our first basic result about groups is that in any group there can exist only one element satisfying the properties of the identity. To show this, let $(G,\alpha)$ be a group with identity $1_G$ and suppose there exists an element $e\in G$ also fulfilling the role of the identity element. By the properties of $1_G$ we have that $e = \alpha(e,1_G)$. Since $e$ also satisfies these same properties, it must be that $\alpha(e,1_G) = 1_G$ and thus $e = 1_G$.

In practice, rather than explicitly referring to the function $\alpha$ it is more common to simply write application of $\alpha$ as multiplication. For example, rather than $\alpha(g,h)$ we simply write $gh$ and use parentheses to indicate the order of application. Using this notation we can rewrite the axiom of associativity as requiring that $g(hk) = (gh)k$. This is a lot neater, though being explicitly aware of the function $\alpha$ is helpful for appreciating the non-triviality of associativity. It will also be useful to refer to the function explicitly when dealing with more than one group, each with its own multiplication function.

From the definition of a group we can go in several directions. There will be posts delving further into group theory itself, but this post will also serve as a foundation for studying more complex algebraic objects such as rings and fields. Stay tuned.