Mainly for referencing.
1. Derivation of the Gaussian distribution from the binomial distribution
P(k,n)=\frac{n!}{k!(n-k)!}p^kq^{n-k} as n \rightarrow \infty while p remains finite.
Taking the log: \ln P = \ln(n!) - \ln(k!) - \ln((n-k)!) + k\ln(p) + (n-k)\ln(q)
Using the Stirling approximation:
\ln(n!) \approx n\ln(n) - n + \frac{1}{2}\ln(2\pi n)
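As a quick numerical sanity check (a sketch, not part of the derivation; the value n = 100 is an arbitrary choice), the approximation can be compared against the exact \ln(n!) computed with Python's math.lgamma:

```python
import math

def stirling_ln_factorial(n):
    # Stirling: ln(n!) ~ n ln(n) - n + (1/2) ln(2 pi n)
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

n = 100
exact = math.lgamma(n + 1)     # exact ln(100!) via the log-gamma function
approx = stirling_ln_factorial(n)
print(exact - approx)          # absolute error is already below 1e-3 at n = 100
```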
\ln P \approx [n\ln(n) - n + \frac{1}{2}\ln(2\pi n)] - [k\ln(k) - k + \frac{1}{2}\ln(2\pi k)] - [(n-k)\ln(n-k) - (n-k) + \frac{1}{2}\ln(2\pi (n-k))] + k\ln(p) + (n-k)\ln(q)
(1) For the terms of order \ln(n), we evaluate at the mean value k = \langle k\rangle = np:
\frac{1}{2}\ln(2\pi n) - \frac{1}{2}\ln(2\pi k) - \frac{1}{2}\ln(2\pi (n-k)) \implies \frac{1}{2}\ln(2\pi n) - \frac{1}{2}\ln(2\pi np) - \frac{1}{2}\ln(2\pi nq) \implies -\frac{1}{2}\ln(2\pi npq),
which is \ln\left(\frac{1}{\sqrt{2\pi \sigma^2}}\right), where \sigma^2 = npq.
(2) For the terms k\ln(p) and (n-k)\ln(q), again we plug in k = \langle k\rangle = np:
k\ln(p) + (n-k)\ln(q) \implies np\ln(p) + nq\ln(q)
(3) For the terms n\ln(n), -k\ln(k) and -(n-k)\ln(n-k), we Taylor expand about k = np to second order:
n\ln(n) - k\ln(k) - (n-k)\ln(n-k) \implies -np\ln(p) - nq\ln(q) - \frac{1}{2npq}(k-np)^2
The constant terms cancel against the result of (2), leaving
-\frac{1}{2\sigma^2}(k-np)^2.
Thus from (1), (2) and (3):
\ln P = \ln\left(\frac{1}{\sqrt{2\pi \sigma^2}}\right) - \frac{1}{2\sigma^2}(k-np)^2
p(k) = \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}\left(\frac{k-\langle k\rangle}{\sigma}\right)^2}
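The agreement can be seen numerically. A small check (a sketch; the values of n, p, and k are arbitrary choices) comparing the exact binomial probability at the mean with the Gaussian density:

```python
import math

n, p = 1000, 0.3
q = 1 - p
sigma2 = n * p * q          # sigma^2 = npq

k = int(n * p)              # evaluate at the mean, k = <k> = np = 300
binom = math.comb(n, k) * p**k * q**(n - k)
gauss = math.exp(-(k - n * p)**2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)
print(binom, gauss)         # the two values agree closely for large n
```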
2. Derivation of the Poisson distribution from the binomial distribution
P(k,n)=\frac{n!}{k!(n-k)!}p^kq^{n-k}, with the assumptions that n \rightarrow \infty, p \ll 1, and np remains finite.
Let's define \mu = np. Thus p = \frac{\mu}{n} and q = 1 - \frac{\mu}{n}.
We can rewrite the binomial distribution:
p(k,n) = \frac{n!}{k!(n-k)!}\left(\frac{\mu}{n}\right)^k\left(1-\frac{\mu}{n}\right)^{n-k} \implies \frac{\mu^k}{k!}\,\frac{n!}{(n-k)!\,n^k}\left(1-\frac{\mu}{n}\right)^n\left(1-\frac{\mu}{n}\right)^{-k}
Now we take the limit of the distribution as n \rightarrow \infty by looking at each factor involving n:
(0) The factor \frac{\mu^k}{k!} does not involve n, so it is unaffected by the limit.
(1) \lim_{n\to \infty} \frac{n!}{(n-k)!\,n^k}: Since
\frac{n!}{(n-k)!\,n^k} = \frac{n(n-1)\dots(n-k+1)(n-k)(n-k-1)\dots 1}{(n-k)(n-k-1)\dots 1}\,\frac{1}{n^k} = \frac{n(n-1)\dots(n-k+1)}{n^k} = \frac{n}{n}\,\frac{n-1}{n}\dots\frac{n-k+1}{n},
each of the k factors goes to 1 as n \to \infty. Thus \lim_{n\to \infty} \frac{n!}{(n-k)!\,n^k} = 1.
(2) \lim_{n\to \infty} \left(1-\frac{\mu}{n}\right)^n: Using the definition of e, e = \lim_{x\to \infty} \left(1+\frac{1}{x}\right)^x, we let x = -\frac{n}{\mu}, so n = -\mu x. The limit of interest becomes
\lim_{|x|\to \infty} \left(1+\frac{1}{x}\right)^{-\mu x} = e^{-\mu}
(3) \lim_{n\to \infty} \left(1-\frac{\mu}{n}\right)^{-k} = 1^{-k} = 1
Thus
p(k) = \frac{\mu^k e^{-\mu}}{k!},
which is the Poisson distribution.
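A numerical check of the limit (a sketch; the values of \mu, n, and k are arbitrary choices): for large n with p = \mu/n, the binomial probability should be close to the Poisson one:

```python
import math

mu = 3.0
n = 100_000            # large n ...
p = mu / n             # ... with p small and np = mu held fixed

k = 5
binom = math.comb(n, k) * p**k * (1 - p)**(n - k)
poisson = mu**k * math.exp(-mu) / math.factorial(k)
print(binom, poisson)  # nearly identical for this large an n
```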
3. 2D random walk simulation in Python:
import math
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

def two_d_randomwalk(x, y):
    # Take one unit-length step in a uniformly random direction.
    theta = np.random.random() * 2 * math.pi
    x += math.cos(theta)
    y += math.sin(theta)
    return (x, y)

x, y = 0, 0
steps = 10000
a = np.zeros((steps, 2))
r = [0]
for i in range(steps):
    x, y = two_d_randomwalk(x, y)
    a[i] = x, y
    distance_sq = x**2 + y**2
    r.append(distance_sq)

# Color each segment by its step index so the time ordering is visible.
lc = LineCollection(list(zip(a[:-1], a[1:])), array=np.arange(steps - 1),
                    cmap=plt.cm.hsv)
fig, ax = plt.subplots(1, 1)
ax.add_collection(lc)
ax.margins(0.1)
plt.show()

fig = plt.figure()
ax = fig.add_subplot(111)
plt.xlabel('Number of steps')
plt.ylabel('Distance squared')
ax.plot(r)
plt.show()
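As a consistency check on the walk (a sketch using only the standard library; the steps and trials values are arbitrary choices), the squared end-to-end distance averaged over many independent walks should grow linearly with the number of steps, \langle r^2\rangle = N:

```python
import math
import random

def walk_r2(steps):
    # Run one 2D unit-step random walk and return the final squared distance.
    x = y = 0.0
    for _ in range(steps):
        theta = random.random() * 2 * math.pi
        x += math.cos(theta)
        y += math.sin(theta)
    return x * x + y * y

random.seed(0)   # fixed seed so the estimate is reproducible
steps, trials = 200, 2000
mean_r2 = sum(walk_r2(steps) for _ in range(trials)) / trials
print(mean_r2)   # should be close to steps (= 200)
```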