## Wednesday, October 8, 2014

We have seen that the randomness of prime factors (with respect to a given large number) can be encapsulated through the Erdős–Kac theorem, whereby their distribution approaches the perfect bell curve associated with the normal distribution!

Using the accessible manner of description given in Wikipedia, the Erdős–Kac theorem states that if ω(n) is the number of distinct prime factors of n, then, loosely speaking, the probability distribution of
$\frac{\omega(n) - \log\log n}{\sqrt{\log\log n}}$
is the standard normal distribution.
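As a purely illustrative check (the limit and all numbers below are my own choices, not part of the theorem's statement), ω(n) can be sieved for every n up to a modest bound and the normalised counts tabulated; convergence to the bell curve is famously slow, since the scale is only log log n:

```python
import math

# Illustrative bound, chosen only for speed (my assumption, not from the theorem).
N = 200_000

# omega[k] = number of distinct prime factors of k, built with a sieve:
# whenever omega[p] is still 0, p must be prime, so bump all its multiples.
omega = [0] * (N + 1)
for p in range(2, N + 1):
    if omega[p] == 0:
        for m in range(p, N + 1, p):
            omega[m] += 1

# Normalise each count as in the theorem: (omega(n) - loglog n) / sqrt(loglog n).
zs = []
for n in range(3, N + 1):
    ll = math.log(math.log(n))
    if ll > 0:
        zs.append((omega[n] - ll) / math.sqrt(ll))

share = sum(1 for z in zs if abs(z) <= 1) / len(zs)
print(f"share of n with normalised omega within 1 s.d.: {share:.3f}")
```

At such a small bound the discrete distribution is still far from its Gaussian limit, so the share within one standard deviation need not closely match the asymptotic figure of roughly 68%.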

So what we are expressing here, with respect to the normal distribution, is the number of standard deviations by which the count of distinct prime factors of each number deviates from the mean.
For example, when n has 10,000 digits, each number would on average be composed of about 10 (distinct) prime factors. As the standard deviation (i.e. the square root of log log n) would now be slightly in excess of 3, this entails that for about 2/3 of such numbers the count of distinct prime factors would lie within 1 standard deviation of the mean (i.e. roughly between 7 and 13).
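The arithmetic behind this example can be checked directly, using natural logarithms throughout:

```python
import math

digits = 10_000
log_n = digits * math.log(10)   # log n for a 10,000-digit number: about 23,026
mean = math.log(log_n)          # log log n: about 10.04 distinct factors on average
sd = math.sqrt(mean)            # square root of log log n: about 3.17

# One standard deviation either side of the mean spans roughly 6.9 to 13.2.
print(round(mean, 2), round(sd, 2), round(mean - sd, 1), round(mean + sd, 1))
```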

It therefore struck me that, since we have a complementary notion of prime randomness (relating to the individual primes within the collective natural number system), it should be possible to demonstrate this randomness too through approximation to a normal distribution.

If we now let g(n) denote the gap between successive prime numbers (reserving ω(n) for its earlier meaning), the average (mean) gap is given as log n. Therefore, on the convenient assumption that the standard deviation is again taken as the square root of the mean (though I suspect that this may not be so), this would entail that

$\frac{g(n) - \log n}{\sqrt{\log n}}$

is likewise the standard normal distribution in this case. The important thing is that some appropriate expression for the standard deviation should exist, thus enabling the normal distribution to apply.

What we are expressing here, by contrast, is the number of standard deviations by which the gap between successive primes deviates from the average (mean) gap. And the normal distribution here is in keeping with the random nature of the individual primes (used to estimate such gaps).
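Purely as a numerical sketch (the limit below is my own choice, and the square root of log p stands in for the standard deviation only as the working assumption above), the proposed gap statistic can be tabulated for the primes below a given bound:

```python
import math

# Arbitrary illustrative bound, not taken from the discussion above.
N = 300_000

# Simple sieve of Eratosthenes.
is_prime = bytearray([1]) * (N + 1)
is_prime[0] = is_prime[1] = 0
for i in range(2, int(N ** 0.5) + 1):
    if is_prime[i]:
        is_prime[i * i :: i] = bytearray(len(range(i * i, N + 1, i)))

primes = [i for i in range(2, N + 1) if is_prime[i]]

# Normalised gap after each prime p: (gap - log p) / sqrt(log p),
# with sqrt(log p) as the assumed (not established) standard deviation.
zs = [
    (q - p - math.log(p)) / math.sqrt(math.log(p))
    for p, q in zip(primes, primes[1:])
]

mean_z = sum(zs) / len(zs)
print(f"mean normalised gap: {mean_z:.3f}")
```

The mean of this statistic stays close to 0, confirming that log p is the right centring on average; the code makes no claim about the shape of the resulting distribution, which is precisely the point left open above.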

Now, because this context is by its very nature inherently dynamic and relative, we always remain in the realm of approximation.

However in principle through sufficiently increasing the size of n, it would be possible to approximate ever more closely to the perfect bell curve associated with the normal (Gaussian) distribution.

It should also be possible to prove this result in a manner similar to the Erdős–Kac theorem, though it is not my intention to attempt to do so here.

However the key point to realise is the dynamic interactive context of both notions of prime randomness, so that the normal distributions (associated with such randomness) must be interpreted in a relative - rather than absolute - manner.

Thus from one perspective the (Type 1) individual primes are distributed as randomly as is possible, consistent with as much order as is possible with respect to the (Type 2) prime factors.

Equally from the complementary perspective the (Type 2) prime factors are distributed as randomly as possible consistent with as much order as is possible with respect to the (Type 1) primes.

Thus both the randomness and order with respect to both prime notions (in individual and collective terms) are mutually determined in a holistic synchronistic manner.

However this very interpretation, involving the complementarity of opposite frames of reference, requires a dynamic holistic means of interpretation, which formally remains totally absent from Conventional Mathematics.