Wednesday, May 14, 2014

Estimating Sum of Riemann Zeros (2)

In my last blog entry, I mentioned how the simple formula n(n + 1)(log n – 1)/2 can be used to estimate two aggregate sums: one with respect to the factors of the composite numbers, and one with respect to the Riemann (Zeta 1) zeros.

I illustrated this approach with respect to the aggregate sum of factors showing how it is compiled for the composite numbers up to 10.

Once again, 4 has 2, 6 has 3, 8 has 3, 9 has 2 and 10 has 3 factors. So we obtain the sum (4 * 2) + (6 * 3) + (8 * 3) + (9 * 2) + (10 * 3) = 8 + 18 + 24 + 18 + 30 = 98.
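The compilation just described is easy to check by machine. A minimal Python sketch (the function names are my own; the factor convention, divisors excluding 1 but including the number itself, is the one used throughout these entries):

```python
def factor_count(n):
    # divisors of n excluding 1 but including n itself
    # (the convention used throughout these entries)
    return sum(1 for d in range(2, n + 1) if n % d == 0)

def aggregate_factor_sum(limit):
    # sum of n * (number of factors of n) over the composite numbers up to limit;
    # a prime has only the one factor (itself), so count > 1 marks a composite
    return sum(n * factor_count(n)
               for n in range(4, limit + 1) if factor_count(n) > 1)

# aggregate_factor_sum(10) gives 98, as compiled above
```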

I will now likewise illustrate up to n = 10, with respect to the corresponding aggregate of the Riemann zeros.

We have to remember that these zeros are measured on the imaginary scale up to t where n = t/2π. Thus we add the zeros up to t = 62.83, i.e. 14.13 + 20.02 + 25.01 + 30.42 + 32.94 + 37.59 + 40.92 + 43.33 + 48.01 + 49.77 + 52.97 + 56.45 + 59.35 + 60.83 (correct to 2 decimal places) = 571.74.

Then to express this result with respect to n, we divide by 2π to obtain 91 (to the nearest unit).
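For anyone wishing to verify the arithmetic, here is a short sketch using the 14 rounded zero values quoted above:

```python
import math

# imaginary parts of the first 14 non-trivial zeros (to 2 decimal places),
# i.e. all zeros up to t = 62.83 (= 10 * 2π)
zeros = [14.13, 20.02, 25.01, 30.42, 32.94, 37.59, 40.92, 43.33,
         48.01, 49.77, 52.97, 56.45, 59.35, 60.83]

total = sum(zeros)                 # 571.74
rescaled = total / (2 * math.pi)   # approximately 91
```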
    
Up to n    Acc. sum of factors (1)    Acc. sum of Riemann zeros (2)    Formula Est. (3)    (2)/(1) as %    (3)/(2) as %
 10             98                         91                               72                 92.86           79.12
 20            499                        493                              419                 98.80           84.99
 30           1355                       1234                             1117                 91.07           90.52
 40           2620                       2677                             2205                102.17           82.37
 50           4277                       4221                             3713                 98.69           87.96
 60           6459                       6370                             5663                 98.62           88.90
 70           9038                       8767                             8073                 97.00           92.08
 80          12073                      12200                            10956                101.05           89.80
 90          15947                      15858                            14322                 99.44           90.31
100          20367                      20133                            18206                 98.85           90.43
110          24608                      24958                            22591                101.42           90.52

In the above table, I show the results (in steps of 10) up to n = 110, for both the aggregate sum of factors of the composite nos. (col. 2) and the corresponding sum of Riemann zeros (col. 3), with values rescaled to n.

Then in col. 4, I show the estimated aggregate sum using the formula, i.e.{n(n + 1)(log n – 1)}/2.
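The formula in col. 4 can be evaluated directly. A minimal sketch (the function name is my own; log is the natural logarithm):

```python
import math

def estimate(n):
    # {n(n + 1)(log n - 1)}/2, with log the natural logarithm
    return n * (n + 1) * (math.log(n) - 1) / 2

# rounded, this reproduces col. 4 of the table: 72 at n = 10, 18206 at n = 100
```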

Then in col. 5, I show the ratio (as a %) of the sum of zeros to the actual sum of factor values.

These values compare very well indeed. Sometimes the factor value sum exceeds that of the sum of zeros (and likewise the sum of zeros may exceed the sum of factor values). However, even though we are still at a very early point on the n scale, the two sets of values have already converged very close to each other (in the region of 99% accuracy), as indicated in col. 5.

The formula estimate is not quite so accurate (with respect to both factor sums and zero aggregate sums). However it gives a consistent estimate (with % accuracy gradually improving) that already - as can be seen from the final column (col. 6) - is about 90% accurate.

Of course - as was the case for so long with the prime number theorem - I have not provided actual proofs of what is indicated by the data.

However I would confidently assert that the actual sums (with respect to both the factors and zero sum aggregates) would eventually approach 100% accuracy (in relative terms).

Likewise with only slightly less confidence, I would expect that the formula estimate (at high n) would also approach 100% accuracy (in relative terms) with respect to the prediction of both aggregates.

Friday, May 9, 2014

Estimating Sum Of Riemann Zeros

In earlier blog entries, "Simple Estimate of Frequency of Riemann Zeros 1" and "Simple Estimate of Frequency of Riemann Zeros 2", I demonstrated how the frequency of the non-trivial (Zeta 1) zeros is closely related to the accumulated sum of the number of factors contained in the composite numbers.

I then considered the possibility of a simple formula which would estimate the sum of these zeros (up to a given number on the imaginary scale).
Also given the strong link as between the non-trivial zeros and the factors of the composite numbers, this formula would equally apply to an important aggregate with respect to the factors of composites.

As I have stated before, the simple formula for estimating the accumulated frequency of factors of the composite numbers bears a complementary relationship with a similar type formula for the estimation of the frequency of primes (to a given number).

So n/(log n – 1) measures the frequency of primes (to n on the real scale), whereas n(log n – 1) measures the corresponding (accumulated) frequency of the factors of the composite numbers (to n on the same scale).
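This complementarity is easy to test numerically. The sketch below (helper names my own) compares n/(log n – 1) against an actual prime count obtained by a basic sieve, here at n = 1000:

```python
import math

def prime_count(n):
    # count the primes up to n with a basic sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sum(sieve)

n = 1000
actual = prime_count(n)            # 168 primes up to 1000
est = n / (math.log(n) - 1)        # approximately 169.3
```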

Now to adjust this latter formula for the estimation of the non-trivial zeros up to the point t (on the imaginary scale), we set n = t/2π.

So the frequency of non-trivial zeros (up to t) is thereby given as,

(t/2π){log(t/2π) – 1}

Now I have suggested the addition of 1 to this formula giving,

(t/2π){log(t/2π) – 1} + 1 as the recommended version.

This gives stunningly accurate estimates of the frequency of these zeros - not only in relative - but also in absolute terms. Even at the highest values on the imaginary scale in the tables provided by Andrew Odlyzko, estimates using this formula are frequently exactly correct in absolute terms (and generally are accurate to within 1 of the correct value).
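As a quick check of this recommended version (the function name is my own), the rounded estimates at t = 100 and t = 1000 agree exactly with the well-known counts of 29 and 649 zeros below those heights:

```python
import math

def zero_frequency(t):
    # (t/2π){log(t/2π) - 1} + 1
    x = t / (2 * math.pi)
    return x * (math.log(x) - 1) + 1

# rounded, this gives 29 at t = 100 and 649 at t = 1000,
# matching the actual counts of zeros below those heights
```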

So unlike the primes, which occur unpredictably, leading to a discontinuous jump in the step function representing their actual frequency, the non-trivial zeros lie at the other extreme with respect to the smoothing out of such irregular discrete behaviour. Not surprisingly therefore, the frequency of such zeros can be very accurately predicted through a continuous function (such as the one suggested here).

However though the zeros therefore represent an extreme in terms of the notion of continuous order, this can only be interpreted in a relative rather than absolute manner.
Thus we could equally draw, as with the primes, a step function to exactly represent (in absolute terms) the frequency of the non-trivial zeros. So with the occurrence of each new non-trivial zero, a discontinuous increase of 1 will take place. However, because these zeros are located as well as is possible to ensure complete order with respect to behaviour in the number system, we can thereby expect to estimate the actual occurrence of the non-trivial zeros to an extraordinary degree of accuracy.

Bearing this in mind, it is now possible to suggest the appropriate aggregate (with respect to the factors of the composite numbers) to which the sum of the non-trivial zeros should thereby correspond.

As we have seen the composite numbers (with factors) complement the primes (with no factors).

In this sense the composites veer towards the polar aspect of ordered behaviour (with respect to the number system) in contrast to the primes which veer towards the opposite pole of random behaviour.

However this ordered behaviour with respect to factors occurs in a somewhat discontinuous manner.

So we start with 1, 2 and 3 (with no true factors yet in evidence). Then we reach 4 (which immediately contains 2 factors). 5 is once again prime, and then we hit 6 (with 3 factors).

So the factors of the composite numbers always occur as immediate discontinuous multiple amounts (in striking contrast to the random behaviour of the primes).

So if we were to draw a step function for the composite numbers, we would encounter far more discontinuous steps on our journey. Furthermore, rather than all these steps being measured in single units (as with the primes), the heights of these steps vary considerably. So for example, whereas 10 has a step of height 3, 12 has a step of height 5. In other words, 10 has 3 factors (2, 5 and 10) whereas 12 has 5 factors (2, 3, 4, 6 and 12).

So the considerable task with respect to the number system is the reconciliation of both the random aspect of the primes, representing their individual behaviour and the ordered aspect of the composites, representing their collective behaviour (through common factors).

Now this vitally important task is provided for the number system as a whole through the non-trivial (Zeta 1) zeros.

As I have repeatedly stated we can only appreciate this properly in a relative dynamic interactive manner.

So each non-trivial zero represents therefore a point where the notion of random (independent) behaviour with respect to the number system is fully reconciled  - as is finitely possible in a relative approximate manner - with the corresponding notion of ordered (interdependent) behaviour.

I have also repeatedly stated that these two aspects are properly quantitative and qualitative with respect to each other.

It is certainly true (within isolated frames of reference) that both the primes and non-trivial zeros can individually be given a quantitative identity.

However just like the two turns at a crossroads are left and right with respect to each other, from a dynamic experiential context both the primes and the non-trivial zeros are quantitative and qualitative (and qualitative and quantitative) with respect to each other.

So the quantitative (analytic) appreciation of this relationship comes from (initially) interpreting the primes and non-trivial zeros within isolated frames of reference. However the true qualitative (holistic) appreciation of their collective interdependence only can come from a dynamic approach (viewing both poles in a truly complementary manner).


Thus the whole mindset of Conventional Mathematics at present is sadly unsuited to the proper appreciation of the relationship of the primes and the non-trivial zeros.
In formal terms such Mathematics takes place within a linear (1-dimensional) framework, where isolated frames of reference are interpreted analytically in a merely quantitative manner.

However just as we can view the relationship of the primes and the non-trivial zeros for the number system as a whole (in Type 1 terms) equally we can do this within each prime.

Here each prime is considered as a group of related members in an ordinal natural number fashion.

So once again for example, 3 as prime is thereby composed of 1st, 2nd and 3rd members.

We have here again the fascinating relationship of quantitative and qualitative notions. So we have 1st, 2nd and 3rd members (in ordinal terms) belonging to a group of 3 (in cardinal terms).

The Zeta 2 zeros here represent the corresponding roots of 1 (with the trivial root 1 excluded).

So the 3 roots 1, – .5 + .866i and – .5 – .866i , can each be given a relatively independent identity in quantitative terms.

However the collective identity of these roots is expressed through their sum = 0.

So this sum of roots strictly has no quantitative identity.

However once again (individual) independence and (collective) interdependence can only be appreciated in a relative, rather than absolute manner. So even in demonstrating the collective interdependence of the roots we must arbitrarily fix one position (i.e. the 1st) in an independent manner before considering subsequent interdependent relationships.

So coming back to our original task of estimating the sum of the Riemann (Zeta 1) zeros, the non-trivial zeros represent a certain smoothing out with respect to the composite factors.

Therefore instead of two factors at the same number point (as with, for example, 4), we have two points chosen at other unique locations.

So with respect to the real scale, corresponding to the 2 points at 4, we have the first two non-trivial zeros corresponding to 2.249... (i.e. 14.134725/2π) and 3.346... (i.e. 21.022040/2π).

Now again, the reason why these non-trivial zeros lie on an imaginary line, is because of their true holistic quality. And in holistic mathematical terms, the imaginary notion relates to the indirect attempt to represent such holistic meaning in a linear analytic manner.

So once again in direct terms, the Riemann zeros should be interpreted in a holistic (i.e. dynamically interactive) manner for their proper comprehension.

Therefore if we sum all the non-trivial zeros (divided by 2π) up to n on the real scale, this should correspond to the sum of each composite number multiplied by its corresponding number of factors (also up to n).

Therefore the sum of non-trivial zeros up to 10 * 2π on the imaginary scale (where the total is divided by 2π) should then correspond approximately to the sum of the composite numbers (each multiplied by its number of factors) up to 10,

i.e. (4 * 2) + (6 * 3) + (8 * 3) + (9 * 2) + (10 * 3) = 8 + 18 + 24 + 18 + 30  = 98.

The corresponding sum of non-trivial zeros (adjusted to the real scale) = 91. So this already provides a fairly good approximation.

The simple formula that I then suggest to approximate both measurements is:

{n(n + 1)(log n – 1)}/2.

We will return to further consideration of this formula in a future entry.

Thursday, May 8, 2014

Complementary Views of Same Reality

In my last blog entry, I concluded with the fascinating observation that the same simple formula can be used as an estimate of what - initially - seem as unconnected areas.

Thus as we have seen the formula 2n(n + 1)/π can be used as an estimate of:

1) the accumulated sum of factors of the composite numbers (up to n);

2) the accumulated sum of the (reduced) value of all roots of 1 (up to n).

So once again in the first case, where for example n = 100, we sum the factors for each composite number (starting with the sum of factors of 4, which = 2 + 4 = 6), and then accumulate the overall sum for all composite numbers up to 100.

In the second case we obtain the individual t roots of 1 for t = 1, 2, 3, 4, .... 100, and then accumulate the total sum of the reduced values of these roots for each value of t.
Once again, in this reduced approach the 3 roots of 1 (i.e. 1, – .5 + .866i and – .5 – .866i) would be expressed as 1, .5 + .866 and .5 + .866, i.e. 1, 1.366 and 1.366 (correct to 3 decimal places) respectively. So we basically just concentrate on number magnitudes in a positive real manner (ignoring both negative and imaginary signs).
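This reduction is easy to carry out by machine. A minimal sketch (the function name is my own) reproduces the reduced values 1, 1.366 and 1.366 for the 3 roots of 1, with average 1.244:

```python
import cmath

def reduced_roots(t):
    # each of the t roots of 1 "reduced" to |real part| + |imaginary part|
    return [abs(z.real) + abs(z.imag)
            for z in (cmath.exp(2j * cmath.pi * k / t) for k in range(t))]

vals = sorted(reduced_roots(3))
# vals is approximately [1.0, 1.366, 1.366]; the average is approximately 1.244
```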

What is surprising here is that the first case of common factors is intimately associated with the Riemann (Zeta 1) zeros. Indeed we showed in earlier blog entries, how a surprisingly accurate estimate of the frequency of these zeros can be obtained through considering the aggregate total of the number of factors involved (up to n).

However the second case of the roots of 1, is intimately associated with the unrecognised (Zeta 2) zeros.
Indeed all the roots of 1 (except the trivial root 1, in each case) represent the non-trivial zeros (from this Zeta 2 perspective).

What this clearly suggests is that the Zeta 1 and Zeta 2 zeros respectively represent close complementary perspectives of the same underlying reality with respect to the number system. 

In other words, in both cases the zeros represent an indirect attempt to provide a numerical measurement of the interdependent nature of the number system.

As we have seen from an isolated analytic perspective, we can indeed attempt to view both the primes and natural numbers in independent terms as number quantities.

However the very nature of the relationship between the primes and  natural numbers (which can be viewed from complementary cardinal and ordinal perspectives) entails the qualitative notion of a synchronous form of interdependence that underlies the number system.  

Now in direct terms the appreciation of this notion of qualitative interdependence is of a holistic (rather than analytic) nature. However indirectly it can then be represented (from the two related perspectives) in a quantitative manner.

So the zeta zeros (Zeta 1 and Zeta 2) therefore serve as indirect quantitative measurements of the qualitative synchronous nature of the relationship as between the primes and the natural numbers (and the natural numbers and the  primes).

However, once again it is strictly futile to attempt to grasp this holistic feature of the number system in the conventional analytic manner.

The true test as to whether one can understand in the appropriate holistic fashion, stems from an enhanced ability to see all fundamental relationships in an inherently dynamic interactive manner (as complementary pairings of opposite poles).
Indeed the truly remarkable conclusion is that the Zeta 1 and Zeta 2 zeros in fact are providing fundamentally the same information regarding the qualitative interdependent nature of the number system.  However because this information arises from varying perspectives it has indeed the appearance of being different!

In an earlier blog entry, "Zeta 2 Formulation of the Euler Product", I showed how the famous Euler Product can equally be expressed in terms of the Zeta 1 and Zeta 2 Functions.
This indicated therefore that the Zeta 1 and Zeta 2 Functions simply represent two complementary ways of looking at the same reality. So likewise the Zeta 1 and Zeta 2 zeros represent two complementary ways of indirectly measuring the holistic (interdependent) aspect of the number system.

One interesting  implication of this finding is, that just as the Zeta 1 zeros can be used to correct the deviations associated with predicting the frequency of primes to a given number, the Zeta 2 zeros in principle can be used to achieve the same result (from an alternative perspective).

Wednesday, May 7, 2014

Important Connections!

We initially used the accumulated number of factors contained in the composite numbers to give a very close estimate of the frequency of the Riemann (Zeta 1) zeros.

This was based on the simple formula n(log n – 1). Again a delightful complementarity was in evidence here with the inverse version of this formula i.e. n/(log n – 1) giving a surprisingly accurate measurement of the corresponding frequency of primes (up to n).

Once again true appreciation of the complementary dynamics involved here requires appropriate holistic mathematical appreciation.

In earlier blogs I highlighted a remarkable fact in relation to the reciprocal of a number.

For example 4 in cardinal (Type 1) terms is more fully represented as 4^1.

What this implies is that the conventional (analytic) quantitative interpretation of a number entails understanding with respect to the default 1st dimension (where experiential polar opposites such as objective and subjective are absolutely separated from each other).

The reciprocal of 4 (i.e. 1/4) can be represented as 4^(– 1).

This implies in holistic terms the direct negation of linear type conscious rational understanding in the generation of unconscious intuition.

Thus the important point to grasp is that the very dynamics by which one is enabled to move from whole to part (and part to whole) in experience requires holistic intuition.

Now once again when the switch is made and we obtain 1/4, this is then quickly interpreted in the standard 1-dimensional terms (i.e. in a linear rational fashion).

So 1/4 is now more fully represented as (1/4)^1.

Then again in a reverse manner, to switch from this part to the corresponding whole notion of number, we obtain (1/4)^(– 1).
Once more this dynamic switch in experience entails the negation of linear type understanding in the generation of holistic intuition.

However when the switch is made, the result is quickly reduced in a linear rational manner as 4, i.e. 4^1.

So the important point is that the very means by which one is enabled to switch from whole to part (and in reverse manner, part to whole) notions in experience implies the generation of (unconscious) intuition.

And this can be generalised with respect to switching as between the fundamental polarities that necessarily condition all experience (including of course mathematical).

So rather than Mathematics being absolute (in rational terms), properly understood, mathematical understanding is of a dynamic relative nature, entailing both quantitative (analytic) and qualitative (holistic) type appreciation. This equally implies that all mathematical understanding properly entails the dynamic interaction of both conscious (rational) and unconscious (intuitive) modes of appreciation.

Now, I have consistently remarked how the primes and Riemann (Zeta 1) zeros are of a complementary nature i.e. analytic and holistic with respect to each other.

Thus when the primes are understood in individual  terms as quantitative, the Riemann zeros then should be appropriately understood in collective terms as qualitative (i.e. as expressing the interdependent nature of the primes).
Likewise, in reverse manner, when the Riemann zeros are understood in individual terms as quantitative, the primes should then be appropriately understood in collective terms as qualitative (i.e. in their overall relationship with the natural numbers).

So this fact is beautifully demonstrated by the very formulae used here to estimate the frequency of primes and Riemann (non-trivial) zeros respectively.

Thus in moving from the frequency of Riemann zeros i.e. n(log n – 1)  to the corresponding frequency of primes we simply use the reciprocal of (log n – 1).
And then once again, to move from the frequency of primes to the corresponding frequency of the Riemann zeros, we simply revert back to the original number by once again taking the reciprocal!


Of course besides the Zeta 1 zeros we also have the (unrecognised) Zeta 2 zeros.

These arise simply as the finite solutions to the equation,

ζ2(s) = 1 + s + s^2 + s^3 + …. + s^(t – 1) = 0.

Put another way, it provides the t – 1 non-trivial roots of the t roots of 1.

As 1 is always a root of unity, in this sense it is a trivial root. However the other (non-trivial) roots are unique for all prime numbers.

So for example if we wish to calculate the 2 non-trivial roots (representing the Zeta 2 zeros) of the 3 roots of 1, we solve for,

1 + s + s^2 = 0.

These two roots (correct to 3 decimal places) are – .5 + .866i and – .5 – .866i respectively.

Now the demonstration of the holistic interdependence associated with these zeros requires their incorporation with the default (trivial) root of 1.

So the sum of 1, – .5 + .866i and – .5 – .866i = 0 and indeed this result universally applies with respect to the sum of the t roots of 1!
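Both facts, that the non-trivial roots satisfy 1 + s + s^2 = 0 and that the full sum of the t roots of 1 is always 0, can be verified directly:

```python
import cmath

t = 3
roots = [cmath.exp(2j * cmath.pi * k / t) for k in range(t)]

# the two non-trivial roots each satisfy 1 + s + s^2 = 0
for s in roots:
    if abs(s - 1) > 1e-9:
        assert abs(1 + s + s * s) < 1e-9

# the sum of the t roots of 1 is always 0 (for any t > 1)
assert abs(sum(roots)) < 1e-9
```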

However it was in the attempt to give a reduced quantitative expression to such (qualitative) interdependence that I considered an alternative method of measurement.

In this reduced form of measurement all magnitudes are considered in a positive real manner (with both negative and imaginary signs ignored).

Therefore in this modified approach, the sum of the 3 roots of 1 (i.e. the sum of the 2 non-trivial and 1 trivial Zeta 2 zeros)

= 1 + .5 + .866 + .5 + .866 = 3.732.

The average of the 3 roots = 1.244.

I found that the average value of all roots quickly converged towards 4/π (i.e. 1.273...) with both cos and sin parts converging towards 2/π  respectively.

So we could then consider adding up all the (reduced) roots for each number, and then summing these totals for each number up to n!
So for example if n = 100, this would thereby entail multiplying 4/π by (1 + 2 + 3 + ....100) i.e. by n(n + 1)/2.

Thus 4/π  * {n(n + 1)/2} = 2n(n + 1)/π.
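The convergence of the average reduced root towards 4/π, and the resulting estimate 2n(n + 1)/π, can be sketched as follows (function names my own):

```python
import cmath
import math

def average_reduced_root(t):
    # average of |cos| + |sin| taken over the t roots of 1
    total = sum(abs(z.real) + abs(z.imag)
                for z in (cmath.exp(2j * cmath.pi * k / t) for k in range(t)))
    return total / t

def estimate(n):
    # 4/pi * n(n + 1)/2 = 2n(n + 1)/pi
    return 2 * n * (n + 1) / math.pi

# average_reduced_root(500) is already very close to 4/pi = 1.2732...
# estimate(10) rounds to 70 and estimate(100) to 6430
```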

This however is exactly the same formula we used to estimate the cumulative sum of factors up to n!

Thus there seems to be an important connection between the two approaches which we will explore in the next blog entry.

Estimating the Total Sum of Composite Factors

In earlier blog entries, see "Simple Estimate of Frequency of Prime Numbers 1" and "Simple Estimate of Frequency of Prime Numbers 2", I sought to demonstrate how the frequency of the Riemann (non-trivial) zeros is closely related to the factors of the composite numbers.

I also sought to highlight the dynamic complementary nature of these findings. The composite numbers represent the interdependent aspect of number (i.e. as being composed of constituent factors). The primes by contrast represent the independent aspect (i.e. in containing no constituent factors other than themselves and 1).

So the non-trivial zeros in being directly related to the factors of the composite numbers thereby represent the complementary shadow of the primes.

There is a direct link here also with psychological experience where both conscious and unconscious serve as dynamic complements of each other. So in Jungian terms the personal shadow projected by the unconscious, represents the unrecognised unconscious aspect (which properly complements conscious type understanding).

Thus we cannot hope to properly understand the relationship between the primes and the non-trivial zeros without equal recognition of the need to balance both conscious (analytic) and unconscious (holistic) type appreciation in mathematical understanding.

And as I have repeatedly stated in these blog entries, Conventional Mathematics - being based formally on merely conscious analytic type notions - remains in total denial of its unconscious shadow.

Thus the attempt is made to understand both the primes and the zeros in a merely absolute quantitative type manner, when in effect the relationship between them is of a dynamic relative nature being analytic (quantitative) and holistic (qualitative) in relationship to each other.

Thus when we view the primes from an analytic perspective in terms of their individual (quantitative) identity, the non-trivial zeros then collectively represent the holistic complement to the primes in qualitative terms.
Equally, from the alternative perspective, when we view each non-trivial zero from an individual (quantitative) perspective, the prime numbers as a collective group thereby represent the holistic complement to the zeros in qualitative terms.

Therefore depending on perspective, which in dynamic interactive terms  is always with respect to complementary (opposite) aspects of understanding, both the primes and the non-trivial zeros can be given both an analytic (quantitative) and holistic (qualitative) interpretation.

However these crucial relationships are rendered strictly meaningless when we attempt to view all number relationships in a merely analytic (quantitative) manner!

Now again in measuring the frequency of the non-trivial zeros, I was at pains to measure the number of factors contained by the composite numbers. And the rationale behind this approach was to include the number itself (when composite) as a factor while excluding the number 1.

So once again, from this perspective, the number 12 for example would contain 5 constituent factors (i.e. 2, 3, 4, 6 and 12).

However having completed these two blog entries, I then started to consider the related problem of finding an estimate for the accumulated sum of factors of the composite numbers.

So to illustrate this more clearly, I will demonstrate how this sum is arrived at with respect to the first 10 numbers.
Now four of these, i.e. 2, 3, 5 and 7, are prime (which we can ignore), as indeed we can the number 1.
The factors of 4 are 2 and 4 with the sum = 6.
The factors of 6 are 2, 3 and 6 with the sum = 11.
The factors of 8 are 2, 4 and 8 with the sum = 14.
The factors of 9 are 3 and 9 with the sum = 12.
Finally, the factors of 10 are 2, 5 and 10 with the sum = 17.

So the accumulated sum of factors up to 10 = 6 + 11 + 14 + 12 + 17 = 60.
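The accumulation just illustrated can be checked by machine. A minimal sketch (function names my own), using the same factor convention of excluding 1 but including the number itself:

```python
def factor_sum(n):
    # sum of the divisors of n, excluding 1 but including n itself
    return sum(d for d in range(2, n + 1) if n % d == 0)

def accumulated_factor_sum(limit):
    # accumulate over the composite numbers only;
    # a prime p has factor_sum(p) = p, so factor_sum(n) > n marks a composite
    return sum(factor_sum(n) for n in range(4, limit + 1) if factor_sum(n) > n)

# accumulated_factor_sum(10) gives 6 + 11 + 14 + 12 + 17 = 60
```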

Now I continued on calculating these actual accumulated sums of factors up to 110.

I then came up with a simple formula, 2n(n + 1)/π, which aims to give a consistently close estimate of the actual values.

Up to n    Acc. Factor Total    Est. Total    % Accuracy
 10              60                  70        85.71 (over est.)
 20             242                 267        90.64 (over est.)
 30             609                 592        97.21 (under est.)
 40            1111                1044        93.97 (under est.)
 50            1706                1623        95.13 (under est.)
 60            2618                2330        89.00 (under est.)
 70            3428                3164        92.30 (under est.)
 80            4439                4125        92.93 (under est.)
 90            5653                5214        92.24 (under est.)
100            7112                6430        90.41 (under est.)
110            8382                7773        92.73 (under est.)
   
The accuracy here is far from stunning. However, as we ascend the number scale, it consistently seems to be predicting at over 90% accuracy, even though we are still at a very low point on the scale. Unfortunately it becomes progressively more difficult to manually calculate the sums of factors of the composite numbers as these increase. Also we would expect considerable local variations, as the sum of factors of even just one highly composite number can make a considerable contribution to the overall total.

However there is a distinct rationale as to how this formula was arrived at!

n(n + 1)/2 is the formula for the sum of natural numbers from 1 to n.

So for example the sum of 1 to 100 = 100(101)/2 = 5050.

Now if we multiply n(n + 1)/2 by 4/π , we get 2n(n + 1)/π, which is the formula I have used to estimate the accumulated sum of factors (of composite numbers).

So again the estimate for this accumulated sum up to 100 is 6430 (as against the actual total of 7112).

And 4/π, which is the link between the two formulae has a special significance from a holistic mathematical perspective.

Consider the following simple geometrical diagram!
 So we have here a circle inscribed in a square.

Now if we measure the perimeter of the square and then divide by the circumference of the circle the answer = 4/π.

Alternatively, if we take the area of the square and divide by the area of the circle, again the answer = 4/π.

Now, from a qualitative holistic perspective, this relationship is deeply symbolic of the intersection of linear (quantitative) with circular (qualitative) understanding i.e. where notions of (individual) independence and (collective) interdependence coincide.

We will explore the deeper relevance of this connection in the next blog entry.

Tuesday, May 6, 2014

Randomness and Order Among the Primes

It is often stated that, rather like the tosses of an unbiased coin, the prime numbers are distributed in a fully random manner among the natural numbers.
However this clearly is not the case.

The random tossing of a coin implies independent events, so that the outcome of any toss is uninfluenced by what went before. So the probability of H or T on each toss = .5.

However though it is indeed customary to refer to the primes in independent terms as  "the building blocks" of the natural number system, such independence is of a merely relative nature that needs careful qualification.

There is indeed from one perspective, a valid (i.e. cardinal Type 1) sense in which the primes can be viewed as independent. However what is not properly recognised is that when viewed from the equally valid (i.e. ordinal Type 2) perspective, each prime number can be viewed as a unique interdependent group of related members.

Therefore from the dynamic interactive perspective - which is the appropriate way of viewing the matter - the number system is characterised necessarily by the complementary notions of independence and interdependence respectively.

In other words the number system is defined by a delicate dynamic balance as between notions of randomness and order.

We can perhaps illustrate better the limitation of the notion of randomness as applied to primes with the following example.

Imagine an estate of 100 houses, numbered from 1 to 100, with the owners entered into a raffle where 25 prizes are on offer.

So from the truly random perspective, where each house is equally likely to be chosen (and where all houses are included in each draw) there is a chance of 1 in 4 (or probability of 1/4) of an owner winning a prize.

Therefore, if for example no. 23 is the first number drawn and I am the owner of house no. 24, I still have an equal chance with everyone else of being successful in subsequent draws.

However imagine a scenario where the 25 winners are chosen on the basis of the 25 prime numbers from 1 to 100.

Clearly therefore in this scenario, if I live in an even numbered house (other than no. 2), I have no chance of being successful in the draw.

So prime numbers are not truly random in this sense as - again apart from 2 - all even numbers are excluded from consideration. Also, in a base 10 system, a number ending in 5 (apart from 5 itself) is divisible by 5 and so cannot be prime.


So the randomness of prime numbers is clearly of a qualified relative nature.

The most accurate way of stating the issue is that the individual primes are as random as possible, consistent with the equally important requirement that their overall collective relationship with the natural numbers be as ordered as possible.

And both of these characteristics - randomness and order - relate to the two equally important aspects of independence and interdependence respectively.

Unfortunately, the very paradigm that characterises conventional mathematical understanding is fundamentally unsuited to grasping the dynamic relative nature of the number system.

Because of its inherently absolute nature, it is thereby biased towards an over-emphasis on the independent aspect of numbers. So notions of order and relatedness can only be dealt with in conventional mathematical terms in a reduced manner.

Thus from a proper dynamic perspective, independence (randomness) and interdependence (order) are complementary opposite notions with a merely relative validity.

So again, one could validly maintain in dynamic relative terms that, from the cardinal (Type 1) perspective, the individual primes are distributed as randomly as is consistent with their overall collective order with respect to the natural numbers; equally, we could say that the primes are as collectively ordered as is consistent with their unique individual identities.
Indeed this points once again to the central mystery of the number system: the marvelous manner in which both the quantitative aspect of independence and the qualitative aspect of interdependence, though uniquely distinct, can yet be consistently reconciled with each other.

From another equivalent dynamic perspective, the random nature of the individual primes is replicated (as their shadow identity) by the ordered collective nature of the non-trivial (Zeta 1) zeros.

Then from the opposite complementary perspective, the ordered nature of the overall collective set of primes is replicated (again as their shadow identity) by the individual nature of each non-trivial (Zeta 1) zero.

However we equally - and of course in a balanced approach - should also look at the relationship between the primes and natural numbers from the alternative ordinal (Type 2) perspective.
Here, each prime number is viewed as a collective group of related, ordinally ranked members. So the prime no. 3 is thereby composed of the set of its 1st, 2nd and 3rd members.

Here the random nature of each prime (representing an internal grouping of members) is replicated (as their shadow identity) by the ordered collective nature of the Zeta 2 zeros.

Thus, for example, the sum of the 3 cube roots of 1 (representing the Zeta 2 zeros for the prime number 3) = 0,

i.e. the sum of 1, – .5 + .866i and – .5 – .866i = 0.
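This cancellation can be checked numerically, and the same holds for the p roots of unity of any prime p. A short sketch (my own illustration; the function name is not from the text):

```python
import cmath

def roots_of_unity(p):
    """The p complex p-th roots of 1, i.e. e^(2*pi*i*k/p) for k = 0..p-1."""
    return [cmath.exp(2j * cmath.pi * k / p) for k in range(p)]

# For p = 3 the roots are 1, -0.5 + 0.866i and -0.5 - 0.866i.
for p in (3, 5, 7):
    total = sum(roots_of_unity(p))
    print(p, abs(total) < 1e-9)  # True: the p roots cancel to 0
```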

Now strictly speaking, one of these roots (i.e. 1) is trivial in the sense that it is not unique, being in fact inseparable from the cardinal notion of 1.

So this in fact illustrates my very point that notions of independence and interdependence are of a merely relative nature.

We can only deal with interdependent notions with reference to a fixed notion (that is independent).

Likewise we can only deal with independent notions against the background assumption of overall order with respect to the number system (that assumes interdependence).

However from the Type 2 perspective, the Zeta 2 zeros do also possess an individual ordinal identity, while once again the collection of primes (now representing groups of members) possesses an ordered collective identity.

(I illustrated this Type 2 collective identity of the primes recently in "Alternative Approach to Frequency of Primes").


When one understands this complementary two-way relationship with respect to the random (independent) and ordered (interdependent) nature of the primes within the number system, and appreciates it in both Type 1 (cardinal) and Type 2 (ordinal) terms, it quickly becomes apparent that the ultimate relationship of the primes to the natural numbers is one of pure interdependence (in an absolute ineffable manner).

The illusion of some definite causal relationship between the two (i.e. primes and natural numbers) stems from attempting to view that relationship in a limited partial context (where dynamic complementarity does not operate).

We can indeed from a partial context trace the relationship between the primes and the natural numbers in both Type 1 (cardinal) and Type 2 (ordinal) terms.

However, though left (E) and right (W) turns at a crossroads may indeed appear unambiguous when approached from just one direction (either North or South), once both North and South are recognised as complementary directions, left and right turns are rendered paradoxical (with a merely arbitrary identity in any limited defined context).

In this important sense, the relationship between the primes and the natural numbers is exactly analogous.

Wednesday, April 30, 2014

The Limits of Conventional Logic

Marcus du Sautoy, in "The Music of the Primes", refers to the Continuum Hypothesis, which was the first on Hilbert's famous list of 23 great unsolved mathematical problems.

In 1963, Paul Cohen proved that this represented one of Gödel's undecidable propositions. In other words, on the basis of the accepted mathematical axioms it was not possible to prove (or disprove) the proposition that another infinite set of numbers exists between - as it were - the rational fractions and the real numbers.

In fact Cohen was able to construct two axiomatic worlds where the proposition could be proven true in one and false in the other!

Du Sautoy then goes on to argue that the Riemann Hypothesis (8th on Hilbert's list) is distinct from the Continuum Hypothesis in a very important sense.

So according to du Sautoy, if the Riemann Hypothesis is undecidable then two possible outcomes exist:
(1) it is true and we can't prove it, or (2) it is false and we can't prove it.
However, if it is false then there is a zero off the critical line which we can use to prove that it is false. So it can't be false without us being able to prove that it is false.

So therefore, according to this logic, the only way that the Riemann Hypothesis can be undecidable is that it is indeed true (without us being able to prove that it is true).

However, I would strongly question the validity of this use of logic. From my perspective, the Riemann Hypothesis - by its very nature - already transcends our accepted mathematical axioms.

Once again, I would see the Hypothesis as relating ultimately to an a priori assumption that both the quantitative (as independent) and qualitative aspects (as interdependent) of mathematical interpretation can be consistently combined with each other.

However the reduced nature of conventional mathematical proof implicitly assumes such consistency to start with. In other words the very use of its axioms thereby assumes the truth of the Riemann Hypothesis (as an a priori act of faith).

Therefore there is no way that such an a priori assumption can be either proven (or disproven) through the use of such axioms.

So du Sautoy maintains that if the Riemann Hypothesis is in fact false, that we can find a zero off the critical line (and thereby prove that it is false).

However, even if one were to accept in principle that a zero might lie off the critical line, this does not imply that we can thereby automatically show that it is off the line.

For example from one valid perspective, it could be so beyond the range of finite magnitudes that can conceivably be investigated that it would remain practically impossible to experimentally detect it!

However there is a much more crucial difficulty which du Sautoy has overlooked.

If my basic premise is correct that the Riemann Hypothesis is an a priori assumption that underlies the very consistency of the conventional axioms (and so cannot be proven or disproven through use of those axioms), then if the Hypothesis were indeed false we would no longer have a sufficient basis for trusting the ultimate consistency of any of our mathematical procedures.

Therefore if a zero was somehow to be experimentally verified as existing off the critical line, this would imply that the whole mathematical edifice is ultimately built on inconsistent premises.

Therefore in such a circumstance we could not use the subsequent emergence of a zero off the critical line to disprove the Riemann Hypothesis (in a conventional manner) as this assumes the inherent consistency of our mathematical approach.

Now, realistically I do not expect that a zero will ever be found off the line!
So acceptance of the Riemann Hypothesis is already built into our assumptions of how the mathematical world operates. However this acceptance strictly exists as an act of faith (rather than logic).
However this does imply that uncertainty is fundamentally an inherent part of Mathematics, with the possibility always remaining that this act of faith (in what is implied by the Riemann Hypothesis) is ultimately unwarranted.

There is also the interesting case of the Class Number Conjecture (referred to by du Sautoy).
In 1916, the German mathematician Erich Hecke succeeded in proving that if the Riemann Hypothesis is true, then Gauss's Class Number Conjecture is also true.
Later, three mathematicians - Max Deuring, Louis Mordell and Hans Heilbronn - succeeded in showing that if the Riemann Hypothesis is false, this too could be used to prove that the Class Number Conjecture is true.

The significance of this finding for me really points to the inadequacy of using conventional linear logic to interpret the nature of the Riemann Hypothesis.

Conventional Mathematics is 1-dimensional in nature, being based solely on quantitative notions of interpretation.

However viewed more comprehensively Mathematics properly entails the dynamic interaction in a relative manner of both quantitative (analytic) and qualitative (holistic) aspects of interpretation. And this is the truth to which the Riemann Hypothesis directly relates.
Therefore if we insist on just one pole of interpretation (in an absolute manner) then the very nature of the Hypothesis is rendered paradoxical.

Thus, if the quantitative aspect (as objective) is true in an absolute fashion, the qualitative aspect (as mental interpretation) is thereby rendered absolutely false; in turn, if the qualitative aspect (as interpretation) is absolutely true, then the quantitative aspect (as objective) is absolutely false.

Thus, in this restricted (linear) logical sense, the Riemann Hypothesis can be accepted as absolutely true and absolutely false at the same time.

Thus the "proof" of the Class Number Problem really depends on the acceptance that the Riemann Hypothesis transcends conventional notions of logical understanding.

However, strictly speaking, all mathematical proof is based on the same supposition (which itself cannot be proven or disproven).