Wednesday, April 30, 2014

The Limits of Conventional Logic

Marcus du Sautoy, in "The Music of the Primes", refers to the Continuum Hypothesis, which was the first problem on Hilbert's famous list of 23 great unsolved mathematical problems.

In 1963, Paul Cohen proved that this represented one of Gödel's undecidable propositions. In other words, on the basis of the accepted mathematical axioms, it is not possible to prove (or disprove) the proposition that another set of infinite numbers exists between - as it were - the rational fractions and the real numbers.

In fact Cohen was able to construct two axiomatic worlds where the proposition could be proven true in one and false in the other!

He then goes on to argue that the Riemann Hypothesis (8th on Hilbert's list) is distinct from the Continuum Hypothesis in a very important sense.

So according to du Sautoy, if the Riemann Hypothesis is undecidable then two possible outcomes exist:
(1) it is true and we can't prove it, or (2) it is false and we can't prove it.
However if it is false, there is a zero off the critical line which we could use to prove that it is false. So it can't be false without us being able to prove that it is false.
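
Schematically, and granting the key assumption that any zero off the critical line could in principle be exhibited, the argument runs as follows (my schematic formulation, not du Sautoy's own notation):

```latex
% Sketch of the argument: a zero off the line would itself
% constitute a disproof, so undecidability rules out falsity.
\begin{align*}
\neg\,\mathrm{RH} &\implies \exists\, s_0 :\ \zeta(s_0) = 0,\ \mathrm{Re}(s_0) \neq \tfrac{1}{2}\\
\text{exhibiting such an } s_0 &\implies \mathrm{RH}\ \text{provably false}\\
\mathrm{RH}\ \text{undecidable} &\implies \mathrm{RH}\ \text{true (but unprovable)}
\end{align*}
```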

So therefore, according to this logic, the only way that the Riemann Hypothesis can be undecidable is if it is indeed true (without us being able to prove that it is true).

However, I would strongly question the validity of this use of logic. From my perspective, the Riemann Hypothesis - by its very nature - already transcends our accepted mathematical axioms.

Once again, I would see the Hypothesis as relating ultimately to an a priori assumption that both the quantitative (as independent) and qualitative aspects (as interdependent) of mathematical interpretation can be consistently combined with each other.

However the reduced nature of conventional mathematical proof implicitly assumes such consistency to start with. In other words the very use of its axioms thereby assumes the truth of the Riemann Hypothesis (as an a priori act of faith).

Therefore there is no way that such an a priori assumption can be either proven (or disproven) through the use of such axioms.

So du Sautoy maintains that if the Riemann Hypothesis is in fact false, we can find a zero off the critical line (and thereby prove that it is false).

However even if one were to accept in principle that a zero might lie off the critical line, this does not imply that we can thereby automatically show that it is off the line.

For example, from one valid perspective, it could lie so far beyond the range of finite magnitudes that can conceivably be investigated that it would remain practically impossible to detect it experimentally!

However there is a much more crucial difficulty which du Sautoy has overlooked.

If my basic premise is correct - that the Riemann Hypothesis is an a priori assumption that underlies the very consistency of conventional axioms (and so cannot be proven or disproven through use of these axioms) - then, if the Hypothesis is indeed false, we no longer have a sufficient basis for trusting the ultimate consistency of any of our mathematical procedures.

Therefore if a zero was somehow to be experimentally verified as existing off the critical line, this would imply that the whole mathematical edifice is ultimately built on inconsistent premises.

Therefore in such a circumstance we could not use the subsequent emergence of a zero off the critical line to disprove the Riemann Hypothesis (in a conventional manner) as this assumes the inherent consistency of our mathematical approach.

Now, realistically, I do not expect that a zero will ever be found off the line!
So acceptance of the Riemann Hypothesis is already built into our assumptions of how the mathematical world operates. However this acceptance strictly exists as an act of faith (rather than logic).
This does imply that uncertainty is fundamentally an inherent part of Mathematics, with the possibility always remaining that this act of faith (in what is implied by the Riemann Hypothesis) is ultimately unwarranted.

There is also the interesting case of the Class Number Conjecture (referred to by du Sautoy).
In 1916, the German mathematician Erich Hecke succeeded in proving that if the Riemann Hypothesis was true, then Gauss's Class Number Conjecture was also true.
Later, three mathematicians - Max Deuring, Louis Mordell and Hans Heilbronn - succeeded in showing that if the Riemann Hypothesis is false, this could likewise be used to prove that the Class Number Conjecture was true.

The significance of this finding for me really points to the inadequacy of using conventional linear logic to interpret the nature of the Riemann Hypothesis.

Conventional Mathematics is 1-dimensional in nature, based solely on quantitative notions of interpretation.

However viewed more comprehensively Mathematics properly entails the dynamic interaction in a relative manner of both quantitative (analytic) and qualitative (holistic) aspects of interpretation. And this is the truth to which the Riemann Hypothesis directly relates.
Therefore if we insist on just one pole of interpretation (in an absolute manner) then the very nature of the Hypothesis is rendered paradoxical.

Thus if the quantitative aspect (as objective) is true in an absolute fashion, the qualitative aspect (as mental interpretation) is thereby rendered absolutely false; in turn, if the qualitative aspect (as interpretation) is absolutely true, then the quantitative aspect (as objective) is absolutely false.

Thus in this restricted (linear) logical sense, the Riemann Hypothesis can be accepted as absolutely true and absolutely false at the same time.

Thus the "proof" of the Class Number Problem really depends on the acceptance that the Riemann Hypothesis transcends conventional notions of logical understanding.

However, strictly, all mathematical proof is based on the same supposition (which itself cannot be proven or disproven).

Tuesday, April 29, 2014

Alternative Approach to Frequency of Primes

We have seen that there are two ways to look at prime numbers.

The conventional (Type 1) approach is to treat each prime number as a whole unit in cardinal terms.

So 2 represents a prime number (i.e. one prime unit) in this sense; 3 then represents another prime unit and so on.

However there is an alternative (Type 2) manner of looking at the primes where each prime number is viewed as constituting a group of related members in ordinal terms.

So 2 in this sense represents a prime that is made up of (unique) 1st and 2nd members; 3 represents a prime made up of unique 1st, 2nd and 3rd members, and so on.

Likewise the conventional manner of measuring the frequency of primes is based on the Type 1 approach.

Therefore in considering for example how many of the first hundred natural numbers are prime, we are treating the occurrence of each prime as a single unit.

And as we have seen, a simple and surprisingly accurate formula can be given for the frequency of primes from this perspective, i.e. n/(log n – 1).

However there is an alternative way (based on the type 2 perspective) of measuring the frequency of primes.
Here each prime is not treated as a single unit (in cardinal terms) but as a multiple group of related members.

So therefore 2 (as prime) is made up of 2 units, 3 made up of 3 units and so on.

Of course this is true of the natural numbers also.

Therefore, in considering the same example of the frequency of primes in the first 100 numbers, we basically need to sum the total of members for the prime numbers (contained up to 100),

i.e. 2 + 3 + 5 + 7 + ... + 97.

Now a simple formula that I suggest to obtain this result is based on the corresponding sum of the first n numbers divided by (log n – 1).

The formula for the sum of the first n numbers = n(n + 1)/2.

Therefore the corresponding formula for the sum of (Type 2) primes

= n(n + 1)/{2(log n – 1)}.

In the following table I present the estimated total of prime members (using this formula) as against the actual total.


| Up to | Actual No. | Estimated No. | % Accuracy |
|------:|-----------:|--------------:|-----------:|
| 100 | 1060 | 1401 | 75.66 |
| 200 | 4227 | 4676 | 90.40 |
| 300 | 8275 | 9599 | 86.21 |
| 400 | 13877 | 16067 | 86.37 |
| 500 | 21536 | 24019 | 89.66 |
| 600 | 29296 | 33408 | 87.69 |
| 700 | 38612 | 44199 | 87.36 |
| 800 | 49078 | 56363 | 87.07 |
| 900 | 61797 | 69876 | 88.44 |
| 1000 | 75127 | 84719 | 88.67 |
| 1100 | 91953 | 100873 | 91.57 |
| 1200 | 110728 | 118324 | 93.58 |

What is interesting is that it does not provide nearly as accurate predictions as the corresponding Type 1 approach (based on n/(log n – 1)).

However the relative accuracy does gradually increase overall, so that once we pass 1000 it seems to be consistently over 90%.
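
As a quick check, the figures in the above table can be reproduced with a short sketch (Python, with a basic sieve; the estimate and the accuracy ratio are as defined in this entry, and the function names are my own):

```python
from math import log

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def estimated_sum(n):
    """Suggested estimate of 2 + 3 + 5 + ... (primes up to n): n(n + 1)/{2(log n - 1)}."""
    return n * (n + 1) / (2 * (log(n) - 1))

for n in range(100, 1300, 100):
    actual = sum(primes_up_to(n))
    est = estimated_sum(n)
    print(n, actual, round(est), f"{100 * actual / est:.2f}%")
```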

Monday, April 28, 2014

Holistic Appreciation of the Riemann Zeros

I was reading in the past few days how Hugh Montgomery originally became interested in the Riemann zeros in an attempt to throw further light on the factorisation of numbers in imaginary quadratic fields.

Montgomery initially believed that the zeros should be randomly distributed (in the same manner as the primes).

So in studying the Riemann zeros that were postulated to lie on an imaginary line, he was seeking experimental evidence to back up his initial hunch regarding their random nature.

However to his surprise he quickly found that this was not the case and that the zeros in fact tended to repel each other. So the observational evidence was very much against the notion of randomly distributed zeros.

Eventually Montgomery formed a conjecture (still unproven) as to the actual manner of distribution of the zeros. This was ultimately to lead to a chance meeting with the physicist Freeman Dyson, who could readily appreciate the relevance of the same distribution for the behaviour of excited energy states at the sub-atomic quantum level.

So this was to lead to the important realisation of the hitherto unexpected connection as between the Riemann zeros and quantum physics.

However from a holistic perspective it is pretty obvious why the Riemann zeros would not indeed be randomly distributed.

The very notion of random distribution is based on the assumption of independent events.

So for example if we repeatedly toss an unbiased coin we would expect the resulting number of heads and tails to be randomly distributed as each toss could be viewed as independent of - and thereby uninfluenced by - all other tosses.

However from a dynamic interactive perspective, the behaviour of the Riemann (non-trivial) zeros is directly complementary with the behaviour of the primes.

So therefore just as each prime represents the independent extreme of the number system (where a number has no factors other than itself and 1), the Riemann zeros by contrast express the interdependent number extreme.

Indeed I recently used this very fact to show how the frequency of the zeros is intimately related to the common factors of the composite numbers!

Thus we would not expect - by their very nature - that the Riemann zeros would be randomly distributed.

Rather, somewhat like time series analysis in statistics, they represent the smoothing out of the independent behaviour of each individual prime, so that each individual occurrence can thereby be made fully compatible with the overall collective nature of the primes (where their interdependence with the natural number system is expressed).

Indeed an important clue as to this "smoothing" behaviour is given by the simple formula for estimation of the frequency of the primes.

As I have suggested an intimate complementary link exists as between the estimation of the frequency of primes and the common factors of the composite numbers.

So we can express the frequency of primes (up to n) as n/(log n – 1).

The corresponding frequency of the common factors of the composite numbers (up to n) is then given in a complementary manner as n(log n – 1).

Now the latter formula is directly linked with the formula for calculation of the frequency of non-trivial (Riemann) zeros.

However just as the primes as independent correspond with linear notions, the zeros (representing the corresponding collective interdependence of the primes) correspond with circular notions.

So we let n = t/2π

Then the formula for calculation of the frequency of the non-trivial zeros (up to t) on the imaginary line is given as

(t/2π){log (t/2π) – 1}.

Again the very fact that these zeros are postulated to lie on an imaginary line, directly suggests the notion of an interdependent identity for, from a holistic perspective, the imaginary notion represents an indirect manner of representing the notion of interdependence in a linear (i.e. independent) manner.   

Now what is remarkable about the formula for the calculation of the frequency of non-trivial zeros is that it is stunningly accurate (generally within 1, in absolute terms, of the correct answer).

This contrasts sharply with the corresponding accuracy of the calculation of the frequency of primes. Though this does indeed improve in relative terms as the value of n increases, the deviation from the actual frequency of primes likewise increases in absolute terms.
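
The accuracy is easy to test (a Python sketch; the comparison counts - 29 zeros up to t = 100 and 649 up to t = 1000 - are standard published figures):

```python
from math import log, pi

def zero_estimate(t):
    """Estimated number of non-trivial zeros up to t: (t/2pi){log(t/2pi) - 1}."""
    x = t / (2 * pi)
    return x * (log(x) - 1)

# Published counts of zeros for comparison.
for t, actual in [(100, 29), (1000, 649)]:
    print(t, actual, round(zero_estimate(t), 2))

# Note: the standard Riemann-von Mangoldt formula adds a further constant 7/8,
# which brings the estimate within a fraction of the true count.
```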

However the non-trivial zeros do likewise have an independent identity. So, behaviour here is complementary to that of the primes.

Once again each individual prime has an independent identity (from a quantitative cardinal perspective).
However the overall collective behaviour of the primes has an interdependent identity (in being intimately related to the natural number system).

By contrast, in reverse manner, each individual non-trivial zero has an interdependent identity in a qualitative holistic manner. However the collection of all non-trivial zeros has an independent identity in a quantitative fashion. This indeed is why the non-trivial zeros can collectively be used to eliminate the deviations arising in the general estimation of prime number frequency.

Sunday, April 27, 2014

More on Type 1 and Type 2 Conversions

We have seen in recent blog entries, how a number expressed with respect to its Type 1 aspect can then be converted in Type 2 terms.

In reverse we have likewise seen how a number expressed initially with respect to its Type 2 aspect, can be converted in a Type 1 manner.

As the Type 1 and Type 2 aspects relate directly to the cardinal (quantitative) and ordinal (qualitative) aspects of number respectively, such conversion as between the two aspects of the number system ultimately relates to the consistency as between cardinal and ordinal interpretation respectively.

Now as Conventional Mathematics is confined to a merely reduced quantitative interpretation of number, this key issue as to the consistency of both the cardinal and ordinal aspects of number does not even arise.

However, when appropriately understood in a dynamic interactive manner, it relates directly to the very nature of the Riemann Hypothesis, which in fact is the fundamental condition required to ensure the consistency of both cardinal and ordinal aspects.

What we have shown so far is that the conversion of a Type 1 natural number (on the real line) leads to a corresponding Type 2 number (on the imaginary line).

We have also shown that for the natural numbers, the Type 2 conversions have a negative value.
However the reciprocals of the natural numbers will also lie on the imaginary line (with a positive value).

Now an equally interesting pattern of conversion arises when we attempt to convert a Type 1 circular number (i.e. lying on the circle of unit radius in the complex plane) in a corresponding Type 2 fashion.

Now the simplest and perhaps most important example relates to – 1.

So we set (– 1)^1 = 1^x.

Therefore log (– 1) = x log 1,

i.e. iπ = x(2iπ).

Therefore x = 1/2.

So (– 1)^1 = 1^(1/2).

The next most important example relates to i.

So again i is a circular number (with respect to the Type 1 aspect of the number system).

So when we express i in Type 2 terms we get 1/4.

So i^1 = 1^(1/4).

In general terms, therefore, when we express a circular number (with respect to Type 1) in Type 2 terms, we get a linear fractional number (on the real line).

Alternatively, when we express a real linear fraction (with respect to the Type 2), this converts into a circular number (with respect to the Type 1).
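
These conversions are easy to check numerically (a short Python sketch, using the principal complex log and reading log 1 as 2iπ, as throughout these entries; the function names are my own):

```python
import cmath

def type2_exponent(z):
    """x such that z = 1^x, reading log 1 as 2i*pi (since e^(2i*pi) = 1)."""
    return cmath.log(z) / (2j * cmath.pi)

def type1_value(x):
    """Convert back: replace the base 1 by e^(2i*pi), so 1^x = e^(2i*pi*x)."""
    return cmath.exp(2j * cmath.pi * x)

print(type2_exponent(-1))   # (0.5+0j)  : (-1)^1 = 1^(1/2)
print(type2_exponent(1j))   # (0.25+0j) : i^1 = 1^(1/4)
print(type1_value(0.5))     # ~ -1 : the fraction 1/2 (as Type 2) converts back to the circle
```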

And this is all of crucial relevance with respect to the solutions of the Zeta 1 and Zeta 2 zeros respectively.

The Zeta 1 zeros all lie on the imaginary number line; the Zeta 2 zeros (for finite equations) then lie on the unit circle. Then for infinite equations the Zeta 2 solutions = 1/2.

So the requirement that the imaginary line for the Zeta 1 zeros goes through 1/2, is a simple consequence of the Zeta 2 infinite results.

Saturday, April 26, 2014

Where Addition and Multiplication Meet

I have stressed repeatedly in these blog entries that - properly understood - there are two aspects of the number system (Type 1 and Type 2 respectively) in dynamic interaction with each other.

In defining a number fully, we must define it with respect to both a base and dimensional aspect (where the dimensional aspect represents the power to which the number is raised).

Therefore with respect to n^x, n represents the base and x the dimensional aspect of the number respectively.

The Type 1 aspect is then always defined with respect to a fixed default dimensional value of 1.

So n^1 (where n can take on any base value) thereby represents a number expression defined with respect to its Type 1 aspect. This aspect is directly suited to the cardinal treatment of number.

The Type 2 aspect, in an inverse manner, is always defined with respect to a fixed base value of 1.

So 1^n (where n can now take on any dimensional value) represents a number expression defined with respect to its Type 2 aspect. This aspect is directly suited to the ordinal treatment of number.

The considerable issue then arises as to how to convert a number, initially defined with respect to the Type 1 aspect as n^1, indirectly in Type 2 terms.

So in general terms we set n^1 = 1^x (where x represents the Type 2 expression of the number).

Then, taking natural logs on both sides, log n = x(log 1).

Therefore x = log n/log 1 = log n/(2iπ) = – {log n/(2π)}i.

So therefore, for the simple example of 2 (i.e. where 2 is expressed in Type 1 terms as 2^1), its corresponding Type 2 expression is given as – {log 2/(2π)}i = – .1103178 i (correct to seven decimal places).

So, fully expressed in terms of the Type 1 and Type 2 aspects of the number system,

2 = 2^1 = 1^(– .1103178 i)

Now we can use these two aspects of the number system to illustrate precisely the relationship as between multiplication and addition respectively.

Put simply, multiplication in Type 1 format  is expressed as addition in terms of the corresponding Type 2 aspect.

So for example, in Type 1 format, 2 * 3 = 6. 

More precisely, 2^1 * 3^1 = 6^1.

However in terms of the Type 2 aspect,

2^1 = 1^(– {log 2/(2π)} i) = 1^(– .1103178 i)

3^1 = 1^(– {log 3/(2π)} i) = 1^(– .174850 i)

Then 2^1 * 3^1 (in Type 1 terms) = 1^(– (.1103178 + .174850) i) in corresponding Type 2 terms.

So when we multiply the two numbers (as base) in Type 1 terms, we add the two numbers (as dimensional powers) in the corresponding Type 2 manner.

Therefore 6^1 = 1^(– .285167 i)

Of course, we can equally apply this in reverse, so that what is addition (from the Type 2 perspective) is represented through multiplication (in corresponding Type 1 terms).


Now, to convert from the Type 2 aspect to its corresponding Type 1 expression, we simply replace the base number 1 (in the Type 2) with e^(2iπ) (as dictated by the famous Euler identity).

Therefore 1^(– .285167 i) = e^(2iπ * (– .285167 i)) = e^1.791757

= 6 (or 6^1 in a precise Type 1 manner).

So we have used this reverse means of conversion (from the Type 2 to the Type 1 aspect) to verify that

2^1 * 3^1 = 6^1 can be consistently expressed in a Type 2 manner.
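
The whole round trip can be verified in a few lines (a Python sketch of the two conversions just described; the function names are my own):

```python
import cmath
from math import log, pi

def to_type2(n):
    """Type 1 -> Type 2: n^1 = 1^x with x = -(log n/2pi)i, as derived above."""
    return -(log(n) / (2 * pi)) * 1j

def to_type1(x):
    """Type 2 -> Type 1: replace the base 1 by e^(2i*pi) (the Euler identity)."""
    return cmath.exp(2j * cmath.pi * x)

x2, x3 = to_type2(2), to_type2(3)
print(x2, x3)        # about -0.1103178i and -0.174850i
x6 = x2 + x3         # multiplication in Type 1 = addition in Type 2
print(to_type1(x6))  # about (6+0j), recovering 2 * 3 = 6
```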


However it is important to stress once again that the Type 1 and Type 2 aspects of the number system are quite distinct. This also implies directly that the operations of addition and multiplication respectively are also quite distinct.

So once again whereas the Type 1 aspect is directly associated with the quantitative aspect of number (as independent) the Type 2 aspect is directly associated with its qualitative aspect (as interdependent with other numbers).

This also implies that whereas addition directly relates to the quantitative aspect of number, that in relative terms, multiplication relates to its corresponding qualitative aspect.

Of course in dynamic interaction with each other, these aspects overlap, so that both addition and multiplication entail (depending on the precise context) both quantitative (analytic) and qualitative (holistic) aspects.

However the key conclusion of all this is that the very paradigm which defines present mathematical interpretation is quite unsuited to the task.

Because of its underlying abstract assumptions regarding the nature of mathematical symbols such as numbers, this entails that qualitative holistic considerations are inevitably reduced (with respect to every context) in an absolute quantitative manner.

However, as I have sought to demonstrate through the number system, appropriately understood, the nature of mathematical reality is inherently dynamic, with both quantitative (analytic) and qualitative (holistic) aspects of interpretation.

And this fundamentally applies to the very nature of addition and multiplication, which can only be properly appreciated therefore in a dynamic relative manner.

Thursday, April 24, 2014

Estimation of Frequency of Prime Numbers (2)

In a previous blog entry, I suggested a simple improvement to the basic log formula (n/log n) as a means of predicting the frequency of primes.

Here I set n1 = n/log n and then obtained the new modified estimate,

i.e. n/log n + n1/log n1.

However an even simpler improvement - and an ultimately more accurate estimate - is obtained through a slight modification of the original log formula, i.e. n/(log n – 1).

So once again I provide a table in multiples of 10 up to 10,000,000,000 showing the actual occurrence of primes at each stage as against the predicted values using the original simple estimate (n/log n) and the new modified version n/(log n – 1). 

| Up to n | Actual no. | n/log n | n/(log n – 1) |
|--------:|-----------:|--------:|--------------:|
| 10 | 4 | 4 | 8 |
| 100 | 25 | 22 | 28 |
| 1000 | 168 | 145 | 169 |
| 10000 | 1229 | 1086 | 1218 |
| 100000 | 9592 | 8686 | 9512 |
| 1000000 | 78498 | 72382 | 78030 |
| 10000000 | 664579 | 620421 | 661459 |
| 100000000 | 5761455 | 5428681 | 5740304 |
| 1000000000 | 50847534 | 48254942 | 50701542 |
| 10000000000 | 455052511 | 434294482 | 454011971 |

It can readily be seen from the above table that n/(log n – 1) gives a much more accurate estimation of the frequency of primes than n/log n. Indeed in the final entry in the table, for 10^10, the accuracy of the first log formula is 95.44%, whereas with the modified version it is 99.77%.

It is also apparent from the above that - beyond the first few values - both formulae give an underestimate of the actual number of primes.

By contrast, the earlier modified estimate, n/log n + n1/log n1, gives an overestimate.

This would suggest that a simple mean of the two estimates, i.e. n/(log n – 1) and n/log n + n1/log n1, would therefore give a more accurate estimate, and indeed over the range of values in the table this is certainly the case, with the actual value roughly at the midpoint of the two modified estimates. However at higher values (and I tested to 10^25) a distinct bias enters in, with the excess in the estimate given by n/log n + n1/log n1 becoming increasingly greater than the corresponding deficit in the estimate of n/(log n – 1).
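
Here is a short sketch of that comparison (Python; the actual prime counts are those already quoted in the tables of these entries):

```python
from math import log

def est_minus1(n):
    """n/(log n - 1)"""
    return n / (log(n) - 1)

def est_recursive(n):
    """n/log n + n1/log n1, where n1 = n/log n"""
    n1 = n / log(n)
    return n1 + n1 / log(n1)

# Actual prime counts from the tables above.
for n, actual in [(10**3, 168), (10**6, 78498), (10**10, 455052511)]:
    lo, hi = est_minus1(n), est_recursive(n)
    # Over this range the actual count lies roughly midway between the two.
    print(n, actual, round(lo), round(hi), round((lo + hi) / 2))
```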

There is an important reason why n/(log n – 1) should prove a better estimate than n/log n.

Log n gives the average spread or gap as between the primes. For example, in the region of 1000 we would expect the average gap as between primes to approximate 7, so that the relative frequency = 1/7 (= 1/log n). Another way of expressing this is by saying that in the region of 1000 we would on average expect an unbroken sequence of 6 composite numbers before encountering a prime (i.e. log n – 1).
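
For example (a quick Python check of the average gap in a window around 1000, using trial division):

```python
# Primes in a short window either side of 1000.
window = [p for p in range(900, 1100)
          if all(p % d for d in range(2, int(p ** 0.5) + 1))]
gaps = [b - a for a, b in zip(window, window[1:])]
print(sum(gaps) / len(gaps))   # about 6.6, in line with log 1000 ~ 6.9
```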

Now as I have stated before, the primes (from the cardinal perspective) represent the independent aspect with respect to the number system, so that all other (composite) numbers are derived from the relationship between primes (as building blocks).
So the composite numbers represent the interdependent aspect with respect to the number system. And in dynamic interactive terms, the relationship of the primes expresses this complementary relationship as between independence and interdependence respectively.

Thus properly speaking the relationship of the primes is not directly with the overall number system (which includes primes) but rather with those composite members (that are qualitatively distinct from primes).   

The beauty of this modified formulation n/(log n – 1) then lies not only in the fact that it serves as a much better estimator of the primes, but also in that it can be seen to bear a simple inverse relationship to the formula n(log n – 1).
This, as we have seen, accurately predicts the corresponding number of factors with respect to the composite numbers (bearing in turn a very close relationship with the frequency of occurrence of the non-trivial zeros).

So therefore the very purpose of this blog entry (and so many before) is to show how the ability to look at the number system in a dynamic interactive manner can reveal many of its great secrets in a very simple form.

Monday, April 21, 2014

Ordinal Nature of Prime Numbers

One of the great limitations of the conventional approach to the Riemann Hypothesis is that it attempts to interpret the primes and natural numbers in a merely cardinal (i.e. Type 1) manner.

However the primes and natural numbers can be equally given a distinctive ordinal (i.e. Type 2) interpretation.

So the mystery of the relationship of the primes to the natural numbers (and the natural numbers to the primes) can only be properly understood in a dynamic fashion, entailing the two-way interaction of both cardinal and ordinal aspects.

We have already looked at the ordinal aspect of each individual prime number from the Type 2 perspective.

So once again a prime such as 3 is - by definition - composed of three members i.e. 1st, 2nd and 3rd, in a natural number ordinal fashion.

1st, 2nd and 3rd refer directly to a qualitative rather than quantitative notion of number. This is due to the fact that their respective meanings imply interdependence, through a necessary relationship with other group members.

So what is 1st in the context of 3, implies 3 group members and in principle what is 1st could entail any one of these members (depending on context).
What is then 2nd implies two remaining members. However what is 3rd (in the context of 3) then implies only one possible member. So the notion of interdependence here no longer holds.

In an indirect quantitative manner, these 3 ordinal numbers can be expressed in the Type 2 system through raising 1 to 1/3, 2/3 and 3/3 respectively.

So 1^(1/3), i.e. – .5 + .866...i, is the indirect quantitative representation of the notion of 1st (in the context of 3 members).

1^(2/3), i.e. – .5 – .866...i, is then the indirect quantitative representation of the notion of 2nd (in the context of 3 members).

1^(3/3), i.e. 1, is finally the indirect quantitative representation of the notion of 3rd (in the context of 3 members).
As this final result is always 1 (and indistinguishable from the cardinal notion of 1), it can be considered as a trivial result.

So where t is a prime number, the ordinal notion of the tth member (in the context of t group members) is always trivial in this sense (= 1).

The remaining (t – 1) non-trivial results are then the various solutions to the finite equation,

1 + s^1 + s^2 + s^3 + ... + s^(t – 1) = 0.

So therefore in the context of a group of 3 members (where t = 3),

the two non-trivial results are the solutions of the equation,

1 + s^1 + s^2 = 0.

These again are what I refer to as the Zeta 2 non-trivial zeros.
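
For t = 3, these zeros are easily computed (a Python sketch via the quadratic formula):

```python
import cmath

# Roots of 1 + s^1 + s^2 = 0 via the quadratic formula.
disc = cmath.sqrt(1 - 4 * 1 * 1)
s1 = (-1 + disc) / 2
s2 = (-1 - disc) / 2
print(s1, s2)            # -0.5 +/- 0.866...i
print(s1 ** 3, s2 ** 3)  # both ~ (1+0j): the two non-trivial cube roots of 1
```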

As I have stated before, this finite equation can be extended in an infinite cyclical manner (with each full cycle made up of 3 successive terms).

Fascinatingly the expected value of this infinite equation = 1/2.

So the ordinal nature of each prime (in this Type 2 sense) derives from the fact that each prime is composed, by definition, of a natural number succession of members (in an ordinal manner).

So again for example, 3 in this sense, is composed of individual 1st, 2nd and 3rd members!

However there is another complementary manner in which the ordinal nature of prime numbers arises.

As is well known every natural number (except 1) represents a unique combination of prime number factors (in cardinal terms).

So therefore 6, for example, is uniquely expressed as the product of 2 and 3 (i.e. 2 * 3) in cardinal terms.

However the ordinal nature of primes arises in the related context that the notion of 6th, likewise reflects a unique product combination of 2nd and 3rd (in an ordinal manner).

Now indirectly this qualitative notion of number is expressed through the Type 2 aspect of the number system through using the reciprocals of the dimensional powers involved.

So 1^(1/6) = 1^{(1/2) * (1/3)} = .5 + .866...i.


Now with respect to each prime number, a perfect balance is maintained as between its individual members (indirectly expressed in a quantitative manner) and its collective identity (qualitatively expressed through the sum of its members).

So, for example, when we add the three roots of 1 (as quantitative expressions of 1st, 2nd and 3rd in the context of 3) the resulting collective sum = 0.

Therefore the combined interdependence of the 3 members (representing a strictly qualitative notion) thereby has no quantitative value!
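
And this vanishing collective sum can be checked for any prime group (a quick Python sketch):

```python
import cmath

# For each prime t, the t roots of 1 (representing 1st, 2nd, ..., tth members)
# sum to zero: the collective whole has no quantitative value.
for t in (3, 5, 7):
    roots = [cmath.exp(2j * cmath.pi * k / t) for k in range(1, t + 1)]
    print(t, abs(sum(roots)) < 1e-9)   # True for each prime group
```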

So the Zeta 2 non-trivial zeros uniquely reconcile within each prime number group, the relative independence of each individual member, in quantitative terms, with the corresponding relative interdependence of the collective group of members in a qualitative manner.

The Zeta 1 non-trivial zeros as solutions of s to the infinite equation,

1^(–s) + 2^(–s) + 3^(–s) + 4^(–s) + ... = 0,

solve a complementary problem.

So the Zeta 1 non-trivial zeros (that occur in pairs of the form .5 + it and .5 – it respectively) uniquely reconcile in a complementary manner, for the number system as a whole, the corresponding relative independence of each prime number (in an individual quantitative manner) with the overall relative interdependence of the prime numbers as a collective group (in the natural number system).

Once again we are accustomed to look at the prime numbers in a quantitative cardinal manner.

However the qualitative (ordinal) nature of the primes - representing their corresponding collective interdependence with the natural number system - is expressed through the Zeta 1 (i.e. Riemann) non-trivial zeros.

So now in a complementary manner (to the Type 2), the qualitative nature of the zeros is expressed through each individual zero (thereby representing a formless energy state) while their quantitative nature is expressed through the overall collective nature of the zeros.

There is of course a two-way dynamic interdependence as between Zeta 1 and Zeta 2 zeros (in quantitative and qualitative terms).

The reason why the Zeta 2 zeros uniquely express the individual nature of the members of each prime group, is because the Zeta 1 zeros seamlessly preserve the interdependence of the prime numbers with the number system as a whole.

And the reason why the Zeta 1 zeros uniquely express the collective nature of the primes with the natural number system is because the Zeta 2 zeros uniquely achieve such uniqueness for each individual member of a prime (considered as a group).

In fact, this two-way relationship as between the primes and the natural numbers, perfectly expresses in a mathematical fashion the original relationship as between whole and part and part and whole in both quantitative and qualitative terms.

Thursday, April 17, 2014

Conversion Between the Two Aspects of the Number System

I have stated before on many occasions how - properly understood - there are two aspects of the number system that are in dynamic interaction with each other.

Once again, I refer to these two aspects as Type 1 and Type 2 respectively.

The Type 1 aspect is the standard approach suited directly to the treatment of the cardinal aspect of number where each number is defined with respect to the default dimensional value of 1.

So the natural number system from this perspective is:

1^1, 2^1, 3^1, 4^1,....

The Type 2 approach is the alternative - largely unrecognised - approach, suited directly to the treatment of the ordinal aspect of number where each number (representing a dimensional power) is defined with respect to the default base value of 1.

So the natural number system from this perspective is:

1^1, 1^2, 1^3, 1^4,.....

In the actual experience of number a continual shifting takes place as between each number with respect to both its base and dimensional definition.

So for example the number 2 does not have one unambiguous fixed meaning, but rather continually alternates as between its base number appreciation (as defined in Type 1 terms) and its corresponding dimensional number appreciation (defined in corresponding Type 2 terms).

And quite simply, it is this alternating switching of meanings that enables the appreciation of both the cardinal and ordinal aspects of number to take place.

Therefore conventional mathematical interpretation suffers from a gross form of reductionism in the manner in which it attempts to derive ordinal directly from mere cardinal type interpretation!

When one recognises these two aspects of number, i.e. Type 1 and Type 2, as relatively distinct, the question then arises as to their consistent use in terms of each other.

Put another way, we then need to find an indirect means of translating (or converting) the Type 1 aspect in terms of the Type 2, and equally from the opposite perspective the Type 2 in terms of the Type 1.

In fact I have repeatedly stated in these blog entries that ultimately the Riemann Hypothesis serves as the basic requirement for the successful reconciliation, throughout the number system, of both types of meaning.


Up to this point, I have largely concentrated on the task of converting numbers in the Type 2 system indirectly in Type 1 terms.

So, for example, to convert the simplest (non-trivial) case, i.e. 1^2, in Type 1 terms, we in fact express 1 with respect to the reciprocal of the dimensional power, i.e. 1^(1/2).

The solution here is obtained through the equation x^2 = 1, so that x = – 1 (taking the non-trivial root).

Now this number (indirectly representing a quantitative value) lies on the circle of unit radius in the complex plane.

So the very essence of the Type 2 aspect of number is that it relates inherently to a circular - rather than linear - type understanding.

In other words, whereas the cardinal notion of number is based on the independent identity of each number (which befits linear interpretation), by contrast the ordinal notion relates directly to an interdependent identity i.e. relationship between members of a number group (which befits circular interpretation).

So when we refer to the number 2, for example, in a cardinal sense, we give it an (isolated) independent identity. However the corresponding ordinal meaning of 2nd can only have meaning through the interdependent relationship of the members of a number group. So when we have two members in a group, the notion of 2nd always implies the relationship with another member (which in this context is designated as 1st).

So circular understanding (of a qualitative nature), in the context of two members, implies the dynamic notion of the complementarity of opposites. So if one member is posited in any context as the 1st, this automatically implies the corresponding negation of the other member, which is thereby 2nd.

And the quantitative representation of this complementary type understanding is given through the two roots of 1, i.e. strictly the two roots of 1^2 and 1^1 respectively.
So the 1st member is thereby represented as + 1 , and the 2nd as  – 1 respectively.

So we can see perhaps in this example how the very significance of the (indirect) Type 1 translation of the Type 2 aspect of the number system is that it enables us to express ordinal type notions, representing qualitative notions of number interdependence, in an indirect quantitative manner (with respect to a circular number scale).

However we also have the opposite problem of successfully converting a number defined with respect to the Type 1 aspect, indirectly in a Type 2 manner.

So if n is a cardinal number in the Type 1 aspect, representing the base (defined with respect to the default dimensional value of 1), then we need to express it indirectly in Type 2 terms (defined with respect to a default base value of 1).

So in general terms n^1 = 1^x.

Therefore, taking natural logs on both sides,

log n = x * log 1.

Thus x = log n/log 1

= log n/(2iπ).


So therefore once again, in the simplest case, to convert the number 2 i.e. where n = 2 (now defined with respect to the Type 1 aspect) in Type 2 terms as x,

x = log 2/(2iπ) = – {log 2/(2π)} i

As we have seen in our earlier conversion (from Type 2 to Type 1) there is a close connection as between the number 2 and its reciprocal 1/2 (in qualitative and quantitative terms).

It is quite similar in terms of this latter conversion (from Type 1 to Type 2).

log (1/2) = – log 2.

Therefore when n = 1/2,

x = {log 2/(2π)} i.
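
Numerically (a short Python check of the two cases just described; the function name is my own):

```python
from math import log, pi

def type2_exponent(n):
    """x with n^1 = 1^x, i.e. x = log n/(2i*pi) = -(log n/2pi)i."""
    return -(log(n) / (2 * pi)) * 1j

print(type2_exponent(2))     # about -0.1103178i
print(type2_exponent(0.5))   # about +0.1103178i : the reciprocal flips the sign
```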

We thus have the fascinating conclusion that - by definition - all numbers initially defined in Type 1 terms with respect to the real number line are correspondingly defined in Type 2 terms as numbers lying on the imaginary number line (in both positive and negative directions).

Tuesday, April 15, 2014

Estimation of Frequency of Prime Numbers.

As is well known, the simplest estimate of the frequency of prime numbers (up to a given number n) is given by the formula n/log n (using the natural logarithm based on e). For our purposes we will refer to this as version (1) of the log formula.

However, though the accuracy of this formula improves in relative terms as n increases, ultimately approaching 100% accuracy, it is not really very accurate in absolute terms. In the following table, I give the actual occurrence of primes at various powers of 10 (up to 10^10). I also give the predicted number using the simple log formula, and then its percentage accuracy.


| Up to n | No. of primes | Estimated no. | % accuracy (1) | % accuracy (2) |
|--------:|--------------:|--------------:|---------------:|---------------:|
| 10 | 4 | 4 | 100 | 57.14 |
| 100 | 25 | 22 | 88 | 88.61 |
| 1000 | 168 | 145 | 86.31 | 96.55 |
| 10000 | 1229 | 1086 | 88.36 | 99.03 |
| 100000 | 9592 | 8686 | 90.55 | 99.46 |
| 1000000 | 78498 | 72382 | 92.21 | 99.55 |
| 10000000 | 664579 | 620421 | 93.36 | 99.65 |
| 100000000 | 5761455 | 5428681 | 94.22 | 99.70 |
| 1000000000 | 50847534 | 48254942 | 94.90 | 99.74 |
| 10000000000 | 455052511 | 434294482 | 95.44 | 99.76 |

As we can see, though the absolute deviation from the true number of primes significantly increases, the relative accuracy of the estimate (as a percentage of the actual number) steadily increases, so that by 10^10 it is over 95% accurate.

However a significant improvement can be achieved in a recursive manner by defining

n1 = n/log n and then obtaining the new modified estimate,
i.e. n/log n + n1/log n1, as in (2) above.

As can be seen from the above, the new modified version of the formula quickly becomes a much more accurate predictor of the frequency of primes (over this range). For example, already at 10,000 it has reached 99% accuracy (as opposed to 88% using the traditional method).

One other interesting feature is that whereas the traditional method always under-predicts (over the ranges yet capable of estimation), the modified version always over-predicts.


The largest estimate for actual primes that I dealt with was for 10^16, i.e.
10,000,000,000,000,000. The actual number of primes up to this number is 279,238,341,033,925.

Now the simplest log estimate (1) predicts 271,434,051,189,532 primes (an underestimate) which is 97.21% accurate.

The modified log estimate (2) then predicts 279,601,229,526,797 primes (an overestimate) which is 99.87% accurate.
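
Both figures are easily reproduced (Python; the actual count is as quoted above):

```python
from math import log

n = 10 ** 16
actual = 279_238_341_033_925        # pi(10^16), as quoted above

simple = n / log(n)                 # version (1): an underestimate
n1 = n / log(n)
modified = n1 + n1 / log(n1)        # version (2): an overestimate

print(f"{simple:,.0f} ({100 * simple / actual:.2f}% accurate)")
print(f"{modified:,.0f} ({100 * actual / modified:.2f}% accurate)")
```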


Now these estimates still remain quite poor when compared to the Gaussian Li estimate and also to Riemann's function.

However it still remains of great interest in pointing to what appears to be strong evidence for the recursive nature of overall prime behaviour.

It is not readily apparent how Littlewood's proof that the Gaussian Li conjecture was in error (i.e. the conjecture that Li would always lead to an overestimate of the primes) would apply to the much less accurate crude log estimate, which in version (1) substantially underestimates the number of actual primes.