In the latter part of my last blog entry I emphasised the inherent holistic nature of the Type 2 aspect of the number system. Ultimately this relates to the notion of number with respect to its qualitative - rather than quantitative - identity.

Alternatively, it relates to the understanding of number as interdependent (which conflicts strongly with the cultural dominance of numbers as representing solely independent entities).

The clue again with respect to this latter holistic emphasis is to view a number (i.e. a natural number) with respect to the relationship between its individual ordinal members.

So for example the analytic linear notion of 2 as cardinal relates to this number in static absolute terms as representing a collective whole (whose individual units are necessarily of a homogeneous - and thereby indistinguishable - nature).

However the corresponding holistic circular notion of 2 as ordinal relates to this same number, now understood in a dynamic relative fashion, as representing the interaction of its individual members (i.e. 1st and 2nd) which can be uniquely distinguished in qualitative terms.

Once again, this interaction entails two polar directions (i.e. 1st and 2nd dimensions which are inversely associated with the 1st and 2nd roots of 1) that are positive and negative with respect to each other represented as + 1 and – 1 respectively.

So the combined fusion of both as interdependent is represented as + 1 – 1 = 0.

Therefore from this holistic perspective every number can be given a qualitative meaning as a pure energy state (= 0 from a quantitative perspective). Of course in actual experience, this holistic energy state interacts with the customary rational understanding of number (as discrete form).

Then in formal interpretation of a conventional kind, the holistic aspect (which is of a uniquely distinct nature) is reduced in a mere rational fashion! We are thereby conditioned to view numbers as representing distinct quantitative entities with no recognition therefore of their corresponding holistic qualitative aspect as energy states (in both physical and psychological terms).

Thus, to emphasise once more, the holistic notion of any number relates to the dynamic interaction of its individual ordinal members (which are uniquely defined for the number).

Through reference to the corresponding roots of 1 for each number, this does indeed enable one to demonstrate the holistic aspect of all individual numbers. However, in itself it does not directly lend itself to quantitative type analysis, as by definition the sum of all these roots (demonstrating the qualitative relationship aspect of ordinal members) = 0.
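This defining property is easy to verify numerically. The illustrative Python sketch below (not part of the original argument) simply confirms that the p-th roots of 1 always sum to zero, so that - prior to any reduction - no direct quantitative information survives at this level:

```python
import cmath

# The p-th roots of 1: exp(2*pi*i*k/p) for k = 0 .. p - 1.
def roots_of_unity(p):
    return [cmath.exp(2j * cmath.pi * k / p) for k in range(p)]

# For every p > 1 the full set of roots sums to (effectively) zero.
for p in (2, 3, 5, 7):
    assert abs(sum(roots_of_unity(p))) < 1e-9
```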

However just as the Type 1 (analytic) aspect can indirectly be given a holistic interpretation (with respect to the nature of the Zeta 1 non-trivial zeros), equally the Type 2 (holistic) aspect can indirectly be given an analytic expression (with respect to the Zeta 2 non-trivial zeros)!

Once again a fascinating form of complementarity is at work. The Zeta 1 non-trivial zeros are expressed in quantitative terms in a linear (1-dimensional) manner. The key to the holistic understanding is then to provide a “higher” dimensional interpretation for these zeros in a qualitative manner.

In reverse fashion, the Zeta 2 non-trivial zeros are expressed - representing their qualitative meaning - directly in a circular (higher dimensional) manner.

So again for example, I have been at pains to express the precise qualitative meaning of 2 as a dimensional number (i.e. with respect to its 1st and 2nd ordinal members).

Therefore the key here is to represent higher dimensional roots in a reduced linear (1-dimensional) fashion from a quantitative perspective.

In effect this implies treating negative signs as positive and imaginary numbers as real!

So when we convert the various roots of 1 in this quantitative manner, a fascinating new set of analytic type relationships emerges, leading to both an alternative Type 2 Prime Number Theorem and an alternative Type 2 Riemann Hypothesis.

I have expressed the nature of this alternative formulation on many occasions previously in my blog entries.

Once again we initially take both the cos and sin parts of the roots of 1 (now expressed in this reduced linear quantitative fashion), obtaining the average of each for every prime number grouping (representing all of its ordinal number members).

So to illustrate, where p = 3, the three roots (1st, 2nd and 3rd) are 1, – .5 + .866i and – .5 – .866i, which in this reduced quantitative manner become 1, .5 + .866 and .5 + .866 respectively.

So the sum of the 3 cos parts = 2, and the sum of the 3 sin parts (the first of which is zero) = 1.732.

The average of the cos parts therefore = .666… and the average of the sin parts = .5773.
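As a quick sanity check, the small illustrative Python sketch below reproduces these averages directly from the cube roots of 1 (taking negative signs as positive and imaginary parts as real, as described above):

```python
import cmath

p = 3
roots = [cmath.exp(2j * cmath.pi * k / p) for k in range(p)]  # 1, -.5+.866i, -.5-.866i

# "Reduced" parts: negative treated as positive, imaginary treated as real.
cos_parts = [abs(r.real) for r in roots]  # 1, .5, .5
sin_parts = [abs(r.imag) for r in roots]  # 0, .866..., .866...

avg_cos = sum(cos_parts) / p  # = 2/3 = .666...
avg_sin = sum(sin_parts) / p  # = 1.732.../3 = .5773...
```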

Now when we obtain the average of cos and sin parts for higher prime numbered roots, the answer quickly converges for both parts (with a small remaining deviation) on the value 2/π = .6366….
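This convergence is easy to observe numerically. The following illustrative Python sketch computes the reduced averages for a few larger primes and checks that both close in on 2/π, the cos average from above and the sin average from below:

```python
import cmath
import math

def reduced_averages(p):
    """Average of the |cos| and |sin| parts of the p-th roots of 1."""
    roots = [cmath.exp(2j * cmath.pi * k / p) for k in range(p)]
    avg_cos = sum(abs(r.real) for r in roots) / p
    avg_sin = sum(abs(r.imag) for r in roots) / p
    return avg_cos, avg_sin

target = 2 / math.pi  # .6366...
for p in (11, 101, 1009):
    avg_cos, avg_sin = reduced_averages(p)
    # cos average exceeds 2/pi, sin average falls short of it,
    # and both deviations shrink rapidly as p increases.
    assert avg_sin < target < avg_cos
    assert abs(avg_cos - target) < 1 / p
    assert abs(avg_sin - target) < 1 / p
```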

The alternative Prime Number Theorem relates to the fact that 2/π = i/log i.
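The identity itself is readily checked, taking log i as its principal value iπ/2 (a minimal illustrative check in Python):

```python
import cmath
import math

# Principal value: log(i) = i*pi/2, so i/log(i) = i/(i*pi/2) = 2/pi.
value = 1j / cmath.log(1j)
assert abs(value - 2 / math.pi) < 1e-12
assert abs(value.imag) < 1e-12  # the quotient is purely real
```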

So i/log i with respect to the Type 2 aspect (as the mean reduced average for both parts of the ordinal number roots of p, as p increases without limit) plays a complementary role to that which n/log n plays with respect to the Type 1 (representing the average number of primes among the natural numbers as n increases without limit).

Also it quickly becomes apparent that the average for the cos part always exceeds 2/π (= i/log i), while the corresponding average for the sin part is always less than 2/π (= i/log i).

Indeed the absolute value of the ratio of the deviation of the cos part to the deviation of the sin part from 2/π (= i/log i) quickly converges towards .5 (which operates as the alternative Riemann Hypothesis with respect to the Type 2 aspect).
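Again this is straightforward to verify numerically. The illustrative Python sketch below computes the ratio of the two deviations for some larger primes and confirms its rapid convergence towards .5:

```python
import cmath
import math

def deviation_ratio(p):
    """|cos deviation| / |sin deviation| from 2/pi, for the p-th roots of 1."""
    roots = [cmath.exp(2j * cmath.pi * k / p) for k in range(p)]
    avg_cos = sum(abs(r.real) for r in roots) / p
    avg_sin = sum(abs(r.imag) for r in roots) / p
    target = 2 / math.pi
    return abs((avg_cos - target) / (avg_sin - target))

# The absolute ratio of the deviations quickly converges on .5.
for p in (101, 1009):
    assert abs(deviation_ratio(p) - 0.5) < 0.01
```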

Even here further complementarity is in evidence. From the Type 1 perspective, n/log n is an approximate varying quantity, whereas .5, according to the Riemann Hypothesis, is the fixed point on the real axis through which the imaginary line (on which all the non-trivial zeros lie) passes. From the Type 2 perspective, by contrast, i/log i is a fixed value, with .5 (as the absolute ratio of cos and sin deviations) now an approximate varying quantity.

Now here is the important point!

As we saw in the last blog entry, in the Zeta 1 function, both the prime numbers (with respect to the natural numbers) and the non-trivial zeros each possess extreme analytic properties (representing independence) and extreme holistic properties (representing interdependence) respectively, depending on the frame of reference from which they are viewed.

So from one perspective we can use the non-trivial zeros to eliminate all remaining (independent) deviations of the prime numbers with respect to their overall collective (interdependent) relationship with the real natural numbers.

Equally from the other perspective we can use the prime numbers (as a group) to eliminate all (independent) deviations of non-trivial zeros with respect to their overall collective (interdependent) relationship with the imaginary number system.

Again a complementary type connection exists with respect to the Zeta 2 function. Here the two aspects (independence and interdependence respectively) are reconciled within the same distribution.

I have already illustrated how the prime numbers (and by extension all the natural numbers) can be used to explain to a very high degree of accuracy the deviations of the average of both cos and sin parts from 2/π (= i/log i).

However what is perhaps even more interesting is that these deviations can in principle be used in such a manner that the frequency of primes within the natural numbers can be approximated to any degree of accuracy.

It would work something like this. In the normal general calculation of prime number frequency (among the natural numbers), each natural number is given a weighting of 1.
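For reference, the default (unit-weighted) calculation looks as follows - an illustrative Python sketch only, with the Zeta 2 modified weights left unspecified, since their precise form is not given here:

```python
import math

n = 100
weights = [1] * n                # default weighting of 1 per natural number
weighted_total = sum(weights)    # = n = 100 with the default weights

# General formula n/log n for the frequency of primes up to n.
estimate = weighted_total / math.log(weighted_total)

# Actual count of primes up to 100, by simple trial division.
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

actual = sum(is_prime(m) for m in range(1, n + 1))  # 25 primes up to 100
```

The suggestion in the text is that replacing the unit weights with slightly modified Zeta 2 derived weights would close the gap between the estimate and the actual count.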

However, using the (reduced) Zeta 2 deviations, a slightly modified weighting would be given, with the earliest natural numbers differing most. So, for example, when predicting the frequency of primes among the first 100 natural numbers (i.e. n = 100), we would substitute a replacement number based on the total representing the sum of each slightly modified weighted natural number (up to 100), which could then correctly predict from the general formula the actual frequency associated with 100!

So whereas with the Zeta 1 function, the role of deviations associated with the non-trivial zeros must be understood in the context of the number system as a whole, here in the Zeta 2 approach we would have a one-to one relationship of a new weighted number (differing slightly from 1) with each natural number (based on the default weighting of 1).

Once again the greatest deviations would occur - with respect to these newly weighted members - amongst the earliest natural numbers. Then as we ascend up the natural number scale the weightings would approach ever closer to the default natural number weightings of 1.
