Science Forums

An Exact Value For The Fine Structure Constant.


Don Blazys


...

We are the very first to even attempt a counting function for this most important sequence of numbers! That's a historical fact.

Thus, we are indeed making history, and future mathematical historians will definitely judge us by how we cooperated with

and treated each other.

 

...May they have mercy on us!

 

Don

 

i have no interest in how history judges my behavior or that historians even know of me. when i proposed calculating the natural density of the polygonals & non-polygonals, it was only in accord with the practice of taking such measures; i did not -and do not- ascribe any great importance to it, nor do i see any evidence for its importance, your calculations & equations notwithstanding. i'm not looking for a debate; just making my stance clear. :turtle:


Hi again- I've been away doing translations from Ngbaka-French into English (don't ask...). Anyway, I had a bit of a brainstorm this evening: PERSPECTIVE.

 

It's a big issue in linguistics- language structure seems (universally?) to allow for multiple points of view, vantages, etc. What you see, hear, do, and so on depends a great deal on where you are/were/will be relative to a larger or smaller scene. For this, languages have created all sorts of orientational/directional marking systems, which tell the listener(s) which perspective is being chosen. But the phenomenon is actually much bigger- all sorts of words are differentiated solely by point of view, while otherwise their meanings are generally the same or nearly so. This can involve something's or someone's place in a hierarchy of rank, of force, of time, and so on. It makes language very powerful as a tool.

 

Why could this possibly be of interest in a discussion of the mathematical basis of the FSC? Over the past years, working on systemic analyses of various phenomena in different fields (it's what floats MY particular boat), I've picked up many hints of parallel motivational structures at every level of material reality. So, for a very basic example, people have noticed Phi in all sorts of places- I found it at the atomic level, as noted in some of my posts here. But there is far more. There is very great variation in language structure, having to do with the size and shape of the sound system used to convey meanings. There is variation in how words, clitics, and affixes are ordered relative to each other, how clauses and phrases are ordered, etc. But there is method to the madness, which has been pieced together over the past 50 or so years by linguistic typology.

 

I discovered some years back (and add to it as folks from both sides chime in with new findings) that GENOME structure has parallels to linguistic structure. That is, different types of organisms organize their genomes radically differently, in ways that strongly remind one of the different language types. This includes the ordering of elements in a string, tightness of packing, how they allow unpacking of parts for processing, and movement of elements along the string or between different strings. The list goes on and on. People have also been finding that the genome displays Phi-based phenomena, including keeping the relative proportion of the 4 nucleotide bases between limits defined by such numbers (for reasons???). There is probably a lot more that will be found in the years to come. Much of the machinery in both languages and genomes is concerned with context- both have mechanisms for 'switching gears' at different structural levels. This isn't always apparent when considering just one genome or linguistic type, or any particular instantiation of same.

 

Examples here include the genetic code itself. Many of course know that triplets of nucleotides in the DNA or RNA string act as code words (codons) that get translated into choices of amino acids for incorporation into growing protein chains. Not so well known is the fact that the code is NOT arbitrary or random. Different families of triplets (64) associate with different amino acids (usually 21), but are not of equal rank. That is, one might have several triplets coding for the same amino acid, but the mediating mechanism linking reading to production, involving tRNAs, reacts to different physical/environmental contexts (and these can be tweaked even further by feedback/feedforward mechanisms). Sometimes a cigar is just a cigar, but not always. Language does this too- in many languages intonation, stress, and other forces alter how the meaning is to be taken- seriously, jokingly, literally or not, etc.

 

The coding system of the triplets can be further broken down- different nucleotides in different triplet positions can be more or less strongly associated with physical properties (this was discovered decades ago by researchers)- such as solubility in hydrophilic or hydrophobic substances, the size and shape and chemical nature of the amino acid side chains, stopping and starting of the translation under different conditions, and even folding partials for the translated protein structure. There may also be self-referential coding for the DNA and RNA itself. Thus the genetic code multitasks in a big way, and there appear to be context-sensitive mechanisms that both helped create and manage this variability. There is even some recent evidence that the code itself can be read numerous different ways by higher structural levels. Life doesn't miss a trick!

 

In linguistics, for many years I studied what is called sound symbolism or phonosemantics. This is the linking of form to meaning in words that flies in the face of received wisdom in the field for just about a century now. Most linguists will tell you that the meaning of a form is completely separate from the shape of the form, except for onomatopoeia, and even there they claim a lot of learned conventionality. Actually the facts are far more complex. EUROPEAN languages are rather weak in sound symbolism, both because of their language type (which emphasizes higher hierarchical levels of structure) and their particular histories. BUT many other languages and their families display very strong connections in this regard.

In the language family I'm working on now, called Gbaya (in central Africa), each daughter language contains many thousands of phonosemantically transparent words called ideophones. While normal words, between the Gbaya dialects, tend to be relatively conserved (that is, you will find essentially the same words, even if pronounced slightly differently), so that the sets are nearly identical, ideophones instead are more uniquely instantiated from place to place, and from person to person, often making them the mark of personal, family, village, or regional identity. That doesn't mean that you can't find the same individual ideophones here and there, but the distribution becomes spottier. Phonosemantically transparent ideophones CODE linguistic meaning very similarly to what one sees with the genetic code in biology. That is, individual sounds, even the features of these sounds, very strongly associate with certain meanings, in ways that normal vocabulary doesn't, though in reality it's a continuum, since ideophones often end up shifted classwise to normal vocabulary even as historical changes render the form/meaning mappings opaque. It's all about the time frame, language type, and so on.

Even though the parts of an ideophone code mappings to various phenomena including size, shape, texture, how one feels about something, mental states, water content, etc., there is also a system for shifting these. So, for example, let's say that one term refers to having an overly large head of a particular shape. The speaker can also play with the form and alter the meaning by shifting features- maybe the head is flatter on top than is usually the case for a big-headed person, or it's pointy. Codes within codes within codes. Such complexity has historically made analysis of such systems quite difficult, so most linguists either shy away or just throw up their hands in defeat. Another secret, one they often keep even from themselves.

 

Now, I've also found hints of similar parallels between these levels of communication and commerce in the atomic periodic system. Many of the anomalous features there could be explained in this fashion. This would include ground state electronic and nuclear configurations, the existence of isotopes and so forth, as well as the behaviors of things like superatoms, ultracold condensates, etc. Don't believe everything they told you in 4th grade about the periodic table- everything is NOT fully explained by quantum physics, even if it makes a good story for children. It's one of the best-kept secrets in chemistry and physics. We know, for example, that biological systems can be affected by the isotopic content of the molecules they utilize. If you drink a couple of glasses of deuterated water, which is structurally identical to normal water, except for the energetics, molecular size, etc., I'll be coming to your funeral. Cells have developed mechanisms that get rid of deuterated water, carbon-14 is treated differently from carbon-12, and so on. But what is going on, say, inside STARS, which could be said to be alive to a certain extent, and metabolizing (though they eventually drown in their own wastes)- how much context-sensitive stuff is happening in there? We know that the amount of 'metal' in a star (everything heavier than H and He) strongly affects nuclear burning, radiativity, star size, temperature, temporal scaling and many other things. Are things like this just simple facts, or can stellar-internal structures actually manipulate these?

 

Let's assume further parallels, and see where it gets us. What about the FSC? It CHANGES depending on the context. The energy of the system. The size of a nucleus. Probably the size/energy of the electron (that is, muons or tauons versus normal electrons). I've already noted the connectivity between the dimensionless numbers and another that comes out of the Phi relation. We have four numbers (3+1, which is a split very common in physics as noted by many researchers, and also the most basic Pascal Triangle triangular number pair!!) that relate to each other (with the caveat that they are the whole-number parts of the reciprocals) such that their sum is exactly 600, which is unexpected by itself, and whose individual pairings deviate from 300 by every other tetrahedral number (the ones that are sums of squares of odd integers).
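(As a quick check of that last parenthetical- this sketch is not part of the original post- a few lines of Python confirm that every other tetrahedral number, n(n+1)(n+2)/6 taken at odd n, is a sum of consecutive odd squares.)

[code]
# Illustrative check (not from the original post): every other tetrahedral number
# Te(n) = n(n+1)(n+2)/6, taken at odd n, equals a sum of consecutive odd squares
# 1^2 + 3^2 + ... + (2k-1)^2.

def tetrahedral(n):
    return n * (n + 1) * (n + 2) // 6

for k in range(1, 8):
    n = 2 * k - 1                                             # odd index
    odd_square_sum = sum((2 * i - 1) ** 2 for i in range(1, k + 1))
    print(n, tetrahedral(n), odd_square_sum)                  # e.g. 1 1 1, 3 10 10, 5 35 35, 7 84 84
    assert tetrahedral(n) == odd_square_sum
[/code]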

 

Now we bring in perspective. It is true that my observation has not captured what is to be done with the fractional parts of the reciprocals of the dimensionless constant variants. But this may be only because I'm looking at the system in 2D, that is, flat on the page, just as I'm writing here on my screen. This has been a problem for many reconstructions in different sciences. It occurs in linguistics- the phonological systems defining the sounds of a language have to be written flat on the page, and this has psychologically constrained both writers and readers from thinking 'bigger picture'. In fact the phonological systems are minimally 3D- a percentage of linguists have broken out of the 2D pen (won't say 'box', since 3D). In my own work I found that the meaning shifting in the sound symbolism of phonosemantically transparent ideophones depended on the internal mappings between phonemes and their features in a multidimensional model. What is really interesting here is that the model changes between different languages and language types, in coherent ways implying some sort of law set behind them. Kinda like Rubik's Cube, if you suddenly decide that a scrambled variant is now the 'new normal' and repaint it that way. Even the geometry of the system changes, as well as the numbers of primary elements within. But the changes are far from random or arbitrary- and they link to other features of the linguistic system. Even word order in sentences is tied to how words are stressed. Reminds me a lot of the genome.

 

In the genetic code as written on the flat page one usually finds a depiction which is a matrix that misses the extra dimension necessary to adequately represent the system, but luckily it was recognized early on that a cubic representation will do. Even so, this isn't the end of the analysis, since reordering the edge values of the cubic matrix, rather than using the same order for all three axes, maps much better to the actual associated physical and chemical properties. Quite a few people discovered this over the decades independently, including myself.
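(For concreteness, here is a minimal Python sketch- not from the original post- of that cubic depiction: the 64 codons arranged as a 4x4x4 array, one axis per codon position. The base orderings shown are arbitrary choices; permuting the order along an axis is exactly the kind of 'reordering of edge values' described above.)

[code]
# Illustrative sketch (not from the original post): the 64 codons as a 4 x 4 x 4
# cube, one axis per codon position. The base ordering along each axis is a free
# choice; permuting it rearranges which codons end up adjacent to one another.

def codon_cube(order=("U", "C", "A", "G")):
    """Nested 4x4x4 list: cube[i][j][k] is the codon order[i] + order[j] + order[k]."""
    return [[[b1 + b2 + b3 for b3 in order] for b2 in order] for b1 in order]

standard = codon_cube()                              # conventional U, C, A, G ordering
reordered = codon_cube(order=("A", "G", "U", "C"))   # an arbitrary alternative ordering

print(standard[0][0])     # ['UUU', 'UUC', 'UUA', 'UUG']
print(reordered[0][0])    # ['AAA', 'AAG', 'AAU', 'AAC']
[/code]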

 

And of course I laid out early on in my postings my work discovering the Pascal Triangle relationships for the atomic and nuclear periodic systems, and how, at least for the electronic side, it allows one to create tetrahedral models of the system from the idealized quantum perspective (not taking into account relativistic effects, which by the way can be taken as continuous and CONTEXTUAL).

 

A higher-dimensional geometrical bias appears to show up all over the place in Nature. We see this too now in the various String, M-, and Brane theories. Where does it end? How many dimensions are 'hidden'? What do they do? Are they just passively 'there' or do they play some active role? Even in language structures we see evidence of these: movement and extraction of forms from the string, to positions where they are no longer connected, leaving 'traces' to use the Chomskyan term.

 

Now we (gasp, FINALLY!) have the FSC system. What do we do about those nasty little fractional parts??? Flat on a page they make no sense, and people have struggled with the fractional portion of 1/137 for ages. What about in a higher dimension??? We have the tetrahedral number deviation from 300 for any pair of the four numbers. Tetrahedrality is also very strong in the periodic system, at both electronic and nuclear levels. Helium has four nucleons, and is the 'currency' of many nuclear transformations. DNA/RNA uses four bases. Buckminster Fuller, in his geometrical work, speculated that the ultimate structure of the universe derived from the tetrahedron, which is also the simplest of the classical Platonic solids, and unlike the others, is its own dual.

 

Let's map the FSC numbers, then, onto the vertices of a tetrahedron, perhaps as part of a still larger tetrahedral mapping taking in all the other dimensionless constants. Now, from a single vantage point relative to that tetrahedron, different vertices will generally have nonidentical values positionally, both in terms of each other versus the viewer, and between each other (orientation, depth, etc.). There are a small handful of 'privileged' perspectives, such that one will see orthogonality between the four vertices in terms of the edges connecting them (if transparent). But even here depth can't be ignored, so the perceived edges cannot all be the exact same length.

 

So in such a circumstance, can we find any evidence that the actual noninteger values of the numbers in the FSC proportion, versus the 2D 'ideal', could be derived as easily as by just altering one's perspective in viewing it? In an earlier posting I mentioned that if one takes a cube, with a plane running through it containing the center of the cube and two vertices, one could rotate the plane relative to the cube such that projections from the vertices onto the plane will be in a length relation 0, 1, 2, 3 (reminiscent of the numerator values of 1/3 charges in fermions). This requires at least two vertices (on opposite sides of the cube), starting either from the plane or perpendicular to it with equal distances on either side (i.e. an inverted T), with the rotation, CW or CCW, having a value of arctan(sqrt 27), or 79-something degrees off 90 or 270 in either direction, or off 0 or 180 in either direction. All the involved angles are relatively simple, though I don't remember the specifics (it usually takes a while to work it out on paper).

 

What would be the equivalent of doing something like this in a tetrahedral space? Note importantly, of course, that one can nicely nest a tetrahedron into a cube- and who knows whether other 3D or higher-dimensioned solids might be involved (for example the 5- or 10-tet compound inside a dodecahedron, linking the tet to Phi and the square root of 2- so both Golden and Silver Ratios- with an icosahedral intersection set). I found some time back that 'shadows' in the 5-tet system cast onto one tetrahedron land on places in the periodic system, modeled as a tetrahedron, where observed behaviors of elements tended to be anomalous!

 

Jess Tauber (and if you got through this without going all head-explodey you get a gold star, with apologies to those whose brains have to be wiped off the walls by indignant wives or variant significant others...)


Quoting pascal:

Don, exactly what is it that you think is generating the FSC physically,

if one assumes that your equations work? For me, in my own work,

I've hypothesized that it is something woven into the fabric of

space-time-energy-mass itself, perhaps relatable (there's that word again...)

to the Planck units.

 

In any case the numbers are unlikely to just pop out of nowhere.

What is making them go?

 

I really don't know. Probably some yet to be discovered mechanism at the quantum level.

 

Nature is extraordinarily efficient and seems to store and display information in ways that

prevent it from being lost. For instance, we can deduce both [math]\phi[/math] and

the Fibonacci sequence from a sunflower.
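(A minimal illustration- not part of the original post- of how [math]\phi[/math] falls out of the Fibonacci sequence: the ratio of consecutive terms converges to it, and the counts of a sunflower's seed spirals in the two directions are typically consecutive Fibonacci numbers such as 34 and 55.)

[code]
# Illustrative sketch (not from the original post): consecutive Fibonacci ratios
# converge to phi = (1 + sqrt(5)) / 2, the proportion associated with sunflower
# seed-spiral counts (e.g. 34 and 55 spirals in the two directions).

import math

phi = (1 + math.sqrt(5)) / 2

a, b = 1, 1
for _ in range(40):
    a, b = b, a + b          # advance one Fibonacci step

print(b / a, phi)            # the two values agree to roughly 16 significant digits
[/code]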


 

...Buckminster Fuller, in his geometrical work, speculated that the ultimate structure of the universe derived from the tetrahedron, which is also the simplest of the classical Platonic solids, and unlike the others, is its own dual.

...

Jess Tauber (and if you got through this without going all head-explodey you get a gold star, with apologies to those whose brains have to be wiped off the walls by indignant wives or variant significant others...)

 

i did a rather in-depth look at fuller's synergetics and, giving him every benefit of the doubt, found him completely full of baloney when it comes to associating geometry to physics, his domes notwithstanding. here's the thread: >> synergetics: explorations in the geometry of thinking


Not everyone gets everything right- look at Linus Pauling on vitamin C, or Mendeleev in his later insistence that there was an 'ether', or his refusal to accept the new findings that ultimately led to quantum theory after his passing. There is a gigantic graveyard out there of discarded ideas developed by the brightest minds of their times, rejected by their peers. But they wouldn't be remembered if they hadn't got SOME things right on the nose. I think the trick is never to completely commit to one's own hypotheses, to be always ready to amend or drop them when evidence starts to come in that things are other than you thought. Psychologically it seems many are incapable of this- they come up with something great and then get stuck. Partly this may be due to the way they treat their public image or 'face'. Once something is out there, in print or by word of mouth, they feel compelled to defend it to the death, and given the competitive nature of professional life, who could blame them? But what is more important, one's social/sociological status of the moment, or the 'truth', whatever that might be?

Fuller's treatment of the atomic nucleus was clever, but ultimately a failure. Even so, it inspired me to look deeper, which is how I was able to extend my Pascal-based tetrahedral model of the idealized quantum-level atomic electronic system to the nucleus. Others had had similar inspirations. Currently I'm still mulling over Pauling's nuclear spheron model, which, though completely rejected by nuclear physicists/chemists except for a couple of the more cranky diehards, does seem to have some interesting aspects that Pauling didn't notice (for example he missed the double-tetrahedral number properties of most of the (semi)magic numbers even though he had them sitting right in front of his face, in his own notebooks, which are available online). The idea of secondary, tertiary structural subdivision within a nucleus as it builds up nucleons is especially compelling. We already know that alpha particles are a step up from single electrons, positrons, hadrons, etc. hierarchically. We see in fission tendencies toward certain mass groupings over plain chaotic division. Some recent work suggests that the nucleus has some form of 3D (4D?) version of a Fibonacci spiral structure. Things like this would heavily constrain what a nucleus is capable of doing- not absolutely, but in terms of more or less strong tendencies, depending on the context. A lot of the natural world seems like that.

 

Now, Fuller's idea that the tetrahedron is the basic unit of the universe hasn't much to support it, I'll admit. All I did was state his hypothesis. I don't cite others' ideas as being supportive of my own unless I specifically state them to be as such. Yet absence of evidence as you know is not evidence of absence. Still, my own bias is that things will be much more complex than this, given what I have observed in the mathematical meshworks that seem to surround various physical phenomena. It may be that if there is a natural tendency towards geometrical coherence in Nature, then the simplest structures, such as tetrahedra, may also be the defaults, seen under 'all other factors being equal' circumstances. That doesn't mean that natural systems can't break out of the defaults. Certainly this is the case in language structure, in genomic structure, in the atomic system. Flexibility has its charms, too! Order and disorder helping to keep things running smoothly.

 

Jess Tauber

 

 


Not everyone gets everything right-

 

Well, of course not.

 

In science you formulate your hypothesis and test it against available data.

If it fits, then it is accepted.

Later, new data and new understanding come along, which leads to a new hypothesis in accordance with the new data.

This does not mean that the previous hypothesis was "wrong"; it was the best that could be done at the time.

 

Isaac Newton, a genius of his time, formulated his theory of gravity, later superseded by Einstein's relativity version.

And it is quite possible that Einstein's theory will be replaced by something else in the future.

 

That's how science progresses, without labelling people as being "wrong".


Yes, and Newton was a committed alchemist (some of his personality quirks have been attributed to his breathing various fumes, licking chemicals, and so on, activities that are still popular today). And Einstein's quest for a theory of everything based on the scanty data of the time, and his rejection of 'spooky' quantum phenomena now known to be true, come to mind. I was one of those who had hoped in 1989 that 'cold fusion' was real, and was for a short time similarly hopeful about the Italian 'Energy Catalyzer' in the past year. I have my own ideas about all that- I had contacted Rossi about my observations of the hidden Lucas numerical patterns in the nuclide production data of Miley. Now I'm convinced the guy might be incompetent at best, a charlatan at worst. Dunno about the Miley data now, since newer work along those lines produces different minima/maxima that disrupt my neat little scheme. OTOH competitiveness and secrecy are starting to show their ugly heads, and the newer data may be a smokescreen- people are rushing to the patent offices all over the place, and are attacking the work of everyone else. A real panoply going all the way from mainstream to fringe to pathological science, everyone falling over each other to get a slice of the perceived pie, showing the full range of communicative strategies, and stratagems. Because the structures of science themselves are negotiable, if the price is right... :-(

 

Jess Tauber

 


Not everyone gets everything right-

 

:lol: roger that.

 

... The idea of secondary, tertiary structural subdivision within a nucleus as it builds up nucleons is especially compelling. We already know that alpha particles are a step up from single electrons, positrons, hadrons, etc. hierarchically. We see in fission tendencies toward certain mass groupings over plain chaotic division. Some recent work suggests that the nucleus has some form of 3D (4D?) version of a Fibonacci spiral structure. Things like this would heavily constrain what a nucleus is capable of doing- not absolutely, but in terms of more or less strong tendencies, depending on the context. A lot of the natural world seems like that.

 

while i do read a lot of the nuanced physics discussions here, i have little to no competence in evaluating the validity of the assertions. my grasp is limited to a few areas of number theory. you might find my poking into fibonacci'esque structures here interesting, particularly crag's furtherings. >> fibonacci bricks

 

Now, Fuller's idea that the tetrahedron is the basic unit of the universe hasn't much to support it, I'll admit. All I did was state his hypothesis. I don't cite others' ideas as being supportive of my own unless I specifically state them to be as such. Yet absence of evidence as you know is not evidence of absence. Still, my own bias is that things will be much more complex than this, given what I have observed in the mathematical meshworks that seem to surround various physical phenomena. It may be that if there is a natural tendency towards geometrical coherence in Nature, then the simplest structures, such as tetrahedra, may also be the defaults, seen under 'all other factors being equal' circumstances. That doesn't mean that natural systems can't break out of the defaults. Certainly this is the case in language structure, in genomic structure, in the atomic system. Flexibility has its charms, too! Order and disorder helping to keep things running smoothly.

 

Jess Tauber

 

the final straw for me with fuller was his belief that base ten was the ultimate "nomenclature" because we have 10 fingers. while i gave considerable leeway to his language structure, in the end i could only conclude it is little more than word salad.


At least Fuller wasn't claiming that sitting underneath a tetrahedral pyramid would cure your baldness, or let you perform 'when the moment is right'. :-)

 

Jess Tauber

:lol:

perhaps not, but i found many of his claims in a similar class. in researching some of his stuff i found there is an ongoing "hobby" group of folks -i forget what they call themselves or their activity- who take standard cubic/orthogonal geometric systems/structures and translate them into fullerian tetrahedral terms. even so, i found no claims or evidence that these transforms shed any new light or insights on physics.

 

i did borrow some of fuller's transform ideas myself in developing a method for producing triangular tables from square tables; however, i produced just one version of an ever-increasing possible number of transforms. even then, i did not find any particularly new enlightenment, which is not to say such a one does not exist. see attached example below.


Interesting that you should mention this- in my posts I've described how I've found both Fibonacci and Lucas number mappings in the periodic table. These then come from (among others) the classical (1,1)-sided and sister (2,1)-sided Pascal Triangles respectively (though you can also get an upstream-shifted Fib series on the other side of the (2,1)). Then we have the whole triangle-based versus square-based biases in these two Pascal systems, for the third diagonal, and then tetrahedra versus square pyramids, for the fourth. Perhaps in some odd fashion these are mirror-images of each other? Remember, in the periodic system Fib associates with the leftmost half-orbital positions, and (less perfectly) Lucas associates with the rightmost, for first and last singlet or doublet valence electrons. Is there a method to this madness? Something missed?
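(For anyone who wants to see that connection concretely, here is a minimal Python sketch- not from the original post- of the standard shallow-diagonal sums: the (1,1) Pascal Triangle yields the Fibonacci numbers, while the (2,1) triangle yields a shifted Fibonacci series from its left edge and the Lucas numbers from its right edge.)

[code]
# Illustrative sketch (not from the original post): shallow-diagonal sums of the
# (1,1) Pascal Triangle give Fibonacci numbers; the (2,1) "Lucas" triangle gives a
# shifted Fibonacci series from one side and the Lucas numbers from the other.

def skew_pascal(left_edge, right_edge, nrows):
    """Pascal-like triangle with fixed left/right edge values below the apex."""
    rows = [[1]]
    for n in range(1, nrows):
        prev = rows[-1]
        rows.append([left_edge] + [prev[k - 1] + prev[k] for k in range(1, n)] + [right_edge])
    return rows

def shallow_diagonal_sums(rows):
    """Sum each shallow diagonal, reading in from the left edge."""
    return [sum(rows[d - k][k] for k in range(d // 2 + 1)) for d in range(len(rows))]

pascal_11 = skew_pascal(1, 1, 12)
pascal_21 = skew_pascal(2, 1, 12)
mirror_21 = [list(reversed(r)) for r in pascal_21]   # read the (2,1) triangle from its right edge

print(shallow_diagonal_sums(pascal_11))   # 1, 1, 2, 3, 5, 8, 13, ...   (Fibonacci)
print(shallow_diagonal_sums(pascal_21))   # 1, 2, 3, 5, 8, 13, 21, ...  (shifted Fibonacci)
print(shallow_diagonal_sums(mirror_21))   # 1, 1, 3, 4, 7, 11, 18, ...  (Lucas after the leading 1)
[/code]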

 

Jess Tauber

 

 


 

like physics, i'm not well versed in chemistry. perhaps this is more a topic for don to take up.

 

don, i hope i haven't ventured too far off-topic for your comfort. if so, let me know and pascal or i can start another thread to pursue some of this.

 

EDIT: pascal, perhaps a thread titled "number patterns in the periodic table"?


  • 1 month later...

As we have seen, each and every version of my counting function

remains extraordinarily accurate to at least [math]x=10^{15}[/math].

 

However, I do believe that the following version:

 

[math] \varpi(x)\approx\left(\left(\sqrt{\left(\left(1-\frac{1}{\left(\alpha*\pi*e+e\right)}\right)*x\right)}-\frac{1}{4}\right)^{2}-\frac{1}{16}\right)*\left(1-\frac{\alpha}{\left(6*\pi^{5}-\pi^{2}\right)}\right) [/math]

 

where:

 

[math] \alpha=\left(\left(A^{-1}*\pi*e+e\right)*\left(\pi^{e}+e^{\left(\frac{-\pi}{2}\right)}\right)-\frac{\left(\left(\pi^e+e^{\frac{-\pi}{2}}+4+\frac{5}{16}\right)*\left(\ln\left(x\right)\right)^{-1}+1\right)}{\left(6*\pi^{5}*e^{2}-2*e^{2}\right)}\right)^{-1} [/math]

 

and which results in the following table:

 

[math]x[/math]_______________________[math]\varpi(x)[/math]_________________ [math] B(x_{F\alpha}) [/math]_________Difference

10_______________________3______________________5___________________2

100______________________57_____________________60__________________3

1,000____________________622____________________628_________________6

10,000___________________6,357__________________6,364________________7

100,000__________________63,889_________________63,910_______________21

1,000,000________________639,946________________639,963______________17

10,000,000_______________6,402,325______________6,402,362_____________37

100,000,000______________64,032,121_____________64,032,273____________152

1,000,000,000____________640,349,979____________640,350,090____________111

10,000,000,000___________6,403,587,409__________6,403,587,408__________-1

100,000,000,000__________64,036,148,166_________64,036,147,620_________-546

1,000,000,000,000________640,362,343,980________640,362,340,975________-3005

10,000,000,000,000_______6,403,626,146,905______6,403,626,142,352_______-4554

100,000,000,000,000______64,036,270,046,655_____64,036,270,047,131_______476

200,000,000,000,000______128,072,542,422,652____128,072,542,422,781______129

300,000,000,000,000______192,108,815,175,881____192,108,815,178,717______2836

400,000,000,000,000______256,145,088,132,145____256,145,088,130,891_____-1254

500,000,000,000,000______320,181,361,209,667____320,181,361,208,163_____-1504

600,000,000,000,000______384,217,634,373,721____384,217,634,374,108______387

700,000,000,000,000______448,253,907,613,837____448,253,907,607,119_____-6718

800,000,000,000,000______512,290,180,895,369____512,290,180,893,137_____-2232

900,000,000,000,000______576,326,454,221,727____576,326,454,222,404______677

1,000,000,000,000,000____640,362,727,589,917____640,362,727,587,828_____-2089

 

will maintain its incredible accuracy all the way into infinity.
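(For anyone who wants to reproduce the [math] B(x_{F\alpha}) [/math] column, here is a minimal Python sketch- not part of the original post- that simply evaluates the two formulas above. The value of A is the averaged figure quoted further down in this post, and the arithmetic is ordinary double precision, so the last digit or two can differ from the table.)

[code]
# Illustrative sketch (not from the original post): direct evaluation of the
# counting-function approximation above. A is the averaged value quoted later in
# this post; results are double precision, so trailing digits may differ slightly.

from math import pi, e, log, sqrt

A = 2.566543832171388844467529    # averaged constant quoted later in the post

def alpha(x):
    """The x-dependent alpha (the version with the 1/ln(x) correction term)."""
    base = pi**e + e**(-pi / 2)
    correction = ((base + 4 + 5 / 16) / log(x) + 1) / (6 * pi**5 * e**2 - 2 * e**2)
    return 1.0 / ((A**-1 * pi * e + e) * base - correction)

def B(x):
    """The counting-function approximation itself."""
    a = alpha(x)
    core = (sqrt((1 - 1 / (a * pi * e + e)) * x) - 0.25)**2 - 1 / 16
    return core * (1 - a / (6 * pi**5 - pi**2))

for x in [10**k for k in range(1, 16)]:
    print(f"{x:>20,d}  {B(x):>24,.0f}")
[/code]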

 

Here's why...

 

If we use the last 10 values of [math]x[/math] and [math]\varpi(x)[/math] to solve for [math]A[/math],

and then inject those values of [math]A[/math] into the expression:

 

[math] \alpha=\left(\left(A^{-1}*\pi*e+e\right)*\left(\pi^{e}+e^{\left(\frac{-\pi}{2}\right)}\right)-\frac{1}{\left(6*\pi^{5}*e^{2}-2*e^{2}\right)}\right)^{-1} [/math]

 

the results will be as follows:

 

 

________[math]x[/math]_____________________[math]\varpi(x)[/math]________________[math]A[/math]_______________[math]\alpha^{-1}[/math]_________

100,000,000,000,000______64,036,270,046,655_____2.5665438294154____137.03599916477

200,000,000,000,000______128,072,542,422,652____2.5665438318173____137.03599909419

300,000,000,000,000______192,108,815,175,881____2.5665438266710____137.03599924542

400,000,000,000,000______256,145,088,132,145____2.5665438340142____137.03599902963

500,000,000,000,000______320,181,361,209,667____2.5665438339138____137.03599903258

600,000,000,000,000______384,217,634,373,721____2.5665438318063____137.03599909451

700,000,000,000,000______448,253,907,613,837____2.5665438377183____137.03599892078

800,000,000,000,000______512,290,180,895,369____2.5665438337865____137.03599903632

900,000,000,000,000______576,326,454,221,727____2.5665438317301____137.03599909675

1,000,000,000,000,000____640,362,727,589,917____2.5665438334003____137.03599904767

 

Taking the average of the [math]A[/math] column results in: [math]A=2.566543832[/math],

which is very good considering that we only used 10 samples from relatively low values of [math]\varpi(x)[/math],

and taking the average of the [math]\alpha^{-1}[/math] column results in: [math]\alpha^{-1}=137.035999076262[/math]

which is very close to the Gabrielse value, and indeed, matches the latest Codata value perfectly!
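(Below is a minimal Python sketch- not part of the original post- of the inversion step described above: for a given pair of [math]x[/math] and [math]\varpi(x)[/math] from the table, solve B(x; A) = [math]\varpi(x)[/math] for A by bisection, then feed that A into the x-independent expression for [math]\alpha[/math] just above. The bracket [2, 3] is an assumption that happens to contain the root, and the arithmetic is double precision, so the recovered digits of A can differ slightly from the table.)

[code]
# Illustrative sketch (not from the original post): recover A from one (x, varpi(x))
# pair by bisection on B(x; A) = varpi(x), then convert A to 1/alpha using the
# x-independent expression above. The bracket [2, 3] is assumed to contain the root.

from math import pi, e, log, sqrt

def B(x, A):
    a = 1.0 / ((A**-1 * pi * e + e) * (pi**e + e**(-pi / 2))
               - ((pi**e + e**(-pi / 2) + 4 + 5 / 16) / log(x) + 1) / (6 * pi**5 * e**2 - 2 * e**2))
    return ((sqrt((1 - 1 / (a * pi * e + e)) * x) - 0.25)**2 - 1 / 16) * (1 - a / (6 * pi**5 - pi**2))

def solve_A(x, varpi, lo=2.0, hi=3.0, iters=100):
    """Bisection; B(x; A) increases with A over this bracket."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if B(x, mid) < varpi:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def inverse_alpha(A):
    """The x-independent expression for 1/alpha."""
    return (A**-1 * pi * e + e) * (pi**e + e**(-pi / 2)) - 1 / (6 * pi**5 * e**2 - 2 * e**2)

A = solve_A(1_000_000_000_000_000, 640_362_727_589_917)   # last row of the table
print(A, inverse_alpha(A))
[/code]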

 

So, in theory, if we had sufficiently large values of [math]\varpi(x)[/math], say, up to about [math]x=10^{20}[/math] or so...

then we can simply take the average of sufficiently many random samples of [math]A[/math] to get

[math]A=2.566543832171388844467529...[/math] to as many decimal places as we like,

and thereby generate the entire sequence of primes in sequential order!

 

It's essentially the same principle as flipping a coin sufficiently many times

and averaging out the results in order to get as close to [math]\frac{1}{2}[/math] as we like.

 

I really like the idea and principle of using randomness to generate the primes,

which are, after all, a random sequence. It's kind of like fighting fire with fire.

 

Don.


  • 2 weeks later...

In order to demonstrate the uncanny accuracy of my equations,

let's extend them several decimal places beyond [math]\alpha^{-1}= 137.035999084(51)[/math],

which is by far the most precise experimental value of the fine structure constant ever determined.

 

The results are as follows:

 

[math]\alpha_b^{-1}=137.03599913476=(A^{-1}*\pi*e+e)*(\pi^{e}+e^{(\frac{-\pi}{2})})-\frac{1}{((M_n/M_e)*e^{2}-2*e^{\frac{5}{2}})}[/math]

 

[math]\alpha_s^{-1}=137.03599908378=(A^{-1}*\pi*e+e)*(\pi^{e}+e^{(\frac{-\pi}{2})})-\frac{1}{(6*\pi^{5}*e^{2}-2*e^{2})}[/math]

 

[math]\alpha_L^{-1}=137.03599903278=(A^{-1}*\pi*e+e)*(\pi^{e}+e^{(\frac{-\pi}{2})})-\frac{1}{(\mu*e^{2}-g*e^{\frac{5}{2}})}[/math]

 

where

 

[math]M_n/M_e=1838.6836605(11)[/math] is the neutron-electron mass ratio,

[math]\mu=1836.15267245(75)[/math] is the proton to electron mass ratio, and

[math]g=2.00231930436146(56)[/math] is the electron magnetic moment g-factor.
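(Here is a minimal Python sketch- not part of the original post- that just plugs the quoted constants into the three expressions above. It assumes the same averaged A as before; the digits printed depend on the floating-point precision used.)

[code]
# Illustrative sketch (not from the original post): plugging the quoted constants
# into the three expressions above, using the averaged A from earlier in the thread.
# Printed digits depend on the floating-point precision used.

from math import pi, e

A = 2.566543832171388844467529    # averaged value quoted earlier in the thread
Mn_Me = 1838.6836605              # neutron-electron mass ratio
mu = 1836.15267245                # proton-electron mass ratio
g = 2.00231930436146              # electron magnetic moment g-factor

common = (A**-1 * pi * e + e) * (pi**e + e**(-pi / 2))

alpha_b_inv = common - 1 / (Mn_Me * e**2 - 2 * e**(5 / 2))
alpha_s_inv = common - 1 / (6 * pi**5 * e**2 - 2 * e**2)
alpha_L_inv = common - 1 / (mu * e**2 - g * e**(5 / 2))

print(alpha_b_inv, alpha_s_inv, alpha_L_inv)
[/code]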

 

Now, note that adding .00000000023 to each of the above values gives us:

 

[math]\alpha_b^{-1}=137.03599913499[/math]

 

[math]\alpha_s^{-1}=137.03599908401[/math] and

 

[math]\alpha_L^{-1}=137.03599903301[/math]

 

which match, with absolute precision, not only

the center value, but the "margin of error" as well!

 

So clearly, these results demonstrate that what Professor Gabrielse thinks is

a "margin of error", is in fact an inherent upper and lower bound, which is really wierd,

because Gabrielse's experiment did not involve protons or neutrons in any way,

but only a single electron caught in a "Penning trap".

 

Don.


Quoting "Pincho Paxton",

I believe that taking the mass of an electron from the orbit of another particle is not accurate.

In order to determine the value of the fine structure constant,

Professor Gabrielse measured the electron magnetic moment,

not the electron mass as you seem to think.

 

Moreover, Gabrielse's ingenious experiments are now universally regarded as

the most accurate verification of any prediction in the entire history of physics!

 

Now, when the results of physical experiments become that accurate,

then it is time for mathematicians to "rise to the occasion" and develop constructs

which not only describe those results, but allow new results to be predicted as well.

 

As it turns out, my counting function for non-trivial polygonal numbers,

which is ranked "first page" by Google and is referenced in

the On-Line Encyclopedia of Integer Sequences, does exactly that!

 

It predicts that the fine structure constant will remain at essentially its

present value of 137.035999084(51), regardless of any further advancements

in Penning trap or Feynman diagram calculation technology.

 

Most importantly however, my counting function proves that the fine structure constant is

required, necessary, and utterly indispensable if the density of non-trivial polygonals is

to be approximated to the greatest possible degree of accuracy.

 

The implications of this are enormous, for if the possibility of life is

contingent on the value of the fine structure constant, which in turn is

based on "eternal" mathematical principles, then life itself must be,

in some sense, "eternal"!

 

Don.

