At the Strings Conference in July 2000, theorists were asked which mysteries still need to be unraveled in the 21st century. Participants were invited to help formulate the ten most important unsolved problems in fundamental physics, which were ultimately selected and ranked by a distinguished jury of David Gross, Edward Witten and Michael Duff. No question was worthier than the first two problems posed by Gross and Witten respectively: #1: *Are all (measurable) dimensionless parameters that characterize the physical universe calculable in principle, or are some merely determined by historical or quantum mechanical coincidences and unpredictable?* #2: *How can quantum gravity help explain the origin of the universe?*

A newspaper article about these millennial mysteries made some interesting comments on Question #1. Perhaps Einstein actually “put it more succinctly: *Did God have a choice in creating the universe?*” – which also sums up Dilemma #2. While the Eternal certainly had a “choice” in creation, the following arguments will conclude that the answer to Einstein’s question is an emphatic “no.” For a full range of fundamental physical parameters proves to be calculable, to unprecedented precision, within a *purely dimensionless universal system* that includes, of course, a literal “*monolith*.”

Likewise, the article went on to question whether the speed of light, Planck’s constant, and electric charge are determined arbitrarily – “or must the values be what they are because of some deep, hidden logic.” It framed this as a riddle involving a mysterious number called Alpha: “If you square the electron’s charge and then divide by the speed of light times Planck’s (‘reduced’) constant (multiplied by 4π times the vacuum permittivity), all (metric) dimensions (of mass, time and distance) cancel out, yielding what is called a ‘pure number’ – alpha, which is just over 1/137. But why isn’t it exactly 1/137 or some other value entirely? Physicists and even mystics have tried in vain to explain why.”
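The recipe in that riddle is easy to check numerically. A minimal sketch in Python, using the 2006 CODATA central values for e and ħ (c and ε₀ were exact by definition in the pre-2019 SI); the specific numbers here are illustrative inputs, not part of the article:

```python
import math

c = 299792458.0            # speed of light, m/s (exact, 1983 SI definition)
mu0 = 4e-7 * math.pi       # vacuum permeability (exact, pre-2019 SI)
eps0 = 1.0 / (mu0 * c**2)  # vacuum permittivity, F/m
hbar = 1.054571628e-34     # reduced Planck constant, J*s (2006 CODATA)
e = 1.602176487e-19        # elementary charge, C (2006 CODATA)

# Square the charge, divide by c times hbar times 4*pi*eps0:
# every metric unit cancels, leaving a pure number.
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha)      # ~ 0.00729735..., i.e. about 1/137.036
print(1 / alpha)  # ~ 137.036
```

Note that since μ₀ and c are exact here, 1/alpha simplifies to 10⁷ħ/(c·e²): the “pure number” depends only on the measured ħ and e.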

That is, while constants such as a fundamental particle mass can be expressed as a dimensionless relationship relative to the Planck scale, or as a ratio to some more precisely known or available unit of mass, the inverse of the electromagnetic coupling constant alpha is inherently dimensionless as the pure *‘Fine structure number’ a* ≈ 137.036. Even assuming a unique, immutably discrete or *exact* fine structure number exists as a “literal constant”, its value still has to be confirmed empirically as a quotient of two *inexactly* determinable ‘metric constants’, h-bar and the electric charge e (where the speed of light c is exactly *defined*, per the 1983 SI convention, as an integer number of meters per second).

Although this enigma has been profound almost from the start, I was utterly astonished, reading this article in my morning paper, that a numerological problem of invariance merited such distinction from eminent modern authorities. For I had been obsessed with the fs number in the context of my colleague A. J. Meyer’s model for some years, but had learned to accept its experimental determination in practice, having pondered the dimensionless problem repeatedly in vain. Gross’s question thus served as a trigger that shook my complacency: I recognized a unique position as the only fellow who could provide a categorically complete and consistent answer in the context of Meyer’s most important fundamental parameter. Still, my pretentious instincts led to two months of silly intellectual posturing until I sensibly repeated a simple procedure explored a few years earlier. I merely **looked** at the result using the 98-00 CODATA value of *a*, and the solution that followed immediately struck with full heuristic force.

This is because the fine structure ratio effectively quantizes (via h-bar) the electromagnetic coupling between a discrete unit of electric charge (e) and a photon of light, in the same sense that the integer 241 is discretely ‘quantized’ compared to the ‘broken continuum’ between it and 240 or 242. One can easily see what this means by taking another integer, 203, and subtracting from it the base-2 logarithm of the square of 2π. Next, add the reciprocal of 241 to the resulting number, and multiply that sum by the natural logarithm of 2. It follows that this pure calculation of the fine structure number equals exactly

**137.0359996502301…** – which is given here to 16 digits, but can be calculated to any number of decimal places.
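The recipe above amounts to the expression (203 − log₂((2π)²) + 1/241) · ln 2, which anyone can evaluate; a minimal sketch in Python (the constants 203, 241, 2π and ln 2 are exactly those named in the text):

```python
import math

# (203 - log2((2*pi)^2) + 1/241) * ln(2)
two_pi_squared = (2 * math.pi) ** 2
fs_number = (203 - math.log2(two_pi_squared) + 1 / 241) * math.log(2)

print(f"{fs_number:.13f}")  # ~ 137.0359996502..., limited only by double precision
```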

In comparison, given the experimental uncertainty in h-bar and e, the NIST value varies up or down around the middle ‘6’ of the ‘965’ in the immutable sequence defined above. The following table gives the values of h-bar, e, their calculated ratio as *a*, and the actual NIST choice for *a* in each year of their archives, as well as the 1973 CODATA, where the standard two-digit ± experimental uncertainty is shown bold in parentheses (an ‘x’ marks digits beyond the published precision).

| Year | ħ = Nh×10^-34 J·s | e = Ne×10^-19 C | calculated *a* (from ħ/e²) | NIST value ±(**SD**) |
|------|-------------------|-----------------|----------------------------|----------------------|
| 2006 | 1.054571628(0**53**) | 1.602176487(0**40**) | 137.035999**6**61 | 137.035999679(0**94**) |
| 2002 | 1.05457168x(**18**x) | 1.60217653x(**14**x) | 137.035999**0**62 | 137.03599911x(**46**x) |
| 1998 | 1.054571596(0**82**) | 1.602176462(0**63**) | 137.035999**7**79 | 137.03599976x(**50**x) |
| 1986 | 1.05457266x(**63**x) | 1.60217733x(**49**x) | 137.0359**8**9558 | 137.0359895xx(**61**xx) |
| 1973 | 1.0545887xx(**57**xx) | 1.6021892xx(**46**xx) | 137.03**6**043335 | 137.03604x(**11**x) |
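As a cross-check, the table’s calculated *a* column can be reproduced from each year’s central values of ħ and e alone; a minimal Python sketch (pre-2019 SI, where c and μ₀ are exact, so *a* = 10⁷ħ/(c·e²); unknown trailing ‘x’ digits are taken as zero):

```python
# Reproduce the calculated column: with c exact and eps0 = 1/(mu0*c^2),
# mu0 = 4*pi*1e-7 (pre-2019 SI), 1/alpha = 4*pi*eps0*hbar*c/e^2
# reduces to 1e7 * hbar / (c * e^2).
C = 299792458.0  # speed of light, m/s (exact)

codata = {  # year: (hbar / 1e-34 J*s, e / 1e-19 C), central values only
    2006: (1.054571628, 1.602176487),
    2002: (1.054571680, 1.602176530),
    1998: (1.054571596, 1.602176462),
    1986: (1.054572660, 1.602177330),
    1973: (1.054588700, 1.602189200),
}

results = {
    year: 1e7 * (h * 1e-34) / (C * (e * 1e-19) ** 2)
    for year, (h, e) in codata.items()
}
for year in sorted(results, reverse=True):
    print(f"{year}: calculated a = {results[year]:.9f}")
```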

So it appears that the NIST choice is roughly determined by the measured values for ħ and e alone. However, as explained at http://physics.nist.gov/cuu/Constants/alpha.html, in the 1980s interest shifted to a new approach that allows direct determination of *a* by exploiting the quantum Hall effect, confirmed independently by both theory and experiment on the electron magnetic moment anomaly, thereby reducing the already fine-tuned uncertainty. Nevertheless, it took 20 years before an improved measurement of the magnetic moment *g*/2 factor was published in mid-2006, where this group (led by Gabrielse at Harvard) provided a first estimate of *a* as (A:) 137.035999710(0**96**) – which explains the much lower uncertainty in the new NIST listing compared to that in ħ and e. Recently, however, a numerical error was discovered in the initial QED calculation of (A:), and the corrected second paper (B:) shifted the value of *a* to (B:) 137.035999070(0**98**).

Although it reflects an almost identically small uncertainty, this assessment lies well outside the NIST value, which is consistent with the h-bar and elementary charge estimates determined independently by different experiments. NIST has three years to resolve this, but in the meantime faces an awkward irony, as at least the ’06 choices for h-bar and e seem to skew slightly toward the expected fit to *a*! For example, fitting the last three digits of the ’06 data for ħ and e to our pure fs number requires only an imperceptible adjustment to e alone, taking the ratio h628/e487.065. Had the QED error been corrected prior to the actual NIST publication in 2007, it could just as easily have been evenly adjusted to h626/e489; though the coherence of the last three digits of *a* with the comparable ’02 and ’98 data would then be questionable. In any case, comparable error reduction for h and e will require far larger improvements in several experimental designs before this issue is definitively resolved.

But even then, no matter how “precisely” the metric values are measured, they still fall infinitely short of “literal exactness,” while our pure fs number matches the current h628/e487 values fairly closely. On the first point, I recently discovered that a mathematician named James Gilson (see ) has also devised a pure number = 137.0359997867…, closer to the revised 98-01 standard. Gilson further claims to calculate numerous parameters of the Standard Model, such as the dimensionless ratio between the masses of the weak Z and W bosons. But I know he could never construct a single equivalence capable of deriving the Z and/or W masses per se from his number as precisely as the masses confirmed empirically (see paper referenced in resource box), which themselves result from a single superordinate dimensionless tautology.

For it takes the numerical discreteness of the fraction 1/241, in relation to the *Higgs fields*, __to construct__ *physically meaningful dimensionless equations*. Taking Gilson’s numerology, or the refined empirical value of Gabrielse et al., for the fs number instead would destroy that discreteness, the precise self-consistency, and the very ability to *write* a meaningful dimensionless equation! By contrast, it is perhaps not too surprising that once I literally “found” the integer 241 and derived the exact fine structure number from the resulting “monolith number”, it took me only about two weeks to calculate all six quark masses using truly dimensionless analysis and various fine-structure relations.

Ultimately, though, we are not really talking about the fine structure number per se, or even the integer 137; rather, the result *definitively answers* the Big Question. For these “dimensionless parameters characterizing the physical universe” (including alpha) are ratios between selected metric parameters that otherwise lack a single unified dimensionless mapping system from which metric parameters such as particle masses can be calculated from theorems. The “Standard Model” gives you a single system of parameters, but **no** means to calculate or __predict__ each and/or all within a single system – thus the experimental parameters are entered arbitrarily by hand.

A final irony: I am doomed to be dismissed as a “numerologist” by “experimentalists” who consistently fail to recognize hard empirical evidence that quark, Higgs, or hadron masses can be calculated exactly, including, to the current standard of accuracy, the heaviest mass known in high-energy physics (the Z). Well, on the contrary, dumb ghouls: empirical confirmation is just the final cherry the chef places on top before presenting the “proof of the pudding”, which no sentient being could resist merely because he didn’t assemble it himself and instead whips up a mock mess that bears no resemblance to the real thing. For the base of this pudding is made from melons I call mumbers, which are really just numbers, pure and simple!

Thanks to Sean Sheeter.