Science Forums

7 Reasons To Abandon Quantum Mechanics-And Embrace This New Theory


andrewgray


OK,

 

Here are the miniBOONE diagrams from Cosmic Variance

 

Setup:

miniboone.gif

Data:

miniboone2.jpg

So let me make sure that I am seeing this correctly. In the first column of the graph, the background (BG) event count is about 200 (per .1 GeV) and the miniboone event count was about 260. This leaves about 60 events vs. 200 background noise. Correct? So in the first column, we are talking about the background being 3 times the supposed reactor event count, correct?

 

In the third column, the background is about 5 times the reactor events. Correct?

 

Then, starting with the 4th column, the background measurements and the miniboone measurements become indistinguishable. That is, no reactor events can be seen at all statistically in columns 4-9. Correct?
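As a rough numerical cross-check of the counts being discussed here, a naive Poisson significance for an excess over a known background is (N_obs - N_bg)/sqrt(N_bg). The counts below are approximate values read off the plot, and the formula ignores systematic uncertainties, so this is only a sketch:

```python
import math

def excess_significance(observed, background):
    """Naive Poisson significance of an excess over a known background:
    (N_obs - N_bg) / sqrt(N_bg). Real analyses use likelihood fits and
    include systematic uncertainties; this is a back-of-envelope check."""
    return (observed - background) / math.sqrt(background)

# Approximate counts read off the first bin of the plot:
sig = excess_significance(260, 200)
print(f"{sig:.1f} sigma")  # ~4.2 sigma for a 60-event excess over 200
```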

 

So, if we actually plot the reactor data on the graph, it might look like this (shown in red):

miniboone3.jpg

 

So we see that most of the events are way below background noise levels, and that electrons and muons are "mysteriously appearing out of nowhere" more than they are supposedly appearing from the reactor neutrinos.

 

Dr Ray goes on to say that this experiment disagrees with the "LSND" experiment, and that . . .

 

Quote from Ray:

If LSND’s observation is found to be a true fact of nature, the Standard Model of physics cannot fully accommodate/explain neutrino interactions! This “breaking” of the Standard Model is very exciting to physicists, and indicates there is new physics that we haven’t previously thought possible. Many things could be true - there could be new allowed interactions for neutrinos (Lorentz Violation, CP/CPT violation, the list goes on!), or there could be additional particles - sterile neutrinos, which don’t interact with other matter but can only be seen through mixing with other neutrinos.

 

And we see that we are probably on the verge of adding "sterile neutrinos" to the theory to accommodate new data.

 

However, confirmation bias may keep "sterile neutrinos" out for a time, while other experimentalists try to discredit LSND.

 

Andrew A. Gray


So let me make sure that I am seeing this correctly. In the first column of the graph, the background (BG) event count is about 200 (per .1 GeV) and the miniboone event count was about 260. This leaves about 60 events vs. 200 background noise. Correct? So in the first column, we are talking about the background being 3 times the supposed reactor event count, correct?

 

I think you misunderstand. The background is the total number of electrons and muons being dumped (along with neutrinos) into the detector. This background is well known theoretically and well measured experimentally. The extra 60 or so events make up electrons/muons that have appeared through neutrino decays. It's also worth noting that the background is NOT noise.

 

So we see that most of the events are way below background noise levels, and that electrons and muons are "mysteriously appearing out of nowhere" more than they are supposedly appearing from the reactor neutrinos.

 

Again, you misunderstand what a background is. It is not noise; it's the standard electrons/muons from the beam that hit the detector.

 

Dr Ray goes on to say that this experiment disagrees with the "LSND" experiment, and that . . .

 

And we see that we are probably on the verge of adding "sterile neutrinos" to the theory to accommodate new data.

 

However, confirmation bias may keep "sterile neutrinos" out for a time, while other experimentalists try to discredit LSND.

 

I think you misunderstand again. LSND had a very small signal that they thought MIGHT indicate new neutrino types. MiniBOONE was designed to see if they could obtain the same results - they did not, and their signal and sheer volume of data put to rest the issues LSND had brought up. Hence, we don't need sterile neutrinos or anything else to explain the current data. Once again (unfortunately) the standard model prevails.

 

Also, as a side note - getting rid of neutrinos means getting rid of conservation of momentum, conservation of angular momentum, conservation of energy, etc. I'm not sure that's prudent.

-Will


OK,

 

Thanks for all the info.

 

Will, I am not denying that the beryllium reactor is causing scintillation events in the detector. All I am saying is that I doubt that the events are caused by particles going through 450 meters of dirt.

 

8 Billion eV protons are smashed into beryllium. These protons have huge pulsation frequencies! If we use De Broglie's approximation for the proton frequencies we get that the proton pulsations are at about

 

[imath]1 \times 10^{24}[/imath] Hz

 

This is to be compared to gamma ray frequencies of about

 

[imath]1 \times 10^{22}[/imath] Hz

 

So the potential for Ultra-High frequency gamma rays is serious. I would actually believe that UH-gamma rays could pass through 450 meters of dirt and cause scintillations in the detectors. But I must admit, I remain very skeptical that particles can do this.
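The frequency estimates quoted above follow from f = E/h. A quick numerical check (constants hard-coded to keep the sketch self-contained; the 8 GeV beam energy is from the setup described here, while the gamma-ray energy is simply whatever corresponds to about 10^22 Hz):

```python
# Check the f = E/h frequency estimates.
h  = 6.62607015e-34    # Planck constant, J*s
eV = 1.602176634e-19   # one electron-volt in joules

def frequency_from_energy(energy_eV):
    """De Broglie / Planck frequency associated with an energy, f = E/h."""
    return energy_eV * eV / h

f_proton = frequency_from_energy(8e9)    # 8 GeV beam proton
f_gamma  = frequency_from_energy(4.1e7)  # ~41 MeV gamma ray

print(f"{f_proton:.2e} Hz")  # ~1.9e24 Hz, i.e. the 10^24 Hz estimate
print(f"{f_gamma:.2e} Hz")   # ~1e22 Hz, a typical hard gamma ray
```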

 

And since the official neutrino theory seems to be a mish-mash of conjectures that will probably have to change again in the not too distant future, I think that the truly logical mind would at least be open to the possibility that these events in the detector were caused by UH-gamma rays easily going through the dirt.

 

Now to some of your points.

 

Again, you misunderstand what a background is. It is not noise, its the standard electrons/muons from the beam that hit the detector.

 

What I was saying was that the "background" events are what you get when the beryllium reaction beam is turned off.

 

Is that not true? The only reason that I call it "noise" is from the tradition in electronics of calling the signal with everything turned off, "noise". We can call it just "background" if you like. OK with me.

 

But you seem to be implying that the "background" originates from the beam. (??) Are you saying that the "background" comes from when the beam is ON or when it is turned OFF? Perhaps you are confused a bit.

 

"Background", I believe, comes from decays from the earth around the detector and possibly from UH-gamma rays from the sun, even while the beam is turned off.

 

getting rid of neutrinos means getting rid of conservation of momentum, conservation of angular momentum, conservation of energy, etc.

 

Only during microscopic tunneling. The two choices, it seems to me, are 1) the neutrino hypothesis, or 2) non-conservation of E and P during tunneling. Usually, E and P would be conserved on the time average, and macroscopically, E and P conservation would still be a law.

 

Andrew A. Gray


And since the official neutrino theory seems to be a mish-mash of conjectures that will probably have to change again in the not too distant future

 

Neutrino theory hasn't changed since Salam, Glashow, and Weinberg wrote down their theory of the weak interaction (with the exception of the possible addition of a Majorana mass, which doesn't really affect the theory). It is far from a mish-mash of conjectures; it's quite elegant.

 

What I was saying was that the "background" events are what you get when the beryllium reaction beam is turned off.

 

I was under the impression that the majority of the background was from pions and kaons in the decay pipe for the experiment. Hence, turning off the beam would remove the background.

 

Also, in particle physics the term background usually refers to all the extra crap the beam produces that your detectors also pick up. This is distinct from noise in the detectors, which can be made arbitrarily small by averaging over many, many events.

 

Only during microscopic tunneling. The two choices, it seems to me, are 1) the neutrino hypothesis, or 2) non-conservation of E and P during tunneling. Usually, E and P would be conserved on the time average, and macroscopically, E and P conservation would still be a law.

 

But if we get rid of the neutrinos, every time there is a weak decay we lose energy, momentum, and angular momentum. Hence, the time average doesn't lead to conservation but in fact makes the problem worse.

 

Also, I fear we are pulling your thread off topic- this should be about discussing your theory, right? Perhaps we should start another thread on neutrinos?

-Will


OK, good enough. Thanks for all the neutrino information.

 

Getting back, there were just a few loose ends in our discussion of Thermal Radiation.

 

Will had made a comment about reflections, which made me realize that Max Planck, the father of QM, had a self-inconsistency in his original theory of thermal (blackbody) radiation.

 

I am quite certain that Planck's original theory was a "quantized extension" of the Rayleigh-Jeans theory of blackbody cavities.

 

I am certain that Rayleigh-Jeans counted standing wave modes in a cavity and integrated the average energy:

 

[math]\bar \epsilon = \frac{\int_0^{\infty}A \epsilon e ^{- \epsilon / kT } d \epsilon }{\int_0^{\infty}Ae ^{- \epsilon / kT } d \epsilon} = kT [/math]

 

while Planck assumed discrete energy levels and simply changed the integral to a sum:

 

[math]\bar \epsilon = \frac{\sum_{n=0}^{\infty} n h \nu \, e ^{- n h \nu / kT }}{\sum_{n=0}^{\infty} e ^{- n h \nu / kT }} = \frac{h \nu}{e^{h \nu / kT} - 1} [/math]

 

where the energy [imath]\epsilon[/imath] was no longer continuous but only [imath]nh \nu[/imath] where [imath]n=0,1,2, . . .[/imath].

 

So Planck simply "quantized" Rayleigh-Jeans. Thus, Planck's theory was a "quantum extension" of Rayleigh-Jeans. Planck incorporated all the original Rayleigh-Jeans assumptions up until he took the sum instead of a continuous integral.
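The integral-to-sum step can be checked numerically: evaluating Planck's discrete average directly and comparing it to the closed form, and to the Rayleigh-Jeans value kT in the low-frequency limit (units are arbitrary, with kT = 1):

```python
import math

def planck_average_energy(h_nu, kT, n_terms=5000):
    """Planck's discrete average: sum over energies n*h*nu, n = 0, 1, 2, ..."""
    num = sum(n * h_nu * math.exp(-n * h_nu / kT) for n in range(n_terms))
    den = sum(math.exp(-n * h_nu / kT) for n in range(n_terms))
    return num / den

def planck_closed_form(h_nu, kT):
    """The closed-form result, h*nu / (exp(h*nu/kT) - 1)."""
    return h_nu / (math.exp(h_nu / kT) - 1.0)

kT = 1.0
print(planck_average_energy(0.5, kT))   # matches the closed form below
print(planck_closed_form(0.5, kT))      # ~0.7707
# For h*nu << kT the discrete sum approaches kT, recovering Rayleigh-Jeans:
print(planck_average_energy(0.05, kT))  # ~0.975, close to kT = 1
```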

 

But that has philosophical trouble, it seems to me. The original Rayleigh-Jeans assumptions were that the blackbody cavity was perfectly black, (and as Will has pointed out), reflections were not considered possible. However, later the derivation relies on standing waves, which requires reflections.

 

This self-inconsistency in Planck's original derivation seems troubling. Planck saw the data, and knew what he was looking for. He took a silly, self-inconsistent Rayleigh-Jeans theory and "quantized" it to come out with something that could match the data. I see no way around this realization. I too, used to be in awe of Planck's early derivation, but no longer.

 

So let's look at the big picture (or the whole picture after the whole picture is painted.) Everything emits IR at room temperature, whether it is black, white, reflective, or absorptive. Then as things get hotter, everything emits visible red, then visible white. What is the explanation for this?

 

1) Is it because light in a cavity can only have certain modes?

2) Or is it because one is thermally agitating the outer electron orbits, causing them to radiate?

 

I say, take a step back and look at the whole picture!

 

Andrew A. Gray


But that has philosophical trouble, it seems to me. The original Rayleigh-Jeans assumptions were that the blackbody cavity was perfectly black, (and as Will has pointed out), reflections were not considered possible. However, later the derivation relies on standing waves, which requires reflections.

 

I have tried to explain several times that we have had a miscommunication. The Rayleigh-Jeans formula was done on a reflecting cavity with a small hole in it. This is because a cavity with a small hole in it simulates the thermal spectrum of a blackbody because any light that goes into the hole will bounce around many, many times before it makes it out. Because nothing is perfectly reflective, it pretty much has to get absorbed before it finds its way back out the hole. Hence, the spectrum of radiation coming out of the hole is exactly the same as that emitted by a perfectly black body. Rayleigh-Jeans and Planck calculated the spectrum of this cavity radiation.

 

In more modern derivations, one often imagines a perfectly black body inside the cavity. The derivation is the same because we imagine the black body to be in thermal equilibrium with the radiation (just as much is emitted as absorbed). If one likes, the cavity can even be removed via a limit at the end. This calculation is pedagogically nice because it makes it obvious that the cavity must have the exact same spectrum as a purely black object. i.e. it makes the connection that the Rayleigh-Jeans/Planck calculations apply not just to cavities, but any perfectly black emitter.

-Will

 

Edit:

What is the explanation for this?

 

1) Is it because light in a cavity can only have certain modes?

2) Or is it because one is thermally agitating the outer electron orbits, causing them to radiate?

 

I say, take a step back and look at the whole picture!

 

I don't think anyone will dispute that the cause of thermal radiation is electron transitions. However, the whole power of statistical mechanics/thermodynamics is that it applies to EVERYTHING in equilibrium. The calculation is independent of the method of creating the radiation - all that matters is the thermal equilibrium.


lampblack.gif

Will,

 

I see what you are saying now. However, historically, this was not the case:

 

 

http://arxiv.org/PS_cache/physics/pdf/0402/0402064v1.pdf

 

So now I see that Rayleigh-Jeans and Planck did have a self-inconsistent approach, but it is "cleanup-able". Hmmm.

 

OK, so the point is that this New Theory has the potential to explain why thermal radiation is as it is, UV catastrophe and all. Planck's radiation law depends on cavities, and it does nothing to satisfy my curiosity about why my steel pipe glows red, then white, when I heat it. In other words, I want an explanation that matches what is really going on.

 

I wonder if QM addresses the thermal light emissions of solids in any way other than Planck. (??)

 

Doesn't QM even assume the Universe is a "blackbody" in order to calculate the "temperature" of the background radiation?

 

Andrew A. Gray


The Experiments

 

Now we will cover some of the New Experiments that will prove this New Theory correct.

 

1) The New Electron Interference Experiment.

 

This new experiment will prove that interfering electron beams in an electron microscope are actually due to preferred paths to the detector while in flight and not because of "wave function" interference at the detector.

 

As you recall, this New Theory has described "electron interference" as a pulsating group of electrons that go around a positively charged filament. If the pulsations of the electrons are "OFF" when they cross, then they will continue on to the detector (a maximum). If the pulsations of the electrons are "ON" when they cross, then the huge repulsion will knock them off their paths, and they will not make it to their original detector position (a minimum). This was shown in this animation:

e_interference.gif

 

According to this New Theory, the electrons "interfere" while in flight, and before they strike the detector. According to QM, the electron's wave function interferes right at the detector. We should be able to take advantage of this difference to prove that this New Theory is correct.

 

Suppose that we increase the filament voltage so that the two sides of the beam do not overlap on the film. If the voltage gets large enough, a "gap" will appear in between the two areas where each side lands. Like this:

e_interference2.gif

 

Thus, according to this theory, since the beams still interfere on their way to the film, fringes still appear.

 

However, according to QM, the wave functions do not overlap at the film, hence no interference fringes will appear.

 

Only one of these theories can be right. We'll see.

 

 

Andrew A. Gray


2) The New Stern-Gerlach Experiment

 

We have already discussed this experiment in brief here:

 

Post 17

 

 

but for completeness we discuss it in more detail. In review, the original Stern-Gerlach experiment looks like this:

SternGerlach.gif

 

Silver atoms are put through a huge nonuniform magnetic field, and the magnetic moments cause separations. How does this work?

 

Well, consider a hydrogen atom in a nonuniform magnetic field:

Stern3.gif

 

 

When the electron is at the position on the left, the magnetic force, which goes as

 

[math] \vec { F} = \frac{e}{c}(\vec V \times \vec B ) [/math]

 

is directed mostly outward. However, there is a vertical component. The same is true when the electron is at the right position. This force will basically cause two things:

 

1) A net upward force.

2) A wild nutation/precession from the magnetic torque.

 

Next consider the atom in the same scenario, but the electron orbit is going in the opposite way:

Stern4.gif

 

This force will basically cause two things:

 

1) A net downward force.

2) A wild nutation/precession from the magnetic torque.

 

In the first case, the atom will be deflected upward, and in the second case the atom will be deflected downward.

 

In both cases we showed in post 17 that the angular momentum L will also nutate wildly in the huge magnetic field and that the z component of the angular momentum does too. This causes radiation friction and Lz is free to change. In short, the huge magnetic field first induces L into one of the UP or DOWN states. After the atom is quickly induced into one of these UP or DOWN states, the corresponding upward or downward deflections will seem like they are "quantized", but they are not.

 

We wish to come up with a way to prove this. The trick would be to keep the upward or downward forces, but to minimize the huge magnetic torque that is responsible for changing the values of Lz.

 

It turns out that the upward/downward force on the atom is given approximately by:

 

[math] F_z = \frac{\partial B_z}{\partial z} \; \mu _z [/math]

 

where [math]\mu _z[/math] is the z-component of the atom's magnetic moment which is proportional to Lz.
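To get a feel for the sizes involved, here is the deflection implied by the force formula above for illustrative numbers; the gradient, atom speed, and magnet length below are assumptions of roughly Stern-Gerlach scale, not values taken from any particular apparatus:

```python
# Order-of-magnitude deflection from F_z = (dBz/dz) * mu_z.
# All apparatus numbers below are illustrative assumptions.
mu_B   = 9.274e-24    # Bohr magneton, J/T (mu_z of roughly this scale)
grad   = 1.0e3        # field gradient dBz/dz, T/m (~10 T/cm, assumed)
m_Ag   = 1.79e-25     # mass of a silver atom, kg
v      = 500.0        # atom speed from the oven, m/s (assumed)
length = 0.035        # length of the field region, m (assumed)

F_z = grad * mu_B               # vertical force on the atom
a_z = F_z / m_Ag                # vertical acceleration
t   = length / v                # time spent in the field
deflection = 0.5 * a_z * t**2   # vertical displacement at magnet exit

print(f"{deflection * 1e3:.2f} mm")  # ~0.13 mm, a measurable split
```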

 

However, the magnetic torque is dependent on [math]\vec B[/math]:

 

[math]\vec \tau \; = \; \vec r \, \times \; \frac{e}{c}(\vec V \times \vec B)[/math]

 

So we wish to minimize [math]\vec B[/math] and maximize [math]\frac{\partial B_z}{\partial z}[/math].

 

To do this, one must use a magnetic quadrupole instead of a magnetic dipole. This way, the value for [math]\vec B[/math] itself can be small, but its z-derivative can be large. Like this:

Qpole.gif

 

See Beam focusing

 

Notice that near the center, [math]\vec B \equiv 0[/math], yet the derivatives are not zero. So, if we use a setup like this:

Stern5.gif

 

Then the final setup looks like this:

Stern6.gif

 

 

Thus, the magnetic field itself will be small, but [math]\frac{\partial B_z}{\partial z}[/math] will still be substantial. This will allow the recovery of the continuous values of Lz, proving that the magnetic moments of the atoms were not quantized.

 

 

Andrew A. Gray


In the first experiment proposed (electron diffraction off of a point at high voltage), quantum mechanics WILL predict interference, same as your theory. The reason is simple: remember in quantum mechanics an electron will explore all paths to the detector, and even one particle can interfere with itself. The two theories may predict different interference- to test this I will need quantitative predictions from your theory.

 

Now, the problem I see with your quantum hall set up is that you have gradients in both the z and x direction but no field in the center. The quantum mechanical effect comes from the fact that the moment can either be aligned or anti-aligned to the magnetic field, but the dipole field is not uniform across the beam, so different atoms will align in different directions. In short - I'm reasonably certain that quantum mechanics will predict some spread in this experiment as well. I haven't completely convinced myself yet, but I'm pretty sure.

-Will


Erasmus said:
In the first experiment proposed, quantum mechanics WILL predict interference, same as your theory. The reason is simple: remember in quantum mechanics an electron will explore all paths to the detector, and even one particle can interfere with itself. The two theories may predict different interference- to test this I will need quantitative predictions from your theory.

 

I think we are agreed on this one. QM will predict interference in the overlap zone in front of the film. Here is where Ψ will overlap and interfere.

overlap.gif

 

However, on the film, there aren't any places for interfering paths to overlap. QM will predict no interference right on the film, once the filament voltage gets large enough to separate the two sides.

 

However, this New Theory predicts that the two sides actually interfere while in-flight, and that there still will be fringes on the film, approximately the same spacing as when the two areas still overlapped. And note that the interference is around a filament wire (and not around a point).

 

Erasmus said:
Now, the problem I see with your quantum hall [stern-Gerlach] set up is that you have gradients in both the z and x direction but no field in the center. The quantum mechanical effect comes from the fact that the moment can either be aligned or anti-aligned to the magnetic field, but the dipole field is not uniform across the beam, so different atoms will align in different directions. In short- I'm reasonably certain that quantum mechanics will predict some spread in this experiment as well. I haven't completely convinced myself yet, but I'm pretty sure.

 

Will,

 

I have edited the quadrupole picture slightly to clear up the field direction along the vertical. In the beam path, the B field is vertical:

Stern5.gif

 

Even in the original Stern-Gerlach experiments, there are both z and x nonzero derivatives in the field. The only thing necessary for the x-changes is that they be symmetrical about the vertical, just like the original. The upper part of the quadrupole field is identical in structure to the Stern Gerlach field.

 

The only difference is that the vertical B field is weak while still allowing for a large z-derivative. So since one will be able to measure Lz along this vertical B field in this experiment, QM must predict quantization.

 

Andrew A. Gray


3) The New Bremsstrahlung Cutoff Experiment

 

We covered the Bremsstrahlung Cutoff Experiment briefly in post #1. In review, the setup is like this:

bremsstrahlung3.gif

 

 

 

25 KeV electrons are blasted into a metal plate. The decelerations cause emissions of x-rays in all directions up to a cutoff frequency [imath]\nu_{max} = E/h[/imath]. We showed how the cutoff frequency resulted from pulsating electrons that have a Nyquist Cutoff Frequency. That is, if the movement frequency is much greater than the pulsation frequency, radiation at this much higher frequency cannot be generated because the electron is OFF during much of the accelerations. The radiation aliases back down to a lower frequency. (See post #1).

 

We seek a method to prove this. According to QM, the cutoff frequency depends on the max energy of an x-ray photon produced during the interaction. According to our New Theory, the cutoff frequency depends on the pulsation frequency of the electrons. So we wish to change the pulsation frequency of the incident electrons without changing their energy. If the maximum frequency found is different than E/h, then the QM theory will be proven false.
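For reference, the conventional QM prediction for the 25 keV setup (the Duane-Hunt limit, ν_max = E/h) works out as follows, with constants hard-coded:

```python
h  = 6.62607015e-34    # Planck constant, J*s
eV = 1.602176634e-19   # one electron-volt in joules
c  = 2.99792458e8      # speed of light, m/s

def duane_hunt_cutoff(energy_eV):
    """QM / Duane-Hunt prediction for the bremsstrahlung cutoff: nu_max = E/h."""
    return energy_eV * eV / h

nu_max = duane_hunt_cutoff(25e3)       # 25 keV electrons
print(f"{nu_max:.2e} Hz")              # ~6.0e18 Hz (hard x-rays)
print(f"{c / nu_max * 1e12:.1f} pm")   # minimum wavelength, ~49.6 pm
```

Any cutoff found significantly below this value would be the anomaly the proposed experiment is looking for.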

 

Again, we seek a way to change the pulsation frequency of the incident electrons without changing their energy. To do this, we use a cyclotron instead of an ordinary voltage. There is no guarantee that a centripetal acceleration will affect electron pulsations like a linear acceleration. So our first test would involve a setup with a cyclotron like this:

bremsstrahlung6.gif

 

 

According to QM, the max frequency will remain unchanged, [imath]\nu_{max} = E/h[/imath]. However, this New Theory allows for the possibility that the max frequency could be drastically reduced. Why would that be? Because in a cyclotron, the accelerations are in opposite directions with each ½ revolution. Since the electron structure is being affected, the opposing accelerations might cancel each other and drastically reduce the electron's pulsation frequency. This would in turn reduce the max Bremsstrahlung cutoff frequency. If this setup produced cutoff frequencies significantly below E/h, the photon hypothesis would indeed have to be abandoned.

 

Andrew A. Gray


1 year later...

Are you guys still there? I was wondering if anyone had looked at the work of Ruggero Maria Santilli.

 

THE RUTHERFORD-SANTILLI NEUTRON, by J. V. Kadeisvili, The Institute for Basic Research

 

Please read and comment about his findings on neutron synthesis.

Here is an excerpt: (if you have time, read the whole thing from the above link)

 

11. Inapplicability of quantum mechanics for the synthesis of neutrons from protons and electrons

 

While quantum mechanics is exactly valid for the structure of the hydrogen atom, and only approximately valid for the structure of the deuterium, Santilli has established that quantum mechanics is inapplicable (and not violated) for any quantitative representation of the synthesis of neutrons as it occurs in stars, from protons and electrons, for numerous independent reasons, each one implying a catastrophic inapplicability, such as:

 

1) All consistent quantum mechanical bound states A + B = C, as they occur in atoms, nuclei and molecules, have a mass defect, namely, the rest energy of the bound state C is smaller than the sum of the rest energies of the original states A and B, resulting in the very principle for which nuclear fusions release energy. The above mass defect is represented by a negative binding energy in the Schroedinger equation for the bound state that, under these conditions, is fully consistent. By comparison, from Eqs. (1), the rest energy of the neutron is 0.782 MeV bigger than the sum of the rest energies of the proton and the electron. As a result, any possible treatment of the neutron synthesis p + e => n + ? would require a positive binding energy that is sheer anathema for quantum mechanics because, under such binding energies, the Schroedinger equation becomes physically inconsistent, without any possibility this time to add unknown parameters for the usual political aim of "fixing things" and adapting nature to a preferred theory.

 

2) It is popularly believed that the energy of at least 0.78 MeV missing in the synthesis of the neutron can be provided by the relative kinetic energy between the proton and the electron. This view has no serious scientific content, because the cross section of the proton and electron at 0.78 MeV mutual energy is extremely small (of about 10^-20 barn) in which case any possibility for the proton and the electron to coalesce and form the neutron is impossible. As we shall see, this limitation can be resolved by assuming a participation of space as a universal medium known as aether, but this requires ab initio to exit from the boundary of quantum mechanics.

 

3) Assuming that, via hitherto unknown manipulations, incompatibilities 1) and 2) could be resolved, simple calculations via the use of quantum mechanics show that the electron can be retained inside the proton for extremely small periods of time (of the order of 10^-15 seconds). But the neutron has a lifetime of about 14 minutes. Hence, the error by quantum mechanics in the representation of the lifetime of an isolated neutron is of the order of 10,000,000,000,000-fold!

 

4) Quantum mechanics does not allow the achievement of the spin 1/2 of the neutron via two particles, the proton and the electron, each having spin 1/2. As shown below, the Pauli-Fermi hypothesis of the emission of a neutrino in the synthesis, Eq. (1), is far from being settled, e.g., because the mechanism for a proton and an electron to a kind of "decomposing" themselves in order to produce the neutrino is vastly unknown.

 

5) Assuming that all the above incompatibilities (that are per se irreconcilable for all qualified physicists) are somewhat resolved, still quantum mechanics cannot represent the magnetic moment of the neutron from the known magnetic moments of the proton and the electron (see Santilli [3], Volume IV).

 

In summary, political supporters of quantum mechanics as the final theory of nature can manage to add unknown parameters, manipulate things, adjust unknown functions and do all sort of tricks to represent experimental data, and then conclude that "quantum mechanics is valid" for numerous cases. However, this manipulation of scientific knowledge is impossible for the neutron synthesis because no matter what manipulation can be dreamed up, no quantitative representation of the neutron synthesis is permitted by quantum mechanics.

 

In conclusion, the most fundamental synthesis of nature, the synthesis of neutrons from protons and electrons in the core of stars, cuts out all politics on the final character of quantum mechanics and establishes the irreconcilable inapplicability of the theory. This establishes the need for a covering theory.

 

 

12. Insufficiencies of the neutrino hypothesis for the neutron synthesis

 

As recalled in Section 1, Pauli's objection on the inability to represent the spin 1/2 of the neutron according to Rutherford, led to Fermi's hypothesis of the neutrino according to Eq. (1).

 

Despite the success of the Pauli-Fermi hypothesis, Santilli has identified a litany of unresolved problems in the neutrino conjecture. To begin, the neutrino conjecture has no explanation on how the proton and/or the electron experience a kind of "decomposition" to produce a neutrino.

 

The complementary hypothesis of the anti-neutrino via the reaction

 

(6) p+ + e- + anti-v => n

 

is even more controversial than reaction (1) because the antineutrino has a null cross section with the proton and the electron. Consequently, there is no possibility whatsoever, not even remote, that the antineutrino can deliver the 0.78 MeV needed for the neutron synthesis. Hence, even assuming that conjecture (6) resolves the problem of the spin (which it does not), the problem of the missing 0.78 MeV remains unsolved (Santilli, Loc. Cit.).

 

Additionally, recent studies (see monograph [19]) have established that the sole possibility for scientific democracy between matter and antimatter, thus including a consistent classical theory of antimatter, requires that the anti-neutrino has a negative energy although referred to a negative unit. Consequently, reaction (6) is predicted to require energy, rather than supply the missing 0.78 MeV.

 

Additionally, according to the quantum mechanical treatment of bound states, hypothesis (6) would require that the neutron is a three-body bound state of a proton, an electron and an antineutrino, which view is pure nonscientific nonsense because there is no possibility whatsoever, not even remote, to permanently bind a neutrino inside the small volume of the proton as needed for the deuteron.

 

Additionally, Fermi's original hypothesis of one neutrino and one antineutrino has been more recently incorporated in the standard model, and this has caused a proliferation of controversies that are increasing in time.

 

To begin, the standard model first required the increase from one neutrino and one antineutrino to three neutrinos (the electron, muon and tau neutrinos) and three antineutrinos that, for physical consistency, must be different, although no experimentally verifiable difference has been provided to date by academia [21,24].

 

Due to the insufficiency of this first generalization, the neutrinos and antineutrinos were then assumed to have masses that, in reality, are free parameters introduced to "fix things." In fact, the "neutrino masses" are fitted from the experimental data and not derived from independent first principles of the theory.

 

Due to the insufficiency of the latter conjecture, it was then conjectured that the neutrinos have different masses, and the chain of conjectures, each one ventured in the hope of resolving a preceding unverifiable conjecture, continues to this day, thus turning science into pure theology and academic manipulation.

 

Even the so-called "neutrino detections" are questionable in their very definition, because neutrinos cannot be directly detected. Hence, the scientifically correct statement would be that the detections considered here refer to physical particles predicted by the neutrino theory. But then, there are other theories, making no use of the neutrino conjecture, that interpret the same "experimental data" [24].

 

The most implausible feature of the neutrino conjecture is that neutrinos are believed to traverse entire stars without any collision. This view was already questionable according to Fermi's original assumption that neutrinos are massless. Nowadays, the belief that massive neutrinos can traverse stars without collision has no scientific credibility whatsoever, being pure theology.

 

In summary, the conjecture on the existence of the neutrinos is extremely unsettled to this day, and plagued by a number of unresolved problems that increase, rather than decrease in time.

 

One can now begin to appreciate the importance of Santilli's theoretical and experimental studies on the neutron synthesis, because they mandate the addressing of basic problems that would otherwise remain completely ignored. This feature also illustrates the extreme opposition by academia to the study of the neutron synthesis [26,27].

 

 

13. Insufficiencies of the quark hypothesis for the neutron synthesis

 

The biggest obstacle to the utilization of the energy contained in the neutron is the widespread belief that quarks are physical constituents of the neutron and of hadrons at large.

 

In fact, in the event quarks are the constituents of the neutron, no possibility exists or is conceivable for the utilization of the energy in its interior. On the contrary, if the electron is indeed a physical constituent of the neutron, said energy can indeed be utilized, as we shall see below, via its stimulated decay.

 

Santilli [21,24] accepts the SU(3)-color classification of hadrons as final; he recognizes that quarks are necessary for the technical elaboration of SU(3) theories; but Santilli's view is that quarks are purely mathematical representations, defined in a purely mathematical, complex-valued internal unitary space without any possible definition in our spacetime, for the following reasons:

 

1) According to quark believers, permanently stable particles, such as the proton and the electron, simply "disappear" at the time of the synthesis of the neutron inside stars to be replaced by the hypothetical quarks. This view is purely political without scientific credibility or backing [24].

 

2) Also according to quark believers, at the time of the spontaneous decay of the neutron, the proton and the electron simply "reappear" in the universe. In fact, according to the standard model, the proton and the electron are claimed to be "recreated" at the time of the neutron decay, although without any explanation whatsoever on how this might be possible. This belief is pure nonscientific nonsense intended to serve personal interest and definitely cannot be considered serious science [24].

 

3) Assuming that the above problems can be somehow bypassed [24], Santilli has provided a rigorous proof that, in the event the neutron is made up of quarks, it cannot have any gravity at all. In fact, as stated by Albert Einstein, gravity can only be defined in our spacetime, while quarks absolutely cannot be defined in our spacetime, since they can only be defined in a mathematical complex-valued unitary space.

 

There are numerous additional technical reasons for the impossibility of quarks to be physical particles in our spacetime. One of them is the very argument according to which quark believers dismiss the Rutherford-Santilli model of the neutron. The "argument" is that, according to quantum mechanics, Heisenberg's uncertainty principle does not allow the electron to be permanently bound inside the proton for the lifetime of the neutron. The politics in this case is established by the fact that the same argument is not used by quark believers to prove the impossibility for quarks to be permanently bound inside the neutron.

 

The inconsistency of this scheme is underscored by the fact that quarks are centrally based on the use of conventional quantum mechanics for their very definition, while the Rutherford-Santilli electron obeys a covering of quantum mechanics. Hence, the "argument" based on the uncertainty principle definitely applies to quarks, and definitely makes no sense for the Rutherford-Santilli electron.

 

 

14. Incompatibility of the neutron synthesis with the cold fusion

 

Physicists interested in preserving old knowledge, rather than seeking new knowledge, generally use the insufficiencies of the cold fusion as evidence for the impossibility of synthesizing neutrons from protons and electrons. This view should be disqualified, particularly when proffered by experts.

 

In fact, the neutron synthesis requires energy, while the cold fusion aims at producing energy. Consequently, the mathematical and physical laws that are effective for the former event have to be changed for the different features of the latter event.

 

Additionally, the synthesis of neutrons occurs in stars from the sole use of protons and electrons. By comparison, the neutrons detected in certain cold fusions originate from nuclear syntheses; that is, they arise from nuclear processes such as excess neutrons in the synthesized nucleus, and definitely not from protons and electrons.

 

In summary, the Rutherford-Santilli neutron refers strictly to neutrons synthesized from the sole use of protons and electrons, as occurs in stars. Any use of information from cold fusion, nuclear syntheses and the like for the Rutherford-Santilli neutron is not scientific, irrespective of whether it is in favor of or against said synthesis.


1) All consistent quantum mechanical bound states A + B = C, as they occur in atoms, nuclei and molecules, have a mass defect, namely, the rest energy of the bound state C is smaller than the sum of the rest energies of the original states A and B, resulting in the very principle by which nuclear fusions release energy.

 

This is only true of STABLE bound states, and further, there is no evidence that a neutron is an electron bound to a proton (in fact, evidence from deep inelastic scattering would suggest this is definitively not the case).

 

So this argument doesn't apply to neutron formation (which is not a bound state of an electron+proton), AND even if it did, it is wrong.
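Will's bound-state point can be checked with a line of arithmetic. A minimal sketch in Python, using rounded CODATA rest energies (the numeric constants are my own insertion, not from the thread):

```python
# Rest energies in MeV (rounded CODATA values)
M_PROTON = 938.272
M_ELECTRON = 0.511
M_NEUTRON = 939.565

def mass_defect(parts_mev, bound_mev):
    """Return (sum of constituents) - (bound state): positive for a
    conventional bound state with a mass defect."""
    return sum(parts_mev) - bound_mev

defect = mass_defect([M_PROTON, M_ELECTRON], M_NEUTRON)
print(f"m(p) + m(e) - m(n) = {defect:.3f} MeV")
# Negative: the neutron is ~0.78 MeV HEAVIER than p + e combined,
# so the ordinary mass-defect rule for bound states cannot apply to it.
```

This is exactly the sense in which the neutron fails the "consistent bound state" test quoted above: the sign of the defect comes out wrong.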

 

2) It is popularly believed that the energy of at least 0.78 MeV missing in the synthesis of the neutron can be provided by the relative kinetic energy between the proton and the electron. This view has no serious scientific content, because the cross section of the proton and the electron at 0.78 MeV mutual energy is extremely small (about 10^-20 barn), in which case any possibility for the proton and the electron to coalesce and form the neutron is excluded.

 

A barn is a huge unit, 10-20 barns is a huge cross section.
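Will's reply and Santilli's figure may be parsing the same token differently: read as a range of "10 to 20 barns" the cross section is indeed huge, while read as "10^-20 barn" it is tiny. A quick conversion to cm^2 (using the definition 1 barn = 10^-24 cm^2) shows how far apart the two readings are:

```python
BARN_IN_CM2 = 1e-24  # definition: 1 barn = 1e-24 cm^2

def barns_to_cm2(sigma_barns):
    """Convert a cross section from barns to cm^2."""
    return sigma_barns * BARN_IN_CM2

range_reading = barns_to_cm2(10)    # "10-20 barns" read as a range of ~10 barns
exp_reading = barns_to_cm2(1e-20)   # read instead as 10^-20 barn
print(f"10 barns    = {range_reading:.1e} cm^2")
print(f"10^-20 barn = {exp_reading:.1e} cm^2")
# The two readings differ by 21 orders of magnitude, so which one is
# meant completely decides whether the objection has any force.
```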

 

3) Assuming that, via hitherto unknown manipulations, incompatibilities 1) and 2) could be resolved, simple calculations via the use of quantum mechanics show that the electron can be retained inside the proton only for extremely small periods of time (of the order of 10^-15 seconds). But the neutron has a lifetime of about 14 minutes. Hence, the error by quantum mechanics in the representation of the lifetime of an isolated neutron is of the order of 10,000,000,000,000 fold!

 

This is making the same mistake as before (treating the neutron as an electron/proton bound state). Further, he has not indicated how he arrived at his 10^(-15) second order-of-magnitude guess. I imagine it is an uncertainty-principle argument, but he has not even attempted to explain it.
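For what it's worth, the discrepancy ratio implied by the two timescales quoted in point 3) can be computed directly. A sketch taking both figures at face value (neither number is independently verified here):

```python
RETENTION_TIME_S = 1e-15        # the claimed QM retention time for the electron
NEUTRON_LIFETIME_S = 14 * 60    # ~14 minutes, as quoted in the thread

ratio = NEUTRON_LIFETIME_S / RETENTION_TIME_S
print(f"lifetime / retention time ~ {ratio:.1e}")
# ~8.4e17, i.e. nearly 18 orders of magnitude on these inputs
```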

 

4) Quantum mechanics does not allow the achievement of the spin 1/2 of the neutron via two particles, the proton and the electron, each having spin 1/2. As shown below, the Pauli-Fermi hypothesis of the emission of a neutrino in the synthesis, Eq. (1), is far from settled, e.g., because the mechanism by which a proton and an electron would "decompose" themselves in order to produce the neutrino is entirely unknown.

 

Neutrino emission from the sun has been observed. This, I believe, settles the neutrino hypothesis.
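On the spin question in point 4): this is standard angular-momentum addition. Two spin-1/2 particles alone can only combine to total spin 0 or 1, which is precisely why Pauli introduced a third spin-1/2 particle. A minimal sketch of the allowed total spins (my own illustration, not from the thread):

```python
def allowed_total_spins(j1, j2):
    """Allowed total angular momenta when coupling j1 and j2:
    |j1 - j2|, |j1 - j2| + 1, ..., j1 + j2 (in integer steps)."""
    j = abs(j1 - j2)
    out = []
    while j <= j1 + j2 + 1e-9:
        out.append(j)
        j += 1
    return out

# proton (1/2) + electron (1/2): total spin 0 or 1 -- never 1/2
two_body = allowed_total_spins(0.5, 0.5)
print(two_body)  # [0.0, 1.0]

# add a third spin-1/2 particle (the neutrino): spin 1/2 becomes possible
three_body = sorted({s for j in two_body
                       for s in allowed_total_spins(j, 0.5)})
print(three_body)  # [0.5, 1.5]
```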

 

5) Still, quantum mechanics cannot represent the magnetic moment of the neutron from the known magnetic moments of the proton and the electron (see Santilli [3], Volume IV).

 

Again, the same mistake is made. Much like photon numbers aren't conserved, we should expect relativistically that electron/neutrino/proton/neutron numbers aren't conserved.


Thanks for the reply, Will.

 

This is all so very interesting to me. OK, I understand what you are saying about bound states and the neutron is not truly a bound state. OK.

 

Next, let's talk about the general process of neutron synthesis in star cores. Now, I assume that we can all agree that stars are initially made up of hydrogen (1 proton and 1 electron), can we not? If not, we will have to have an aside here. I hope not.

 

So if all we have is hydrogen in early star cores, then can we not logically conclude that radically compressed hydrogen synthesizes into neutrons?

 

Santilli writes:

 

p+ + e- + anti-v => n

 

So, Will, are you saying that as the proton and electron are compressed, they somehow find an antineutrino before they can become a neutron? Or are you saying that the whole premise about hydrogen compression is wrong?

 

Quote from Will
"So this argument doesn't apply to neutron formation (which is not a bound state of an electron+proton), AND even if it did, it is wrong."

 

 

Again, thanks for the clarifications. I have a few more questions.

 

 

Andrew


Will,

 

OK, I understand what you are saying. However, I understand what Santilli is saying, too.

The proton/electron have 0.78 MeV less mass than the neutron.

 

p+ + e- => n + v_e

 

Yet, the neutrino must go on the right side of the equation, according to you.

The masses don't seem to jibe. This is what Santilli seems to be saying.

Please explain the missing mass.

 

Thanks in advance.

 

Andrew

