Science Forums

Anybody interested in Dirac's equation?


Doctordick


Well Anssi, I finally decided to post this thing. I think the evidence is pretty good that you and I are the only people taking the issue seriously and we could probably accomplish as much in private communications but I still have some hopes that there are some intelligent people thinking about the things I say so I will post on. I am posting this in spite of the fact that I am sure there are still some typographic errors (particularly in the latex expressions). I am sure that I am rapidly approaching mental incompetence as I have lately run into a common problem we old people have. I am sure you have heard about going into another room and then, after you get there, you can't remember why you went there. Well that has occurred to me several times in editing this document. I find an error and then after I manage to find the location of the error in the document, I can't remember what the error was. Getting old is a pain in the ***.

 

So you have a job ahead of you. I will appreciate any flaw you discover and I will help to explain every step. Good luck!

 

Anything correctly deduced from a true statement is a true statement; however, no defense save faith exists for the converse! The fact that a relationship is true cannot be used to justify the model from which it was deduced! Unfortunately, modern science has made exactly the same mistake made by the astrologers: they presume that their mental model of the world is valid because they believe it is valid. When questioned, they both give case after case of "correct" predictions, ignoring the fact that there could be an alternate explanation of exactly the same results they are explaining.

 

There is no need whatsoever to justify my model as I have shown that it is entirely general: i.e., there exists no communicable explanation of anything which cannot be analyzed from the perspective of my model. I need not argue that my view is the only rational view; I need only show that it provides a useful foundation from which real observations may be analyzed with confidence. If you are to show a flaw, you must either show me an explicit error in my deductions or you must show me a universe (a set of numerical reference labels) which cannot be cast into the representation I have already presented. That being the case, what follows is fundamentally a proof of Dirac's equation.

 

As for the tortoise's complaint, “where's the beef?”, I would like to point out that Maxwell's equations brought together, in one expression, Gauss's mathematical divergence of the electric field, Faraday's representation of changing magnetic fields causing electric fields, and Ampere's connection between currents (together with changing electric fields) and magnetic fields. For this achievement, he was regarded as a preeminent scientist, once described by Richard Feynman as the greatest physicist of the 19th century.

 

In the same vein, I have brought together, in one expression the entire realms of physics represented by Newtonian mechanics, quantum mechanics, electrodynamics and relativity (both special and general). And all this without postulating a theoretical relationship but rather by deduction from the simple limitations required by self consistency. I think one could say there is a bit of “meat” there! But let us get to Dirac's equation.

 

To begin with, the following is a rather common representation of Dirac's equation for an electron coupled to an electromagnetic field.

[math]\left\{c\vec{\alpha}\cdot\left(\vec{p} -\frac{e}{c}\vec{A}\right)+\beta mc^2 +e\Phi \right\}\Psi=i\hbar\frac{\partial \Psi}{\partial t}[/math].

 

The various components of that equation are defined as follows:

[imath]\Psi(\vec{x},t)[/imath] is the “wave function” of the electron represented by Dirac's equation: i.e., the probability of finding an electron at the point defined by [imath]\vec{x}=x\hat{x}+y\hat{y}+z\hat{z}[/imath] at time t is given by [imath]\Psi^\dagger \Psi dV.[/imath] In Dirac's representation, [imath]\Psi(\vec{x},t )[/imath] has four components or, if one wishes to include “imaginary” and “real” as two components, [imath]\Psi(\vec{x},t)[/imath] has eight explicit components. Although this can be thought of as an eight dimensional abstract space, no one expresses [imath]\Psi(\vec{x},t)[/imath] in vector notation; it is simply taken as understood.
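To illustrate the probabilistic interpretation concretely, here is a short numerical sketch of my own (it is not part of the derivation): a hypothetical four-component wave function built from a normalized Gaussian envelope, checked to show that [imath]\Psi^\dagger \Psi[/imath] integrates to unity and can therefore serve as a probability density.

```python
# Illustrative sketch only: a hypothetical four-component Psi built from a
# Gaussian envelope, verifying that Psi^dagger Psi integrates to 1.
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

# Unit-norm Gaussian envelope in one dimension: integral of |envelope|^2 = 1.
envelope = np.pi ** -0.25 * np.exp(-x**2 / 2)

# Hypothetical unit-norm four-component "spinor" direction.
spinor = np.array([1, 1j, -1, -1j]) / 2.0

Psi = spinor[:, None] * envelope[None, :]        # shape (4, Nx)
density = np.sum(np.abs(Psi) ** 2, axis=0)       # Psi^dagger Psi at each x
total = density.sum() * dx                       # numerical integral
print(round(total, 6))  # ≈ 1.0
```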

 

The partial derivative [imath]i\hbar\frac{\partial}{\partial t}[/imath] is exactly the same energy operator defined in my deduction of Schrödinger's equation.

 

In the Dirac equation, alpha and beta are anti-commuting "matrices" essentially derived from the Pauli spin matrices. The vector matrix alpha consists of three matrix components [imath](\vec{\alpha}= \alpha_x \hat{x}+\alpha_y\hat{y}+\alpha_z\hat{z})[/imath] and the fourth matrix, beta, is associated with [imath]mc^2[/imath] in exactly the same way my [imath]\alpha_\tau[/imath] "operator" is associated with the momentum in the tau direction. (My operators could also be represented by "matrices" but, because there are essentially an infinite number of them, the idea is not really useful.) There is a minor difference in the definition of the respective anti-commuting operators: effectively, Dirac's matrices are mine times [imath]\sqrt{2}[/imath]. This is a simple consequence of the fact that the squared magnitudes of his operators are defined to be unity whereas mine are defined to be one half. Other than that, my operators can be thought of as operating in an abstract multidimensional space quite analogous to Dirac's matrices.
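For readers who want to see these anticommutation properties concretely, here is a short numerical check of my own (a sketch using Dirac's standard representation built from the Pauli matrices) that all four matrices mutually anticommute and square to unity, and that dividing each by [imath]\sqrt{2}[/imath] gives operators whose squares are one half, matching the normalization described above.

```python
# Numerical check: Dirac's alpha and beta matrices (standard representation)
# anticommute pairwise and square to the identity; rescaling by 1/sqrt(2)
# gives operators whose squares are one half.
import numpy as np

# Pauli spin matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Dirac representation: alpha_i has sigma_i off-diagonal, beta = diag(I, -I).
alphas = [np.block([[Z2, s], [s, Z2]]) for s in (sx, sy, sz)]
beta = np.block([[I2, Z2], [Z2, -I2]])
ops = alphas + [beta]

# Verify {A, B} = 2 * delta_ij * I for every pair of the four matrices.
for i, A in enumerate(ops):
    for j, B in enumerate(ops):
        anti = A @ B + B @ A
        expected = 2 * np.eye(4) if i == j else np.zeros((4, 4))
        assert np.allclose(anti, expected)

# Rescaled operator (Dirac's matrix divided by sqrt(2)) squares to 1/2.
half = alphas[0] / np.sqrt(2)
assert np.allclose(half @ half, 0.5 * np.eye(4))
print("anticommutation relations verified")
```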

 

Dirac's [imath]\vec{p}=-i\hbar\left\{\frac{\partial}{\partial x}\hat{x}+\frac{\partial}{\partial y}\hat{y}+\frac{\partial}{\partial z}\hat{z}\right\} [/imath], is exactly the same momentum operator defined in my deduction of Schrödinger's equation.
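These operator definitions can be checked symbolically. The following small sketch (assuming nothing beyond the standard plane-wave conventions) shows that [imath]-i\hbar\frac{\partial}{\partial x}[/imath] and [imath]i\hbar\frac{\partial}{\partial t}[/imath] return the momentum and energy eigenvalues when applied to a plane wave.

```python
# Symbolic check: the momentum operator -i*hbar*d/dx and the energy operator
# i*hbar*d/dt applied to a plane wave return hbar*k and E respectively.
import sympy as sp

x, t, k, E, hbar = sp.symbols('x t k E hbar', real=True, positive=True)
plane_wave = sp.exp(sp.I * (k * x - E * t / hbar))

p_op = -sp.I * hbar * sp.diff(plane_wave, x)   # momentum operator applied
E_op = sp.I * hbar * sp.diff(plane_wave, t)    # energy operator applied

assert sp.simplify(p_op / plane_wave) == hbar * k
assert sp.simplify(E_op / plane_wave) == E
print("momentum and energy eigenvalues verified")
```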

 

The factors e and c are the charge on the electron and the speed of light. And finally [imath]\vec{A}[/imath] and [imath]\Phi[/imath] are the standard electromagnetic field potentials defined through the solutions to Maxwell's equations.

My fundamental equation, written in a four dimensional form (the Euclidean space of the representation being [imath]\hat{x}[/imath], [imath]\hat{y}[/imath], [imath]\hat{z}[/imath] and [imath]\hat{\tau}[/imath]), is

[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i+\sum_{i \neq j}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots,t)=K\frac{\partial}{\partial t}\vec{\Psi}[/math]

 

It should be clear to the reader that this bears a striking similarity to Dirac's equation (in fact, I have been told by a few professional physicists that it is no more than a re-expression of Dirac's equation; strong evidence they didn't look very closely). Two clear differences stand out: first, in Dirac's equation the wave function has a complex value (it is actually a vector in the complex space, represented by a real component and an imaginary component) in a four dimensional matrix space, whereas my wave function is a vector in an abstract space of arbitrary dimensionality; secondly, Dirac's equation has only one spatial argument [imath]\vec{x}[/imath] (it is a “one body” expression) whereas my equation has an infinite number of such arguments (it is a “many body” expression). The first issue is of no serious account (actually it existed in the derivation of Schrödinger's equation though I never pointed it out); my representation merely allows for more complex results, things to be discussed further down the road. The second issue will be handled in essentially the same manner as it was handled in the derivation of Schrödinger's equation.

 

In Dirac's equation, [imath]\vec{A}[/imath] and [imath]\Phi[/imath] are electromagnetic potentials. In my equation, the potentials, V(x,t), are obtained by integrating over the expectations of some specific set of known solutions. Since, in common physics, the electromagnetic potentials arise through the existence of photons, it seems quite reasonable that the electromagnetic potentials (from my fundamental equation) would arise from integrating over the expectations of known massless solutions. In accordance with exactly the attack I took in deriving Schrödinger's equation, I will divide my [imath]\vec{\Psi}[/imath] into three components.

[math]\vec{\Psi}=\vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0[/math]

 

where [imath]\vec{\Psi}_0[/imath] represents the entire rest of the universe, taken to be essentially independent of [imath]\vec{\Psi}_1[/imath] and [imath]\vec{\Psi}_2[/imath]. [imath]\vec{\Psi}_1[/imath] is the function we are looking for (the function which yields the expectations for the Dirac particle) and [imath]\vec{\Psi}_2[/imath] is to yield the known expectations for the electromagnetic potential arising from a single photon (that result will be used to generalize the final result: i.e., more photons will be added later). This neglect of all other possible contributions being explicitly in accordance with the approach taken by modern physics, it is entirely reasonable for me to presume no connections exist between [imath]\vec{\Psi}_0[/imath] and the other two functions. Substituting that representation of [imath]\vec{\Psi}[/imath] into my fundamental equation, one obtains the following:

[math]\{\vec{\alpha}_1\cdot\vec{\nabla}_1+\vec{\alpha}_2\cdot\vec{\nabla}_2 +\beta_{12}\delta(\vec{x_1}-\vec{x_2})+\beta_{21}\delta(\vec{x_2}-\vec{x_1})\} \vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0[/math]

 

[math]+\left[\left\{\sum_{i=3}^\infty \vec{\alpha}_i \cdot \vec{\nabla}_i +\sum_{i>2 \;\&\;j(>2\;\&\;\neq i)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\Psi}_0-K\frac{\partial}{\partial t}\vec{\Psi}_0\right]\vec{\Psi}_1\vec{\Psi}_2[/math]

 

[math]+\left[\sum_3^\infty\{\beta_{1i}\delta(\vec{x}_1-\vec{x}_i) +\beta_{i1}\delta(\vec{x}_i-\vec{x}_1)+ \beta_{2i}\delta(\vec{x}_2-\vec{x}_i)+\beta_{i2}\delta(\vec{x}_i-\vec{x}_2)\}\right] \vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0[/math]

 

[math]=\vec{\Psi}_0\left\{K\frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2\right\}[/math].

 

Please note that the time derivative of [imath]\vec{\Psi}_0[/imath] has been moved to the left side of the equation. I have written this out as four specific terms for the simple reason that the two expressions in square brackets must vanish exactly from the assumption that the rest of the universe has utterly no impact upon the solution we are looking for (it is, when set to zero, exactly the fundamental constraint on the rest of the universe together with a lack of influence on the two elements of interest). We may then left multiply by [imath]\vec{\Psi}_0^\dagger \cdot[/imath] and integrate over the entire rest of the universe where, because the state of the rest of the universe has absolutely no impact upon the problem we are concerned with, we obtain [imath]\vec{\Psi}_0^\dagger\cdot\vec{\Psi}_0 = 1[/imath], which entirely removes [imath]\vec{\Psi}_0[/imath] from the equation. We now multiply the entire equation by the factor [imath]-ic\hbar[/imath], use the definition of the momentum operator, [imath]-i\hbar\vec{\nabla}[/imath], and set [imath]c=-\frac{1}{K\sqrt{2}}[/imath]. The minus sign is chosen here in order to obtain exact equivalence to Dirac's equation (this step will be taken up again at another time as the issue is significant). Finally, setting the momentum in the tau direction as defining rest mass and defining [imath]2\beta=\beta_{12}+\beta_{21}[/imath], we obtain the following,

[math]\{c\vec{\alpha}_1\cdot\vec{p}_1+\alpha_{1\tau} m_1c^2+c\vec{\alpha}_2\cdot\vec{p}_2+\alpha_{2 \tau}m_2c^2-2i\hbar c\beta \delta(\vec{x}_1-\vec{x}_2)\}\vec{\Psi}_1\vec{\Psi}_2= \frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2[/math].

 

(Note that [imath]\vec{\alpha}[/imath] is still a four dimensional operator but that [imath]\vec{p}[/imath] is being taken as three dimensional: i.e., [imath]\alpha_\tau \cdot \vec{p}=0[/imath]). That comment is just to maintain consistency with Dirac's definition of [imath]\vec{p}[/imath]: a three dimensional momentum vector. It is interesting to note that if the second entity (the supposed massless element) is identified with the conventional concept of a free photon, its energy is given by c times its momentum. If the vector [imath]\vec{p}_2[/imath] is taken to be a four dimensional vector momentum in my [imath]x,y,z,\tau[/imath] space we can write the energy relationship for element 2 as

[math]\left\{\frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_2\right\}\vec{\Psi}_1 =c\vec{\alpha}_2\cdot\vec{p}_2\vec{\Psi}_2\vec{\Psi}_1=\sqrt{\frac{1}{2}}|cp_2|\vec{\Psi}_2\vec{\Psi}_1[/math]

 

using the fact that [imath]\vec{\alpha}_2\cdot\vec{p}_2=\sqrt{\frac{1}{2}}|\vec{p}_2|[/imath] (which can be deduced from the fact that the dot product is the component of alpha in the direction of the momentum times the magnitude of that momentum). This identification drops out the energy, mass and momentum terms associated with the second element, independent of that element's mass, and brings the fundamental equation into the form

[math]\{c\vec{\alpha}_1\cdot\vec{p}_1+\alpha_{1\tau}m_1c^2 -2i\hbar c \beta\delta(\vec{x}_1-\vec{x}_2)\}\vec{\Psi}_1\vec{\Psi}_2 = \sqrt{\frac{1}{2}}\vec{\Psi}_2 i\hbar\frac{\partial}{\partial t}\vec{\Psi}_1.[/math]

 

If one now defines a new operator [imath]\vec{\gamma}=\vec{\alpha}_1\beta[/imath], then, since [imath]\vec{\alpha}_1\cdot\vec{\alpha}_1=2[/imath] (it consists of four components, the square of each being 1/2), it should be clear that [imath]\vec{\alpha}_1\cdot\vec{\gamma}=2\beta[/imath]. Making that substitution in the above we have

[math]\{c\vec{\alpha}_1\cdot\vec{p}_1+\alpha_{1\tau}m_1c^2 -i\hbar c \vec{\alpha}_1\cdot\vec{\gamma}\delta(\vec{x}_1-\vec{x}_2)\}\vec{\Psi}_1\vec{\Psi}_2 = \sqrt{\frac{1}{2}}\vec{\Psi}_2 i\hbar\frac{\partial}{\partial t}\vec{\Psi}_1.[/math]

 

The last step in this process is to left multiply by [imath]\vec{\Psi}_2^\dagger\cdot[/imath] and integrate over [imath]\vec{x}_2[/imath]. One will obtain a unit coefficient in every term except for the term containing the Dirac delta function (I will refer to this as the interaction term as it is the only term containing [imath]\vec{x}_2[/imath], the coordinate of the second element). Integration over the interaction term will introduce a coefficient consisting of [imath]\vec{\Psi}_2^\dagger(\vec{x}_1,t)\cdot\vec{\Psi}_2(\vec{x}_1,t)[/imath] (integration over a delta function picks out the value of the integrand at the point [imath]\vec{x}_1=\vec{x}_2[/imath]: i.e., we obtain exactly the explicit probability amplitude that element #2 will be at exactly the same point as element #1). After this integration (and multiplying everything by [imath]\sqrt{2}[/imath]) the fundamental equation has the form,

[math]\sqrt{2}\left\{c\vec{\alpha}_1\cdot \vec{p}_1 +\alpha_\tau m_1c^2 - \vec{\alpha}_1 \cdot i\hbar c\left[\vec{\gamma}\vec{\Psi}_2^\dagger(x_1,t)\cdot\vec{\Psi}_2(x_1,t)\right]\right\}\vec{\Psi}_1= i\hbar \frac{\partial}{\partial t}\vec{\Psi}_1.[/math]

 

It is almost trivial to identify this with Dirac's equation: one need only recognize that Dirac's vector representation of his anti-commuting alpha matrix (i.e., in their mathematical consequences) is exactly equivalent to [imath]\sqrt{2}[/imath] times the first three components of my vector alpha operator and his anti-commuting beta matrix corresponds exactly to [imath]\sqrt{2}\alpha_\tau[/imath]. In addition, his equation makes the assumption that there exists a four component complex solution: i.e., his function [imath]\Psi(\vec{x},t)[/imath] can certainly be seen as a simplified approximation to my more general [imath]\vec{\Psi}(\vec{x},t)[/imath] (which, of course, includes many much more complex possibilities). (Note further that [imath]\vec{x}_2[/imath] has vanished from the equation so the subscript is no longer necessary). Making these substitutions (and I sincerely apologize for having used the same Greek letters for my operators as Dirac used for his as I understand the confusion it can lead to), we then have

[math]\left\{c\vec{\alpha}\cdot \left(\vec{p} -i\hbar\left[\vec{\gamma} \vec{\Psi}_2^\dagger(x,t)\cdot\vec{\Psi}_2(x,t)\right]\right)+\beta mc^2 -i\hbar c \left[ \gamma_\tau \vec{\Psi}_2^\dagger(x,t)\cdot\vec{\Psi}_2(x,t)\right]\right\}\Psi= i\hbar \frac{\partial}{\partial t}\Psi[/math]

 

(Note that the dot product, [imath]\vec{\alpha}_1 \cdot \vec{\gamma}[/imath], yields a factor [imath]\sqrt{\frac{1}{2}}[/imath] which cancels the factor [imath]\sqrt{2}[/imath] multiplying the whole thing: the dot product can again be seen as the component of [imath]\vec{\alpha}_1[/imath] parallel to [imath]\vec{\gamma}[/imath] times the magnitude of [imath]\vec{\gamma}[/imath].)

 

Comparing this with Dirac's equation, it is clear that identification of the expectation values of [imath]\vec{\gamma}[/imath] with the standard electromagnetic field potentials [imath]\vec{A}[/imath] and [imath]\Phi[/imath] via the expressions,

[math]\Phi(\vec{x},t)=-i\frac{\hbar c}{e}\left[\gamma_\tau \vec{\Psi}_2^\dagger(\vec{x},t)\cdot \vec{\Psi}_2(\vec{x},t)\right][/math]

 

and

 

[math]\vec{A}(\vec{x},t)=i\frac{\hbar c}{e}\left[\vec{\gamma}\vec{\Psi}_2^\dagger(\vec{x},t)\cdot \vec{\Psi}_2(\vec{x},t)\right][/math].

 

leads to exactly Dirac's equation (note that only the x, y and z components of [imath]\vec{\gamma}[/imath] are to be used in the definition of [imath]\vec{A}[/imath]). It follows that Dirac's equation can be seen as an approximation to my fundamental equation under some very specific conditions. It is important that these conditions be examined carefully. First, this is an approximate solution to my equation if we have an interaction between two events in total isolation from the rest of the universe. Except for the "two" events part, that is exactly the common approximation made when using Dirac's equation. In Dirac's equation, the electromagnetic field potentials are certainly not analogous to what I have put forth as a fundamental element: i.e., a “point” entity interacting with the rest of the universe via a Dirac delta function.

 

However, the photon (which does bear a close resemblance to what I have put forth as a fundamental element) is quite often described as being the consequence of quantizing the electromagnetic field. Anyone who has followed the derivation of my fundamental equation knows that, under that derivation, two lone fermions cannot interact: i.e., the Dirac delta function vanishes via the Pauli exclusion principle. Since the electron (the particle element number one is to represent) is a fermion, the second element has to be a boson. If that is the case, the second element in the above deduction must obey Bose-Einstein statistics: i.e., an unlimited number of particles may occupy the same state at the same time.

 

That being the case, under the assumption that interaction between photons is negligible and that any number of bosons may occupy the same state (the same function [imath]\psi(\vec{x},t)[/imath]), we may clearly include as many photons (the name I have used for element number two in the above deduction) as we wish. Thus it is that the “electromagnetic field potentials” controlling the behavior of the electron could be defined by the expressions

[math]\Phi(\vec{x},t)=-i\frac{\hbar c}{e}\sum_{i=2}^\infty \left[\gamma_\tau \vec{\Psi}_i^\dagger(\vec{x},t) \cdot \vec{\Psi}_i(\vec{x},t)\right][/math]

 

and

 

[math]\vec{A}(\vec{x},t)=i\frac{\hbar c}{e}\sum_{i=2}^\infty \left[\vec{\gamma}\vec{\Psi}_i^\dagger(\vec{x},t) \cdot \vec{\Psi}_i(\vec{x},t)\right][/math].

 

This is clearly a representation which yields the electromagnetic field potentials as the collective result of a collection of photons. This picture is essentially quite equivalent to seeing photons as quantized electromagnetic fields except from exactly the opposite direction. The philosophic question is, of course, which is the more fundamental perspective. Modern physics pretty well takes the field picture as more fundamental: i.e., it is their presumption that the quantized elements (what they call particles) are to be discovered by “quantizing field solutions to their physical problems”. For example, right now, the big issue in physics is the attempt to “quantize Einstein's general relativistic field equations” in order to obtain the characteristics of the “graviton”. I will show explicitly that it is more rational to hold the quantized elements as fundamental and the fields as deduced consequences. That view removes a lot of subtle difficulties in the modern physics perspective that field theory is fundamental.

 

Laying that issue aside for the moment, there is still a subtle difficulty with the result I have achieved. I have shown that my fundamental equation yields a result quite analogous to Dirac's equation: i.e., Dirac's equation presumes the electromagnetic potentials are given and, in my deduction of my fundamental equation, I proved there always exists a potential function which will yield the observed behavior. Actually all this proves is that my expression above “could be” a valid expression of the electromagnetic potentials. That really isn't sufficient to identify my result with Dirac's equation as the electromagnetic potentials are specifically defined.

 

The problem here is that, from the perspective of my fundamental equation (and the work above), the electromagnetic potential is a many body problem and, as such, is a problem we cannot solve. That being the case, let me take the same attack to discover the form of [imath]\vec{\Psi}_2[/imath] which I used to discover the form of [imath]\vec{\Psi}_1[/imath]: i.e., presume a solution for [imath]\vec{\Psi}_1[/imath] and examine (via the interaction term) the kind of equation which the expectation value of [imath]\vec{\gamma}[/imath] must obey.

[math]<\vec{\gamma}>=\left[\vec{\gamma}\vec{\Psi}_2^\dagger(\vec{x}_2,t)\cdot \vec{\Psi}_2(\vec{x}_2,t)\right][/math].

 

as derived from

[math]\{c\vec{\alpha}_1\cdot\vec{p}_1+\alpha_{1\tau} m_1c^2+c\vec{\alpha}_2\cdot\vec{p}_2+\alpha_{2 \tau}m_2c^2-2i\hbar c\beta \delta(\vec{x}_1-\vec{x}_2)\}\vec{\Psi}_1\vec{\Psi}_2= \frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2[/math].

 

(Please note that I have returned to my alpha and beta operators; [imath]\alpha[/imath] and [imath]\beta[/imath] here are not Dirac's matrices.) In this case, I will simply approximate the solution for element number one (pulling off the dependence on [imath]\tau[/imath] in terms of the rest mass of the electron) as

[math]\vec{\Psi}_1(x,y,z,\tau,t)=\vec{\sigma}\psi(x,y,z,t)e^{-i\frac{m_e c}{\hbar}\tau}[/math]

 

where [imath]\vec{\sigma}[/imath] supplies the vector component and

 

[math]\psi^\dagger(x,y,z,t)\psi(x,y,z,t)\approx a\delta(\vec{x}-\vec{v}t)[/math]

 

where a is some constant setting the "amplitude" of the effective delta function thus specifying the position of the electron.

 

What I have done is to approximate element number one (the supposed Dirac electron) as a massive point whose position is approximately given by [imath]\vec{v}t[/imath]. This is certainly consistent with the approximations used in the standard definition of electromagnetic potential field theory. If we assume the existence of element number two has negligible impact on the state of element number one (which we have clearly done by postulating that the above solution is a reasonable representation and substituting the definition [imath]-i\hbar \vec{\nabla}_{xyz\tau}=\vec{p}+cm_e \hat{\tau}[/imath]) then it must be true that

[math]\left\{c\vec{\alpha}_1\cdot \left(-i\hbar \vec{\nabla}_1\right) -\frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\right\}\vec{\Psi}_1=0.[/math]

 

Once again, substituting this relationship into the equation we are trying to solve yields the interaction term as the only term with any dependence upon [imath]\vec{x}_1[/imath] and the resultant equation (again using [imath]-i\hbar \vec{\nabla}=\vec{p}+cm_e \hat{\tau}[/imath]) is as follows:

[math]\left\{-i\hbar c\vec{\alpha}_2\cdot \vec{\nabla}_2 -2i\hbar c \beta \delta (\vec{x}_1-\vec{x}_2)\right\}\psi\vec{\Psi}_2=\frac{i\hbar}{\sqrt{2}}\psi\frac{\partial}{\partial t}\vec{\Psi}_2[/math].

 

The dependence on [imath]\vec{x}_1[/imath] can be eliminated in exactly the same way the dependence on [imath]\vec{x}_2[/imath] was eliminated when deriving Dirac's equation. Multiply through by [imath]\psi^\dagger[/imath] and integrate over [imath]\vec{x}_1[/imath]. Again, the interaction term spikes when [imath]\vec{x}_1=\vec{x}_2[/imath] and the result is the amplitude of the probability that our electron is in the position referred to as [imath]\vec{x}_2[/imath]: i.e., [imath]\psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t)[/imath]. The resultant equation is

[math]\left\{-i\hbar c\vec{\alpha}\cdot \vec{\nabla}_2 -2i\hbar c \beta \psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t) \right\}\vec{\Psi}_2=\frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_2[/math].

 

This equation implies that, with regard to the function [imath]\vec{\Psi}_2[/imath], the expression

[math]\left\{-i\hbar c\vec{\alpha}\cdot \vec{\nabla}_2 -2i\hbar c \beta \psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t) \right\}=\frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}[/math].

 

can be seen as an operator identity.

 

Applying this operator identity to the equation from which it arose (recognizing that the alpha and beta operators cause the cross terms to vanish and yield a coefficient 1/2 for the direct terms) yields

[math]\left\{-\frac{1}{2}\hbar^2 c^2\nabla_2^2 -2\hbar^2 c^2\left(\psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t)\right)^2 \right\}\vec{\Psi}_2=-\hbar^2\frac{1}{2}\frac{\partial^2}{\partial t^2}\vec{\Psi}_2[/math].
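The cross-term cancellation invoked in this squaring step is easy to demonstrate numerically. The following sketch of my own uses Dirac's standard matrices (whose squares are unity, so the direct-term coefficient is 1 rather than the 1/2 of my operators), but the mechanism is identical: anticommutation kills every cross term, leaving a multiple of the identity.

```python
# Numerical illustration: for Dirac's matrices, (alpha . k + beta*m)^2 is a
# multiple of the identity, (k^2 + m^2) I, because all cross terms anticommute
# away -- the same mechanism that produces the squared (wave-equation) form.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

alphas = [np.block([[Z2, s], [s, Z2]]) for s in (sx, sy, sz)]
beta = np.block([[I2, Z2], [Z2, -I2]])

k = np.array([0.3, -1.2, 0.7])   # arbitrary momentum components
m = 2.5                          # arbitrary mass term
D = sum(ki * ai for ki, ai in zip(k, alphas)) + m * beta

# Squaring the operator: only the direct terms survive.
assert np.allclose(D @ D, (k @ k + m**2) * np.eye(4))
print("cross terms cancel: D^2 = (k^2 + m^2) I")
```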

 

Dividing this equation by [imath]-\frac{1}{2}\hbar^2c^2[/imath] and dropping the subscript referring to [imath]\vec{x}_2[/imath] (only one spatial argument remains in the equation), we conclude that [imath]\vec{\Psi}_2[/imath] must obey the equation,

[math]\nabla^2\vec{\Psi}_2 +\left(2\psi^\dagger(\vec{x},t)\psi(\vec{x},t)\right)^2\vec{\Psi}_2=\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\vec{\Psi}_2[/math].

 

However, if you look back at the definition of the electromagnetic potential fields we decided were necessary for my equation to be an approximation to Dirac's equation, you will discover that it is the expectation value of [imath]\vec{\gamma}[/imath] which we need to solve for. Clearly, it is necessary to left multiply this equation by [imath]\vec{\Psi}_2^\dagger \vec{\gamma}\cdot[/imath], obtaining the following:

[math]\vec{\Psi}_2^\dagger \vec{\gamma}\cdot \nabla^2\vec{\Psi}_2 +\left(2\psi^\dagger(\vec{x},t)\psi(\vec{x},t)\right)^2\vec{\Psi}_2^\dagger\vec{\gamma}\cdot\vec{\Psi}_2=\frac{1}{c^2}\vec{\Psi}_2^\dagger\vec{\gamma}\cdot\frac{\partial^2}{\partial t^2}\vec{\Psi}_2[/math].

 

At this point, I want to bring up an interesting mathematical relationship,

[math]\frac{\partial^2}{\partial x^2}\left\{\vec{\Psi}^\dagger(\vec{x},t)\cdot\vec{\Psi}(\vec{x},t)\right\}=\frac{\partial}{\partial x}\left\{\left( \frac{\partial}{\partial x}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\vec{\Psi}(\vec{x},t) + \vec{\Psi}^\dagger(\vec{x},t)\cdot\left(\frac{\partial}{\partial x}\vec{\Psi}(\vec{x},t)\right)\right\}[/math]

 

[math]=\left\{\left( \frac{\partial^2}{\partial x^2}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\vec{\Psi}(\vec{x},t) + 2 \left(\frac{\partial}{\partial x}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\left(\frac{\partial}{\partial x}\vec{\Psi}(\vec{x},t)\right)+ \vec{\Psi}^\dagger(\vec{x},t)\cdot\left(\frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)\right\}.[/math]
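This expansion is nothing more than the product rule applied twice, and it can be confirmed symbolically. A minimal sympy sketch of my own, treating one complex component of [imath]\vec{\Psi}[/imath] in terms of its real and imaginary parts:

```python
# Symbolic confirmation: d^2/dx^2 (Psic * Psi) expands into the three terms
# Psic'' Psi + 2 Psic' Psi' + Psic Psi'' (product rule applied twice).
import sympy as sp

x = sp.symbols('x', real=True)
u = sp.Function('u')(x)   # real part of one component of Psi
v = sp.Function('v')(x)   # imaginary part
Psi = u + sp.I * v
Psic = u - sp.I * v       # the conjugate (dagger) of that component

lhs = sp.diff(Psic * Psi, x, 2)
rhs = (sp.diff(Psic, x, 2) * Psi
       + 2 * sp.diff(Psic, x) * sp.diff(Psi, x)
       + Psic * sp.diff(Psi, x, 2))
assert sp.simplify(sp.expand(lhs - rhs)) == 0
print("product-rule identity verified")
```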

 

Adding to the above the subtle mathematical relationship [imath]\vec{\Phi}_1^\dagger \cdot\vec{\Phi}_2=\left(\vec{\Phi}_1 \cdot\vec{\Phi}_2^\dagger\right)^\dagger[/imath], we can assert that

[math]\frac{\partial^2}{\partial x^2}\left\{\vec{\Psi}^\dagger(\vec{x},t)\cdot\vec{\Psi}(\vec{x},t)\right\}=2\left\{\left( \vec{\Psi}^\dagger(\vec{x},t)\cdot\frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right) + \left(\frac{\partial}{\partial x}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\left(\frac{\partial}{\partial x}\vec{\Psi}(\vec{x},t)\right)\right\}.[/math]

 

Since this analysis is also valid for partials with respect to the arguments y, z and [imath]\tau[/imath] (of course, if element two is a photon the partial with respect to tau vanishes, but suppose we leave it in there for the time being), one can immediately write down

[math]\nabla^2\left\{\vec{\Psi}^\dagger(\vec{x},t)\cdot\vec{\Psi}(\vec{x},t)\right\}=2\left\{\left(\vec{\Psi}^\dagger(\vec{x},t) \cdot\nabla^2\vec{\Psi}(\vec{x},t)\right) + \left(\vec{\nabla}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\left(\vec{\nabla}\vec{\Psi}(\vec{x},t)\right)\right\}.[/math]

 

Now, just suppose those two terms were the same. If that were the case, then one could write down [imath]\nabla^2\left(\vec{\Psi}^\dagger \cdot \vec{\Psi}\right)=4\vec{\Psi}^\dagger\cdot\nabla^2\vec{\Psi}[/imath]. That says something quite significant: once you understand that [imath]\vec{\nabla}[/imath] is the essential mathematical element of the momentum operator, you should realize that using that equality is totally equivalent to presuming the momentum is an eigenvalue of [imath]\vec{\Psi}[/imath]: i.e., [imath]\nabla\vec{\Psi}= k\vec{\Psi}[/imath] would imply [imath]\nabla^2\vec{\Psi}= k^2\vec{\Psi}[/imath]. Thus the assumption requires that the expectation values under discussion are macroscopic entities not subject to quantum fluctuations. Exactly the same argument (with energy instead of momentum) can be used to set

[math]\frac{\partial^2}{\partial t^2}\left(\vec{\Psi}^\dagger \cdot \vec{\Psi}\right)=4\vec{\Psi}^\dagger\cdot\frac{\partial^2}{\partial t^2}\vec{\Psi}[/math]
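The eigenvalue assumption can be tested symbolically. A small sketch of my own, using a real eigenfunction [imath]e^{kx}[/imath] of the derivative (a complex plane wave would not satisfy the equality, since conjugation flips the sign of its eigenvalue; that is precisely why the assumption restricts the result to non-fluctuating, macroscopic expectation values):

```python
# Symbolic check: for a real eigenfunction of d/dx (eigenvalue k), the
# identity nabla^2(Psi^dagger Psi) = 4 Psi^dagger nabla^2 Psi holds exactly.
import sympy as sp

x, k = sp.symbols('x k', real=True)
Psi = sp.exp(k * x)                    # real eigenfunction: d/dx Psi = k Psi
lhs = sp.diff(Psi * Psi, x, 2)         # nabla^2 (Psi^dagger . Psi)
rhs = 4 * Psi * sp.diff(Psi, x, 2)     # 4 Psi^dagger . nabla^2 Psi
assert sp.simplify(lhs - rhs) == 0
print("eigenfunction identity verified")
```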

 

In using these relationships, a very specific approximation is being made. The consequences of that approximation will need to be discussed; however, I will leave that issue to later. Meanwhile, using these relationships and the fact that [imath]\vec{\gamma}[/imath] commutes with both differential operators, we can write the equation the expectation values of [imath]\vec{\gamma}[/imath] must obey as follows:

[math]\frac{1}{4}\nabla^2\left(\vec{\Psi}_2^\dagger \vec{\gamma}\cdot \vec{\Psi}_2\right) +\left(2\psi^\dagger(\vec{x},t)\psi(\vec{x},t)\right)^2\left(\vec{\Psi}_2^\dagger\vec{\gamma}\cdot\vec{\Psi}_2\right)=\frac{1}{4c^2} \frac{\partial^2}{\partial t^2}\left(\vec{\Psi}_2^\dagger\vec{\gamma}\cdot\vec{\Psi}_2\right)[/math].

 

Or, if [imath]\nabla^2[/imath] is interpreted to be the standard three dimensional version (no partial with respect to tau) we should put in the tau term explicitly and obtain

[math]\frac{1}{4}\nabla^2\left(\vec{\Psi}_2^\dagger \vec{\gamma}\cdot \vec{\Psi}_2\right) -\frac{1}{4}\frac{m_2^2c^2}{\hbar^2}\left(\vec{\Psi}_2^\dagger \vec{\gamma}\cdot \vec{\Psi}_2\right)[/math]

 

[math]+\left(2\psi^\dagger(\vec{x},t)\psi(\vec{x},t)\right)^2\left(\vec{\Psi}_2^\dagger\vec{\gamma}\cdot\vec{\Psi}_2\right) =\frac{1}{4c^2} \frac{\partial^2}{\partial t^2}\left(\vec{\Psi}_2^\dagger\vec{\gamma}\cdot\vec{\Psi}_2\right)[/math].

 

We have already concluded that the electromagnetic potentials [imath]\Phi(\vec{x},t)[/imath] and [imath]\vec{A}(\vec{x},t)[/imath] have to be proportional to the respective expectation values of [imath]\vec{\gamma}[/imath], where the proportionality constant is [imath]-i\frac{\hbar c}{e}[/imath] for [imath]\Phi[/imath] and the negative of that for [imath]\vec{A}[/imath]. Since those terms can be factored out, it follows directly that the electromagnetic potentials must obey equations with the following structure

[math]\nabla^2\Phi -\frac{m_2^2c^2}{\hbar^2}\Phi-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\Phi= -4\pi\rho [/math]

 

where [imath]\rho[/imath] is defined to be

[math]\rho=\frac{1}{\pi}\Phi\sum_i\left(2\psi_i^\dagger(\vec{x},t)\psi_i(\vec{x},t)\right)^2[/math]

 

and

[math]\nabla^2\vec{A}-\frac{m_2^2c^2}{\hbar^2}\vec{A} -\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\vec{A}= -\frac{4\pi}{c^2}\vec{J}[/math]

 

where [imath]\vec{J}[/imath] is defined to be

[math]\vec{J}=\frac{c^2}{\pi}\vec{A}\sum_i\left(2\psi_i^\dagger(\vec{x},t)\psi_i(\vec{x},t)\right)^2[/math].

 

If [imath]m_2 =0 [/imath] (that is, we presume the boson is a photon), the above equations are exactly Maxwell's equations expressed in the microscopic Lorenz gauge. The only questions here are the relationship between “e” and [imath]\left( 2\psi^\dagger(\vec{x},t)\psi(\vec{x},t)\right)^2[/imath] and the role of the electromagnetic potentials in [imath]\rho[/imath] and [imath]\vec{J}[/imath]. Both are fundamentally dependent upon the shape of [imath]\psi(\vec{x},t)[/imath]. We have already made the approximation that [imath]\psi^\dagger(\vec{x},t)\psi(\vec{x},t) \approx a\delta(\vec{x}-\vec{v}t)[/imath], which means that the term is extremely localized (essentially a point), so that both of these factors end up being little more than mechanisms available to set the strength of the interaction: i.e., the specific value of “e”, the electric charge. Dynamically speaking, this is exactly the experimental circumstance defined by Dirac's equation together with Maxwell's equations.

 

One last subtle difficulty seems to exist with my presentation. Both [imath]\Phi[/imath] and [imath]\vec{A}[/imath] are expectation values of [imath]\vec{\gamma}[/imath] explicitly multiplied by [imath]i=\sqrt{-1}[/imath] and some additional “real” numbers. This seems strange in view of the fact that [imath]\Phi[/imath] and [imath]\vec{A}[/imath] are, by electromagnetic theory, real valued. In view of that fact, please note the following calculation of the amplitude of [imath]\vec{\gamma}[/imath].

[math]A_\gamma = \sqrt{\vec{\gamma}\cdot\vec{\gamma}}=\left\{\left(\sum_{i=1}^4\alpha_i\beta\hat{x}_i\right)\cdot\left( \sum_{i=1}^4\alpha_i\beta\hat{x}_i\right)\right\}^\frac{1}{2}=\left\{\sum_{i=1}^4\alpha_i\beta\alpha_i\beta\right\}^\frac{1}{2}[/math]

 

[math]=\left\{-\sum_{i=1}^4\alpha_i\alpha_i\beta\beta\right\}^\frac{1}{2}=\left\{-\sum_{i=1}^4\frac{1}{2}\frac{1}{2}\right\}^\frac{1}{2}=\sqrt{-1}=i[/math]

 

It follows that both charge and current densities in this mental model are, in fact, real.
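The amplitude calculation above is easy to check numerically. The five mutually anticommuting matrices below (built from Kronecker products of Pauli matrices) are my own illustrative choice, not the operators of the thread; what matters is only that, scaled by [imath]1/\sqrt{2}[/imath], each squares to one half as the thread's convention requires, and then the sum [imath]\sum_i (\alpha_i\beta)(\alpha_i\beta)[/imath] indeed comes out to -1:

```python
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Five mutually anticommuting 4x4 matrices (a standard Clifford construction),
# scaled by 1/sqrt(2) so each squares to 1/2 per the convention used here
alphas = [np.kron(sx, sx), np.kron(sx, sy), np.kron(sx, sz), np.kron(sy, I2)]
beta = np.kron(sz, I2) / np.sqrt(2)
alphas = [a / np.sqrt(2) for a in alphas]

# gamma . gamma = sum over i of (alpha_i beta)(alpha_i beta)
S = sum((a @ beta) @ (a @ beta) for a in alphas)
print(np.allclose(S, -np.eye(4)))  # -> True: gamma.gamma = -1, amplitude = i
```

Each term contributes [imath]-\frac{1}{2}\cdot\frac{1}{2}[/imath], and four of them sum to -1, matching the hand calculation.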

 

There are a few issues which deserve a little attention. First, the substitution [imath]\nabla^2\left(\vec{\Psi}^\dagger \cdot \vec{\Psi}\right)=4\vec{\Psi}^\dagger\cdot\nabla^2\vec{\Psi}[/imath] essentially says that the resultant equations (and that would be Maxwell's equations) are only valid so long as the energy of the virtual photon is small enough that quantum fluctuations can be ignored. That explains the old classical electron radius problem from a slightly different perspective: Maxwell's equations are an approximation to the actual situation. The correct solution requires that one include additional elements (those quantum fluctuations which arise when the energy exceeds a certain threshold). It could very well be that the energy must be below the energy necessary to create a fluctuation equal to the energy of the electron: the field solutions must include photon-photon interactions before the "classical electron radius" is reached. It is also possible that inclusion of the fluctuations could lead to massive boson creation and another solution. The difficulty is that I have not discovered a way to approximate a solution to the required many body problem.

 

A second issue concerns the existence of magnetic monopoles. In this development of Maxwell's equations, the symmetry between the electric and magnetic fields does not exist and likewise, “magnetic monopoles” do not exist.

 

My position (directly opposed to Qfwfq) is that quantization of fields is not the proper approach to understanding reality. It is the quantum elements themselves which are conceptually fundamental, not the supposed fields he (and others) want to quantize. The fields are the consequence of the information to be explained, not the fundamental information itself.

 

From the above we may conclude that Maxwell's equations are an approximation to the fundamental equation and thus, the entire field of Classical Electrodynamics may be deduced from my fundamental constraint which is, in fact, nothing more than "any explanation of the universe must be internally self consistent". It is apparent that, once one defines "charge" and "current", Maxwell's equations are also true by definition.

 

Actually, the analysis just done implies considerably more than Classical Electrodynamics: Quantum Electrodynamics is a direct consequence of adjusting for additional terms in the fundamental equation (quantum fluctuations). It is also of significance that, in the fundamental equation, the gradient operator was explicitly defined to be a four dimensional entity. So long as the second element (the boson interaction) is a massless element, the tau component of the gradient will vanish and the resulting deduced equations will correspond exactly to Maxwell's equations; however, if the interacting boson generating the field is not massless, we end up with an additional term in these supposed Maxwell's equations.

 

It is certainly of interest to examine the consequences of such a term. In analogy to electrostatics, let us look at a static solution for [imath]\Phi[/imath] in the vicinity of a Dirac event localized at the origin. In that case (for r not equal to zero, away from the origin) the equation for [imath]\Phi[/imath], expressed in spherical coordinates, is

[math] \frac{1}{r}\frac{\partial^2}{\partial r^2}(r\Phi)+\frac{1}{r^2\sin(\theta)}\frac{\partial}{\partial \theta}\left(\sin(\theta) \frac{\partial \Phi}{\partial \theta}\right)+\frac{1}{r^2 \sin^2(\theta)}\frac{\partial^2 \Phi}{\partial \phi^2}=\left(\frac{m_2c}{\hbar}\right)^2\Phi[/math]

 

The angular solutions are identical to the standard spherical harmonics [imath]Y_{lm}(\theta,\phi)[/imath] presented in any decent discussion of electrostatics (you should take a look at the wikipedia entry; they have a nice graphic presentation a little way down the page). The radial equation for the case l=0 becomes:

[math]\frac{d^2U}{dr^2}=\left(\frac{m_2c}{\hbar}\right)^2U\;\;\;\;\;where\;\;\;\;\;\Phi=\frac{1}{r}U(r)Y_{00}(\theta,\phi)[/math]

 

with the obvious solution

[math]\Phi(r,\theta,\phi)=\frac{\rho}{r}e^{-\frac{m_2c}{\hbar}r}[/math]

 

Note that this equation presumes the validity of the representation of the source as a point element. We have two problems with this solution; first, we have the problem of presuming no quantum fluctuation and second, the solution has to be a function of the actual wave function of the Dirac element. However, the above is quite similar to the Yukawa potential. Thus the above constitutes strong evidence that nuclear interactions are also true by definition. Personally I will leave the issue there because I no longer have the mental abilities to carry this thing any further.
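As a sanity check on that solution, a couple of lines of sympy verify that [imath]e^{-\kappa r}/r[/imath] (with [imath]\kappa = m_2c/\hbar[/imath] and the overall constant set to 1, purely for illustration) satisfies the l=0 equation away from the origin; the caveats above about the point-source idealization still apply:

```python
import sympy as sp

r, kappa = sp.symbols('r kappa', positive=True)
Phi = sp.exp(-kappa * r) / r  # Yukawa-type potential, overall constant set to 1

# Radial part of the Laplacian for a spherically symmetric (l = 0) function
laplacian = sp.diff(r * Phi, r, 2) / r

# Away from the origin the equation reads  del^2 Phi = kappa^2 Phi
print(sp.simplify(laplacian - kappa**2 * Phi))  # -> 0
```

The exponential screening with range [imath]\hbar/m_2c[/imath] is exactly what distinguishes this from the Coulomb case.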

 

Essentially, what I have proved is that Dirac's equation is nothing more than an approximation to my fundamental equation for a specific circumstance. Since the fundamental equation is true by definition, we also know that it is merely the definition of the circumstance which makes Dirac's equation true: i.e., it thus becomes obvious that Dirac's equation is true by definition.

 

The existence of Dirac particles does tell us something about the universe: it tells us that certain specific patterns of data exist in our universe. Just as the astrologer points to specific events which occurred together with certain astrological signs, the actual information content is that the events occurred and that the signs were there. That cannot be interpreted as a defense that the astrologer's world view is correct! Both presentations are nothing more than mechanisms for cataloging information. The apparent advantage of the classical scientific position is that no cases exist which violate his "catalog", or so he tells you. When it comes to actual fact, both the scientist and the astrologer have their apologies for failure ready (mostly that you don't understand the situation or there are exigent circumstances). The astrologer says that there was a unique particular combination of signs the impact of which was not taken into account, while the scientist says some new theory (another set of signs?) was not taken into account.

 

What is significant is that the existence of Dirac particles may add to our knowledge of the universe but it adds nothing to our understanding of the universe. This is an important point. The reader should realize that the object of all basic scientific research is to discover the rules which differentiate between all possible universes and the one we actually find ourselves in. Since my fundamental equation must be satisfied by all possible universes, only constraints not specifically required by that equation tell us anything about our universe.

 

Have fun -- Dick

Link to comment
Share on other sites

Well Anssi, I finally decided to post this thing. I think the evidence is pretty good that you and I are the only people taking the issue seriously and we could probably accomplish as much in private communications but I still have some hopes that there are some intelligent people thinking about the things I say so I will post on.

 

Yes, definitely we should keep this public. I am starting to think that these web forums may not be the best place to find people who'd be ready to invest the required time to understand this, but on the other hand it is the best forum for me to ask help with the math issues...

 

I'll have to find a proper time to really be able to focus before I reply more to this thread... I'm just doing a couple of easy replies on other threads right now.

 

-Anssi

Link to comment
Share on other sites

In the same vein, I have brought together, in one expression, the entire realms of physics represented by Newtonian mechanics, quantum mechanics, electrodynamics and relativity (both special and general). And all this without postulating a theoretical relationship but rather by deduction from the simple limitations required by self consistency.

 

Yes, that seems to be exactly what has happened, and I am myself starting to wonder where to find those competent people who could take a look and follow this. If I can understand it, I'm sure there's a lot of people out there who can as well. Albeit perhaps it helps to have some pre-existing understanding of the problems associated with us having to first define the things whose behaviour we take as "the laws of physics"... But I would think this should spark some interest and attention as it solves so many outstanding problems of physics...

 

Onto the topic:

 

To begin with, the following is a rather common representation of Dirac's equation for an electron coupled to an electromagnetic field.

[math]\left\{c\vec{\alpha}\cdot\left(\vec{p} -\frac{e}{c}\vec{A}\right)+\beta mc^2 +e\Phi \right\}\Psi=i\hbar\frac{\partial \Psi}{\partial t}[/math].

 

The various components of that equation are defined as follows:

[imath]\Psi(\vec{x},t)[/imath] is the “wave function” of the electron represented by Dirac's equation: i.e., the probability of finding an electron at the point defined by [imath]\vec{x}=x\hat{x}+y\hat{y}+z\hat{z}[/imath] at time t is given by [imath]\Psi^\dagger \Psi dV.[/imath] In Dirac's representation, [imath]\Psi(\vec{x},t )[/imath] has four components or, if one wishes to include “imaginary” and “real” as two components, [imath]\Psi(\vec{x},t)[/imath] has eight explicit components. Although this can be thought of as an eight dimensional abstract space, no one expresses [imath]\Psi(\vec{x},t)[/imath] in vector notation; it is simply taken as understood.

 

So, by that you mean that [imath]\Psi(\vec{x},t )[/imath] essentially outputs a vector of four (complex) components... And once again it's the magnitude that correlates to the probability (via [imath]\Psi^\dagger \Psi dV[/imath]). Does it mean something that it has exactly four components?

 

The partial derivative [imath]i\hbar\frac{\partial}{\partial t}[/imath] is exactly the same energy operator defined through

 

In the Dirac equation, alpha and beta are anticommuting "matrices" essentially derived from the Pauli spin matrices. The vector matrix alpha consists of three matrix components [imath](\vec{\alpha}= \alpha_x \hat{x}+\alpha_y\hat{y}+\alpha_z\hat{z})[/imath] and the fourth matrix, beta, is associated with [imath]mc^2[/imath] in exactly the same way my [imath]\alpha_\tau[/imath] "operator" is associated with the momentum in the tau direction. (My operators could also be represented by "matrices" but, because there are essentially an infinite number, the idea is not really useful.) There is a minor difference in the definition of the respective anticommuting operators: effectively, Dirac's matrices are mine times [imath]\sqrt{2}[/imath]. This is a simple consequence of the fact that the squared magnitudes of his operators are defined to be unity whereas mine are defined to be one half. Other than that, my operators can be thought of as operating in an abstract multidimensional space quite analogous to Dirac's matrices.
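For the curious reader, the anticommutation being described is easy to exhibit concretely. Below is a sketch (my own construction of Dirac's matrices in the standard representation, not anything specific to this thread) verifying that the three alphas and beta mutually anticommute and square to unity, and that dividing by [imath]\sqrt{2}[/imath] gives operators squaring to one half:

```python
import numpy as np

# Pauli matrices, 2x2 zero and identity blocks
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)

# Dirac's alpha and beta in the standard representation
alpha = [np.block([[Z, s], [s, Z]]) for s in (sx, sy, sz)]
beta = np.block([[I2, Z], [Z, -I2]])

# All four mutually anticommute, and each squares to the identity
mats = alpha + [beta]
for i in range(4):
    for j in range(4):
        anti = mats[i] @ mats[j] + mats[j] @ mats[i]
        expected = 2 * np.eye(4) if i == j else np.zeros((4, 4))
        assert np.allclose(anti, expected)

# Scaling by 1/sqrt(2) gives operators whose squares are one half,
# the convention used for the operators in this thread
half = beta / np.sqrt(2)
print(np.allclose(half @ half, np.eye(4) / 2))  # -> True
```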

 

Well, I'm not really familiar with pauli spin matrices, or matrices in general. (I don't know what it should tell me that something is considered to be "a matrix"... The mathematical representation [imath](\vec{\alpha}= \alpha_x \hat{x}+\alpha_y\hat{y}+\alpha_z\hat{z})[/imath] looks just like a vector to me)

 

Should I teach myself some of that stuff?

 

Dirac's [imath]\vec{p}=-i\hbar\left\{\frac{\partial}{\partial x}\hat{x}+\frac{\partial}{\partial y}\hat{y}+\frac{\partial}{\partial z}\hat{z}\right\} [/imath], is exactly the same momentum operator defined in my deduction of Schrödinger's equation.

 

Yup.

 

The factors e and c are the charge on the electron and the speed of light. And finally [imath]\vec{A}[/imath] and [imath]\Phi[/imath] are the standard electromagnetic field potentials defined through the solutions to Maxwell's equations.

 

Very unfamiliar with that stuff, but okay.

 

My fundamental equation, written in a four dimensional form (the Euclidean space of the representation being [imath]\hat{x}[/imath], [imath]\hat{y}[/imath], [imath]\hat{z}[/imath] and [imath]\hat{\tau}[/imath]), is

[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}+\sum_{i \neq j}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots,t)=K\frac{\partial}{\partial t}\vec{\Psi}[/math]

 

...where the [imath]x, y, z, \tau[/imath] parameters are embedded inside [imath]\vec{\alpha}_i[/imath] and [imath]\vec{\nabla}[/imath]. Just thought someone might be wondering why does this "four dimensional form" look exactly like the one dimensional form :)

 

It should be clear to the reader that this bears a striking similarity to Dirac's equation (in fact, I have been told by a few professional physicists that it is no more than a re-expression of Dirac's equation; strong evidence they didn't look very closely). Two clear differences stand out: in Dirac's equation, the wave function has a complex value (it is actually a vector in the complex space represented by a real component and an imaginary component) in a four dimensional matrix space, whereas my wave function is a vector in an abstract space of arbitrary dimensionality and, secondly, Dirac's equation has only one spatial argument [imath]\vec{x}[/imath] (it is a “one body” expression) whereas my equation has an infinite number of such arguments (it is a “many body” expression). The first issue is of no serious account (actually it existed in the derivation of Schrödinger's equation though I never pointed it out); my representation merely allows for more complex results, things to be discussed further down the road. The second issue will be handled in essentially the same manner as it was handled in the derivation of Schrödinger's equation.

 

Yup.

 

In Dirac's equation, [imath]\vec{A}[/imath] and [imath]\Phi[/imath] are electromagnetic potentials. In my equation, the potentials, V(x,t), are obtained by integrating over the expectations of some specific set of known solutions. Since, in common physics, the electromagnetic potentials arise through the existence of photons, it seems quite reasonable that the electromagnetic potentials (from my fundamental equation) would arise from integrating over the expectations of known massless solutions.

 

Yup.

 

In accordance with exactly the attack I took in deriving Schrödinger's equation, I will divide my [imath]\vec{\Psi}[/imath] into three components.

[math]\vec{\Psi}=\vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0[/math]

 

where [imath]\vec{\Psi}_0[/imath] represents the entire rest of the universe, taken to be essentially independent of [imath]\vec{\Psi}_1[/imath] and [imath]\vec{\Psi}_2[/imath]. [imath]\vec{\Psi}_1[/imath] is the function we are looking for (the function which yields the expectations for the Dirac particle) and [imath]\vec{\Psi}_2[/imath] is to yield the known expectations for the electromagnetic potential arising from a single photon (that result will be used to generalize the final result: i.e., more photons will be added later). This neglect of all other possible contributions being explicitly in accordance with the approach taken by modern physics, it is entirely reasonable for me to presume no connections exist between [imath]\vec{\Psi}_0[/imath] and the other two functions.

 

Yes, and if that presumption - of no connections - is embedded in Dirac equation, I would think it is not only reasonable, but absolutely required to take [imath]\vec{\Psi}_1[/imath], [imath]\vec{\Psi}_2[/imath] as having no feedback with [imath]\vec{\Psi}_0[/imath]. It is how they are to behave by definition, isn't it?

 

Hmm, so then before I attempt to walk through that first step of algebra, I would like to make sure I understand [imath]\vec{\Psi}=\vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0[/imath] correctly...

 

I take it that [imath]\vec{\Psi}_1[/imath] and [imath]\vec{\Psi}_2[/imath] are both a function of [imath]x_1[/imath] and [imath]x_2[/imath]...

But [imath]\vec{\Psi}_0[/imath] is not a function of those two elements at all...

 

So:

[math]

\vec{\Psi}=\vec{\Psi}_1 (\vec{x_1}, \vec{x_2}, t) \vec{\Psi}_2(\vec{x_1}, \vec{x_2}, t)\vec{\Psi}_0(\vec{x_3}, \vec{x_4},..., \vec{x_n},t)

[/math]

 

Is that correct?

 

I'll continue from here later...

 

-Anssi

Link to comment
Share on other sites

I apologize in advance for the length of this post. I guess I just got carried away. I will edit out (or re-express) anything you feel serves no purpose to this thread. I would suggest the reader simply scan the post reading only that which interests them. Thanks for the indulgence.

 

Anssi, you are a very bright person and I am sorely tempted to try to teach you physics; however, there is so much to be covered (much of which are things I haven't thought about for over forty years) that actually communicating is a god awful job. I will try to communicate some of the aspects which come to bear here but I am afraid it will end up being buried in a lot of hand waving. In a way, it reminds me of “Synergetics” and I fully understand how the central issues of physics can easily be expanded into a document of little use beyond being a door stop. Your questions are touching on the issue embodied in comparing “wave mechanics” to “matrix mechanics”; an issue Dirac removed when he proved the two were equivalent. The simple fact that their being equivalent was not evident to the physics community gives meaning to the complexity we are talking about here. I will try to hit the high points but don't expect perfect clarity.

So, by that you mean that [imath]\Psi(\vec{x},t )[/imath] essentially outputs a vector of four (complex) components... And once again it's the magnitude that correlates to the probability (via [imath]\Psi^\dagger \Psi dV[/imath]). Does it mean something that it has exactly four components?
If you don't want to read this or get bogged down by it, just skip down to “Termination of hand waving!” :eek_big:

 

You need to understand a little about the consequences of the vector nature of [imath]\Psi[/imath]. If you go back to my original introduction of [imath]\vec{\Psi}(\vec{x},t)[/imath], you will see that I essentially stated that absolutely any mathematical function can be represented by that notation. A mathematical function carries a given set of numbers (called the argument of that function: i.e., [imath](\vec{x},t)[/imath]) into a second set of numbers referred to as [imath]\vec{\Psi}[/imath] (the value of the function: something which can be seen as a vector in an abstract space). As I said at the time, you can see this as a computer program if you wish ([imath](\vec{x},t)[/imath] represents the input data for the program and [imath]\vec{\Psi}[/imath] represents the output). If a process exists for getting from the first set of numbers to the second set of numbers then that process can be thought of as a member of the set of all possible mathematical functions.

 

The output of such a representation can always be transformed into a single number by summing the squares of all the components. The size of that number can always be adjusted by dividing by an appropriate number (found by setting integration over all possibilities to a fixed value: i.e., probability summed over all possibilities is defined to be 1). The point being that, if the probability of seeing a particular set of arguments can be deduced, it can be represented by a mathematical function under the notation [imath]\vec{\Psi}^\dagger\cdot \vec{\Psi}dV[/imath] via the proper definition of “[imath]\dagger[/imath]” and dV.
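The normalization step described above is easy to make concrete. Here is a minimal numpy sketch (the two-component function is an arbitrary invention, purely for illustration) showing the squared components being summed and then divided by the appropriate number so the total probability is 1:

```python
import numpy as np

# A discretized, unnormalized two-component "wave function" on a 1-D grid
x = np.linspace(-5, 5, 1001)
dV = x[1] - x[0]
psi = np.stack([np.exp(-x**2), 1j * x * np.exp(-x**2)])  # arbitrary example

# Psi^dagger . Psi: sum the squared magnitudes of all components
density = np.sum(np.abs(psi)**2, axis=0)

# Divide by the appropriate number so probability sums to 1 over all x
psi /= np.sqrt(np.sum(density) * dV)
density = np.sum(np.abs(psi)**2, axis=0)
print(np.isclose(np.sum(density) * dV, 1.0))  # -> True
```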

 

This is directly opposite to the standard physics approach. They come up with some equations (called quantum mechanics and it is really a long story as to how they got there) which determine a function they call [imath]\Psi[/imath]. Then they essentially postulate that this function is a wave function whose amplitude determines the probability the event will occur at the point referred to as the argument of that function (essentially because it yields a rational interpretation of the equations). As opposed to this, what I point out is that, if the probability is determinable, one can always come up with such a function and it has absolutely nothing to do with the existence of their hypothetical theory of quantum mechanics. Nonetheless, we need to understand that hypothetical quantum mechanics for the simple fact that it does indeed amount to a summary of a significant number of solidly known experimental results: i.e., the defined functions constitute explanations of quite a large domain of physical experiments.

 

What we really need to discuss now is the question of “notation” (how these quantum mechanical relationships are to be represented). An early presentation of “quantum mechanics” was essentially represented by Schrödinger's equation, which was referred to as “wave mechanics” because of its similarity to the differential equations which describe waves (seems reasonable :shrug: ). Essentially, this picture resolved the wave/particle duality noticed at the time: i.e., particles were energy quantized waves. I bring this issue up because I want to explain some of the consequences which can be represented by the abstract vector nature of a function [imath]\Psi[/imath].

 

If these waves were to represent the probability of finding a particle, then we certainly can't map them directly into standard waves of water as such. The problem is that positions of high probability (the peaks of the waves) would be propagating along with associated positions of low probability (the troughs of the waves); an interpretation which is simply not consistent with the experiments. It turns out that there is a simple solution out there. If there are two functions, perfectly orthogonal to one another (in an abstract space), then their contributions can offset one another. That solution is cast into wave mechanics by making [imath]\Psi[/imath] a complex number.

 

To see how that interpretation manages to solve the problem, you need to see the solution as a vector in that two dimensional complex space. Each component is a simple wave (sine or cosine or a sum of sines and cosines) and the two components are exactly 90 degrees out of phase. That is, associated with one term representable by “[imath]\sin\left(\frac{x-vt}{\lambda}\right)[/imath]” is another term representable by “[imath]\cos\left(\frac{x-vt}{\lambda}\right)[/imath]”. These, being in that complex space, can be represented by [imath]\Psi=a+ib[/imath]. If one defines “[imath]\dagger[/imath]” to mean “change the sign of the imaginary component”, then [imath]\Psi^\dagger\Psi= (a-ib)(a+ib)=a^2-iba+aib-i^2b^2=a^2+b^2[/imath] (since [imath]i^2=-1[/imath]). Now if a and b are constructed with sine and cosine functions 90 degrees out of phase, [imath]a^2+b^2[/imath] (being a sum of terms like [imath]\sin^2+\cos^2[/imath]) will be a constant. Essentially the peaks of one function line up exactly with the troughs of the other and the probability of finding the particle becomes a broad function sans these peaks and troughs. At the same time, the interference between these waves can still generate exactly the interference patterns based on those very same peaks and troughs (being interference of the actual waves with those two components, they are a function of [imath]\Psi[/imath] and not directly of [imath]\Psi^\dagger\Psi[/imath], the probability of finding the particle).
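A few lines of numpy illustrate the point (the wavelengths below are arbitrary choices of mine): a single complex wave has a perfectly flat probability density, yet two such waves still produce interference fringes.

```python
import numpy as np

x = np.linspace(0, 10, 2001)
k = 2 * np.pi  # wavelength 1 (arbitrary)

# Complex wave: cosine plus i times sine, the components 90 degrees out of phase
psi = np.cos(k * x) + 1j * np.sin(k * x)

# Psi^dagger Psi = cos^2 + sin^2 is flat: no propagating probability peaks
print(np.allclose(np.abs(psi)**2, 1.0))  # -> True

# Yet two such waves still interfere: the summed density shows fringes
psi2 = np.exp(1j * 1.1 * k * x)
density = np.abs(psi + psi2)**2
print(density.max() - density.min() > 1.0)  # -> True: interference pattern
```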

 

This presentation was an attempt to give you an understanding of the kind of phenomena the vector characteristic of [imath]\Psi[/imath] can produce. The output of [imath]\Psi[/imath] can be seen as either a complex number or as a vector in a two dimensional abstract space. More complex abstract spaces can produce other, more complex, relationships. I brought this up because of the importance of “matrix mechanics” developed by Werner Heisenberg and others. “Matrix mechanics” and “wave mechanics” were contemporary theories. Actually I think matrix mechanics came first but I don't think the issue is really significant in view of the fact that Dirac proved that the two were totally equivalent to one another. The only real difference is notation. Personally, I am of the opinion that the various different notations hide various significant issues but that is not the way the subject is professionally presented. I myself prefer the Schrödinger notation as I see it as inherently yielding exactly what mathematical processes are required to calculate experimental results (i.e., it maps most easily into the common classical world view).

 

In fact, notation is a pet peeve of mine. It has always struck me that professional scientists tend to use notation to hide their logic. The end result tends to be very similar to the secret magic spells used by ancient magicians to confine their knowledge to initiates in their secret organizations. Even the people knowledgeable in the field seem to think that the physics is in the notation and the procedures the notation stands for. I was always astounded by the fact that Richard Feynman (who, believe me, I think was one of our greatest physicists as he had an understanding of physics only touched upon by others) actually got his Nobel prize for inventing a notation to keep track of terms in a perturbation expansion invented by others. (“He created a new formalism which he made very useful for practical calculations by introducing a graphical interpretation called Feynman diagrams, which have become an important feature of modern physics.” A direct quote from the award ceremony in 1965.) But that is just another rant; we need to get back to notations in quantum mechanics.

 

In my opinion, Dirac's notation is the most inherently obscure of the three. In Dirac's notation one has the concept of “operators” and “states”. “States” are represented by named “bras” (a bra is written as <”name”|) and named “kets” (a ket being written |”name”>). “Operators” are also simply named and correspond to physically measurable variables. The expectation value (the answer one expects) is written <state#2| operator |state#1> (which are clearly “bra-kets”; that was very much an intentional result of his notation :D ). Essentially the “ket” amounts to a representation of Schrödinger's [imath]\Psi[/imath], the “bra” amounts to [imath]\Psi^\dagger[/imath] and the operator corresponds to the thing being measured. Fundamentally, it is a shorthand notation which can only be understood after one understands the entire collection of underlying concepts.

 

Life begins to get a little complicated here. State #1 is the state of the system before the measurement is taken and state #2 is the state of the system after the measurement is taken (this is an issue central to quantum mechanics which I have not brought up prior to now, though I have used it indirectly). It bears directly upon changes in the state due to the act of measurement; an issue I have essentially avoided discussing. There are at least three important concepts you have to have in mind in order to transform your thoughts from Schrödinger's attack to Dirac's (which is then easily transformed into “matrix mechanics”).

 

The first is the idea of “eigenstates”. Specific eigenstates are defined by specific operators. If an operator, say [imath]O_?[/imath], operating on a given state, say [imath]\phi_a[/imath], yields a number times that state (i.e., [imath]O_?\phi_a=k_a\phi_a[/imath] or [imath]O_?|a>=k_a|a>[/imath]) then clearly [imath]\int\phi_a^\dagger O_?\phi_a dV=k_a[/imath] since [imath]\int\phi_a^\dagger \phi_a dV=1[/imath]: i.e., the actual measurement does not change the state and an actual number (the associated eigenvalue [imath]k_a[/imath] of [imath]O_?[/imath]) is obtained. Dirac's notation is essentially constructed around such eigenstates.
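A minimal numeric illustration of that eigenstate property (the 2x2 Hermitian matrix below is an arbitrary stand-in for an operator, not anything from the thread): for an eigenstate, the expectation value is exactly the eigenvalue.

```python
import numpy as np

# An arbitrary Hermitian "operator" and its eigenstates
O = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(O)

# For a normalized eigenstate phi_a, <phi_a|O|phi_a> is exactly k_a
phi = eigvecs[:, 0]
expectation = phi.conj() @ O @ phi
print(np.isclose(expectation, eigvals[0]))  # -> True
```

Note that `eigh` returns normalized eigenvectors, which is the [imath]\int\phi_a^\dagger \phi_a dV=1[/imath] condition in discrete form.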

 

This leads us to the second underlying idea. That is the issue often referred to as “completeness” (at least that is what they called it when I was a graduate student; it seems a lot of terminology has changed since those ancient days). That is the idea that the complete set of “eigenfunctions” form a basis which can be used in, or as, a sum to specify any desired function. The “Fourier transform” is a good example of such a basis.

 

I googled “Fourier transform” to find a good reference for you but I never found anything I really thought did a good job. I listened to about a half hour of the following 52 minute Stanford lecture.

 

YouTube - Lecture 1 | The Fourier Transforms and its Applications http://www.youtube.com/watch?v=gZNm7L96pfY

 

It just reminded me of why I slept through so many lectures when I was an undergraduate. :eek_big: The lectures were, for the most part, god awful; one could get a better understanding by just reading the book. You ought to take a look at that lecture just to get a rough idea of the hypnotic nature of a lot of college science courses.

 

With regard to getting an inkling of Fourier transforms, it might be worthwhile to look at "The Fourier Transform", part I and part II

 

YouTube - The Fourier Transform- Part I http://www.youtube.com/watch?v=ObklYbQaX24

 

YouTube - The Fourier Transform- Part II http://www.youtube.com/watch?v=QO3kgwYzpZg.

 

I also ran across the following film, which sort of gives the student reaction to the above presentation.

 

YouTube - The Fourier Transform Film http://www.youtube.com/watch?v=XkJpbfGp0hE

 

At any rate, regarding my comment above concerning completeness, sines and cosines constitute the complete set of “eigenfunctions” of the [imath]\frac{\partial}{\partial x}[/imath] operator (the essential differential nature of our momentum and energy operators). The Fourier transform constitutes the transformation of any continuous f(x) into a sum of sine and cosine functions of specific wave length (or, as is expressed in the videos, f(t) into sine and cosine functions of [imath]\omega[/imath]).
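As a quick illustration of that completeness (the two frequencies below are arbitrary choices of mine), numpy's FFT recovers exactly the sine components a function was built from:

```python
import numpy as np

# A function built from two known sine waves (frequencies 5 and 12)
n = 1024
x = np.linspace(0, 1, n, endpoint=False)
f = 2.0 * np.sin(2 * np.pi * 5 * x) + 0.5 * np.sin(2 * np.pi * 12 * x)

# The FFT expresses f as a sum over the sine/cosine eigenfunction basis
coeffs = np.fft.rfft(f) / n
amplitudes = 2 * np.abs(coeffs)

# The two frequencies actually present dominate the expansion
peaks = np.argsort(amplitudes)[-2:]
print(sorted(int(p) for p in peaks))  # -> [5, 12]
```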

 

 

 

These kinds of transformations (from functions to sums over eigenfunctions) transform between two different representations of the underlying data (position, x, and momentum, wavelength, or time, t, and frequency, [imath]\omega[/imath]). In physics, the two kinds of variables being referred to here are generally referred to as “canonical variables” (the underlying data can be seen as totally represented as a function of one or the other: the value of [imath]\Phi[/imath] for every possible argument, or the probability of a specific eigenvalue for every possible eigenvalue). You might take a look at the wikipedia reference to the term.

 

This is the source (or perhaps the end result) of the idea that the correct quantum mechanical state of a system can be expressed as a sum over the appropriate eigenfunctions. The issue being that the eigenstates of interest depend upon the operator of interest: i.e., exactly the same state can be expressed by a different set of eigenstates, the set of interest being defined by the operator representing measurement being taken. The complexity arising out of this issue is a whole subject unto itself. The Bell problems of entanglement arise from sending a solution in terms of one collection of eigenstates off into the universe and then performing a measurement with regard to a different set of eigenstates. But back to my original thread of thought.

 

I have totally forgotten the third issue I had in mind! Senility is a terrible thing! I won't change what I have already written because I want you to understand just what kind of a mental incompetent you are talking to. At any rate, in order to bring up more complex effects of the abstract vector aspect of [imath]\Psi[/imath] it is best to bring the issue up in terms of matrices and matrix algebra. First, a matrix is a collection of numbers such as

[math]\begin{bmatrix}n_{00} &n_{01} &\cdots &n_{0k} \\n_{10} &n_{11} &\cdots &n_{1k} \\\vdots &\vdots &\ddots &\vdots \\n_{k0} &n_{k1} &\cdots & n_{kk}\end{bmatrix}[/math]

 

Matrices need not be square; however, I will constrain this presentation of “operators” to be represented by square matrices. Multiplication between matrices is most easily understood by first defining the abstract vector [imath]\vec{\Psi}[/imath] in matrix notation (being a vector, it is represented by an n by one matrix, a single column, where n is the number of components in the vector). The function [imath]\vec{\Psi}[/imath] could be represented as

[math]\begin{bmatrix}\Psi_1\\ \Psi_2\\ \vdots\\ \Psi_k\end{bmatrix}[/math]

 

Likewise, [imath]\vec{\Psi}^\dagger[/imath] would then be represented by

[math]\begin{bmatrix}\Psi_1^\dagger\;\Psi_2^\dagger\;\cdots \;\Psi_k^\dagger \end{bmatrix}[/math]

 

The product [imath]\vec{\Psi}^\dagger\cdot\vec{\Psi}=\Psi_1^\dagger\Psi_1+\Psi_2^\dagger\Psi_2+\cdots+\Psi_k^\dagger\Psi_k[/imath] would then be exactly the result obtained with Schrödinger's notation. Using that result as a fundamental example of the desired nature of matrix multiplication, we can define that multiplication in terms of horizontal “rows” times vertical “columns”. If two matrices are multiplied together, the (i,j)th member of the product matrix is the ith row of the left matrix times the jth column of the right matrix.
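A quick numerical sketch of those conventions (Python with numpy; the particular component values are made up for illustration):

```python
import numpy as np

# Psi as a column vector (k = 3 components), Psi-dagger as its
# conjugate-transpose row vector.
psi = np.array([[1.0 + 1.0j], [0.5 - 2.0j], [3.0 + 0.0j]])
psi_dag = psi.conj().T

# Row times column reproduces the Schrodinger-style sum of
# conjugate products: 2 + 4.25 + 9 = 15.25.
inner = (psi_dag @ psi)[0, 0]
by_hand = sum(psi[i, 0].conjugate() * psi[i, 0] for i in range(3))
print(np.isclose(inner, by_hand))  # True

# General rule: the (i, j) member of a product is the i-th row of the
# left matrix times the j-th column of the right matrix.
A = np.arange(9.0).reshape(3, 3)
B = np.arange(9.0, 18.0).reshape(3, 3)
print(np.isclose((A @ B)[1, 2], A[1, :] @ B[:, 2]))  # True
```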

 

Finally, instead of expressing things in terms of Schrödinger's wave functions, [imath]\vec{\Psi}[/imath], we will instead express our states in terms of specific eigenstates “canonical” to the coordinates of interest to us. (Let me just hand wave about “coordinates of interest” here! The coordinates of interest need not be at all what one would perceive to be reasonable from a classical perspective. I will explain the nature of the range of coordinate transforms which might be of interest to us when we get into General relativity.)

 

Thus it is that the nature of Dirac's notation is to make it very easy to express the outcome of specific measurements. (I should, at this moment, bring out the issue that quantum mechanics, as seen by physicists in the field, is not at all a derived subject but rather a method of expressing specific “real” circumstances: i.e., the number of assumptions concerning what and how these measurements are defined are already far in excess of any assumptions I have made in my presentation, as they presume the correctness of the physicists' world view.) All measurements can be defined in terms of specific “operators” and all states can be represented by “kets” consisting of vectors built from probability weighted possible eigenstates. (You need to understand that all of these things are presumed to be well defined in their mental picture.) Thus it is that state #1 (no matter what that state consists of) can be represented by the “ket” [imath]|state\#1>[/imath] where that specific “ket” can be represented by the distribution of eigenstates of “some coordinates of interest” (the variable canonical to that “coordinate of interest”).

[math]\begin{bmatrix}p_1(|a_1>)\\ p_2(|a_2>)\\ \vdots\\ p_k(|a_k>)\end{bmatrix}[/math]

 

One can then deduce the nature of the matrix necessary to represent the operator by looking at the states (expressed in such matrix vectors) consisting of specific eigenvectors “of interest”: i.e., something of the form

[math]\begin{bmatrix}0\\ 0\\ \vdots\\1\\ \vdots\\0\end{bmatrix}[/math]

 

where only the jth element is nonzero and it is one: i.e., the state being represented is the eigenstate [imath]|a_j>[/imath]. A little thought should convince you that the (i,j)th element of the matrix representing the operator of interest “O” is exactly [imath]<a_i|O|a_j>[/imath].
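That claim is easy to check numerically. A hedged sketch (Python with numpy; the operator O here is an arbitrary example matrix): sandwiching O between basis kets that have a one in a single slot picks off exactly the matrix element O[i, j].

```python
import numpy as np

O = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])  # an arbitrary 3x3 operator matrix

def ket(j, n=3):
    # The eigenstate |a_j>: all zeros except a one in the j-th slot.
    v = np.zeros((n, 1))
    v[j, 0] = 1.0
    return v

i, j = 0, 1
element = (ket(i).conj().T @ O @ ket(j))[0, 0]  # <a_i| O |a_j>
print(element == O[i, j])  # True
```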

 

That brings us directly to “matrix mechanics”, where the physical laws are written in terms of matrix equations (equations where the elements of interest are matrices). Again, I have to make it clear that these physicists are concerned with representing measurements which they see as clearly defined in their world view. What is central to that perspective is the discrete nature of those possible eigenstates: subtle mechanisms are used to push the results into circumstances they see as continuous. So, all of the above was to bring us down to the subject of “spin angular momentum”.

 

In classical mechanics, angular momentum is a very important component of physical dynamics. Essentially, it is the momentum of an object with respect to a fixed point. As ordinary momentum is the variable canonical to position, angular momentum is the variable canonical to angular position (in a spherical coordinate system; remember the “coordinates of interest” issue). The interesting thing about angular momentum appears when one looks at [imath]\Phi(\theta)[/imath] (the arguments r and [imath]\phi[/imath] are just being ignored for the moment). If one adds 360 degrees to [imath]\theta[/imath], one is right back to exactly the same position originally specified as [imath]\theta[/imath]; thus the probability must return exactly to whatever it was before the 360 degrees were added. Just as the length of a string on a musical instrument sets the possible wave lengths of the vibrations of that string (the end points of the string must be nodes), the fact that the probabilities must exactly repeat after 360 degrees sets the possible distances between nodes in [imath]\Phi(\theta)[/imath]. Only a specific set of discrete states are possible: i.e., angular momentum must be quantized.
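Here is a tiny numeric check of that single-valuedness argument (Python with numpy; the angle and the trial values of m are made up): taking [imath]\Phi(\theta)=e^{im\theta}[/imath], adding 360 degrees to [imath]\theta[/imath] returns the same value only when m is an integer.

```python
import numpy as np

theta = 1.234  # any angle will do
results = []
for m in (1.0, 2.0, 1.5):
    same = np.isclose(np.exp(1j * m * theta),
                      np.exp(1j * m * (theta + 2.0 * np.pi)))
    results.append(bool(same))

# Integer m survives a full 360-degree turn; m = 1.5 picks up a sign
# flip, so only a discrete (integer) set of values is allowed.
print(results)  # [True, True, False]
```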

 

Expressing the angular momentum operator in Schrödinger's notation is quite straightforward. It is worthwhile to look at the section “Angular momentum in quantum mechanics” in the “wikipedia” entry under “Angular_momentum”. The important point brought out there is that the components of angular momentum (in classical mechanics, it is a vector quantity) do not commute. It turns out that the magnitude of the angular momentum does commute with any one component (usually taken to be the z component for convenience). It is important to understand that non-commuting operators cannot be in eigenstates simultaneously (if you want to understand that, we can go into it but, for the moment, accept the hand waving or read that angular momentum entry carefully).

 

Of essence is the fact that the quantum mechanical angular momentum operators do not commute: the commutator of any two components yields [imath]i\hbar[/imath] times the third (reversing the order of the commutation just changes the sign). The problem is that, if one examines the actual states of the hydrogen atom in a magnetic field (remember, physicists are presuming the classical world view is a valid picture here), one discovers a subtle splitting of eigenstates which the classical picture does not produce (just accept the hand waving here). In 1926 Wolfgang Pauli used Heisenberg's matrix theory of quantum mechanics to solve the spectrum of the hydrogen atom. A significant part of that solution was Pauli's introduction of the 2x2 matrices presently known as “Pauli spin matrices”.
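For the record, the Pauli matrices themselves make these algebraic relations easy to verify numerically. A sketch in Python with numpy (nothing here beyond the standard matrices):

```python
import numpy as np

# The three Pauli spin matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# They anti-commute: sigma_x sigma_y + sigma_y sigma_x = 0.
print(np.allclose(sx @ sy + sy @ sx, np.zeros((2, 2))))  # True

# The commutator of any two yields 2i times the third ...
print(np.allclose(sx @ sy - sy @ sx, 2j * sz))           # True

# ... and reversing the order just changes the sign.
print(np.allclose(sy @ sx - sx @ sy, -2j * sz))          # True
```

(For spin operators S = (ℏ/2)σ, the 2i above becomes the familiar iℏ.)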

Well, I'm not really familiar with Pauli spin matrices, or matrices in general. (I don't know what it should tell me that something is considered to be "a matrix"... The mathematical representation [imath](\vec{\alpha}= \alpha_x \hat{x}+\alpha_y\hat{y}+\alpha_z\hat{z})[/imath] looks just like a vector to me)

 

Should I teach myself some of that stuff?

Maybe, maybe not; you should be aware of its existence and have some idea as to where and why such things came up. If you can understand what I just wrote, great! If it doesn't make any sense to you, don't worry about it. The only reason I brought it up is that you just can't explain a lot of modern physics without it. You can add the idea of Pauli's spin matrices to Schrödinger's picture by thinking of [imath]\Psi[/imath] as being a vector entity in the abstract two dimensional space of Pauli's matrices. This possibility is also handled by my representation of [imath]\Psi[/imath] as a vector in an abstract space: i.e., my [imath]\vec{\Psi}[/imath]. My anti-commuting alpha and beta operators certainly provide the anti-commutation properties of his spin matrices; however, if you look at my work carefully, you will discover that I never bring up the actual commutation properties of these operators (those commutation properties never come up prior to this deduction of Dirac's equation).

 

If you think about it, you should be able to understand that allowing [imath]\Psi[/imath] to be a vector in Pauli's matrix space will fulfill the requirements taken advantage of by Pauli in much the same way that the two dimensional representation of complex numbers handled the absence of those peaks and troughs of the water wave analogy. There is an interesting thing about that vector solution to the peak and trough problem; the two components of [imath]\Psi[/imath] yield a two component Schrödinger equation (one in the real direction and one in the imaginary direction). One might presume that these two equations are independent of one another; however, that cannot be so. The requirement that these peaks and troughs cannot exist in the final solution requires that the two solutions be reflections of one another so that the resultant magnitude does not possess such peaks and troughs.

 

The same thing goes for the abstract space of Pauli's spin matrices. One can see [imath]\Psi[/imath] as a vector in Pauli's matrix space. A direct representation would yield independent equations for each component with no association between the various components. But, in order to obtain identification with Pauli's spins, we need to add in the relationships between these components required by the commutation relationships included in Pauli's picture. There is a subtle problem here. The theoretical physicist's problem is to explain the experiment: i.e., if those relationships have to be there, he just postulates that they are there. My picture is entirely deductive, so it becomes imperative that I come up with a logical reason why these commutation relationships are necessary.

 

When I established that the imaginary and real components of [imath]\Psi[/imath] had to be reflections of one another, that was rather a subtle trick of misdirection of attention. I really gave no reason why the peaks and troughs shouldn't be in the final solution, I just simply asserted they shouldn't be there. The fact that the assertion needs at least a little defense is too obvious to overlook in the spin circumstance so I will give you my argument as to why it must be so. (I put it the way I do because this is an issue upon which I occasionally convince myself I am wrong; but then again I convince myself at a later date that I am correct. One could say that the issue is somewhat open.) When I convince myself that I am right, I do it by arguing that we are talking about the [imath]\vec{\Psi}[/imath] which explains the information known to us. When we cast that into an explanation and the explanation requires a higher order vector space for [imath]\Psi[/imath], all solutions yielding exactly that explanation (for the known data in any representation) must be equivalent. This implies that the abstract vector space expressed by [imath]\vec{\Psi}[/imath] must contain the same kind of symmetries extant in the kind of space used to represent the underlying data. (That is, the Pauli matrices, along with the identity matrix I, form an orthogonal basis for the complex Hilbert space of all 2 × 2 matrices.)
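The parenthetical claim at the end, that the Pauli matrices plus the identity form an orthogonal basis for all 2x2 complex matrices, can be checked directly. A sketch (Python with numpy; the inner product used is the standard trace inner product, and the sample matrix M is an arbitrary example of mine):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [I2, sx, sy, sz]

# Orthogonality under <A, B> = Tr(A^dagger B): distinct pairs give 0,
# each matrix with itself gives 2.
for a in range(4):
    for b in range(4):
        ip = np.trace(basis[a].conj().T @ basis[b])
        assert np.isclose(ip, 2.0 if a == b else 0.0)

# Completeness: any 2x2 complex matrix expands uniquely in this basis.
M = np.array([[1.0 + 2.0j, 3.0], [0.5j, -1.0]])
coeffs = [np.trace(B.conj().T @ M) / 2.0 for B in basis]
M_rebuilt = sum(c * B for c, B in zip(coeffs, basis))
print(np.allclose(M, M_rebuilt))  # True
```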

 

What that implies is that the consequences of spherical symmetry in that abstract space lead us to the Lie algebras of which both Pauli's spin matrices and Dirac's 4x4 matrices are examples. Thus it is that I really don't go into that aspect of Dirac's equation; I take it as a more advanced issue which anyone competent in physics would find reasonably acceptable as represented in my fundamental equation.

 

-Termination of hand waving -

Very unfamiliar with that stuff, but okay.
Yeah, I think that is pretty okay. The real point here is that I obtain Maxwell's equations as an approximation to my equation in the specific circumstance where a “charged Dirac fermion” exists: i.e., when a specific solution to that fundamental equation can be interpreted as a “charged Dirac fermion”, the interaction function turns out to satisfy Maxwell's equations. In essence, that means Maxwell's equations are also true by definition; at least they are true by my definition of terms, which are indeed quite consistent with the physics community's standard definitions.
Yes, and if that presumption - of no connections - is embedded in Dirac equation, I would think it is not only reasonable, but absolutely required to take [imath]\vec{\Psi}_1[/imath], [imath]\vec{\Psi}_2[/imath] as having no feedback with [imath]\vec{\Psi}_0[/imath]. It is how they are to behave by definition, isn't it?
Essentially that is absolutely correct. In standard physics, the way in which electrons and electromagnetic fields interact has utterly nothing to do with what the rest of the universe is doing. Removing [imath]\vec{\Psi}_0[/imath] from the problem is totally consistent with the standard interpretation of “the facts”.
I take it that [imath]\vec{\Psi}_1[/imath] and [imath]\vec{\Psi}_2[/imath] are both a function of [imath]x_1[/imath] and [imath]x_2[/imath]...
Essentially, yes; however, there is a subtle issue here that one should keep in mind. Given a specific [imath]x_1[/imath] and [imath]x_2[/imath], one can define a function [imath]\vec{\Psi}(x_1, x_2, t)[/imath] whose magnitude squared is the probability of element #1 being found at (or represented by) [imath]x_1[/imath] and element #2 being found at [imath]x_2[/imath]. But one cannot guarantee that such a function, [imath]\vec{\Psi}(x_1, x_2, t)[/imath], can always be factored into a product of two functions [imath]\vec{\Psi}_1(x_1,t)\vec{\Psi}_2( x_2, t)[/imath]. That has to do with the nature of probability itself. On the other hand, [imath]\vec{\Psi}(x_1, x_2, t)[/imath] can always be factored into two functions where one function yields specifically the probability that one element will be found (or represented by) a specific argument: i.e., either [imath]\vec{\Psi}_1(x_1,t)[/imath] or [imath]\vec{\Psi}_2( x_2, t)[/imath]. The problem is that, once that step is taken, the other function must express the probability that the other element will be found (or represented by) the other argument, given that the first element was indeed found (or represented by) that first argument. Think about it for a while and I suspect you will understand the issue.

 

The point being that, when such a factorization is expressed, one of the functions needs both arguments, but not both of them. And making both functions dependent upon both arguments just doesn't really make sense. I often take advantage of the fact that the pair may be factored either way; but, when I do that, I have to go back to the original relationship: i.e., you cannot jump back and forth between the two representations willy-nilly as the associated functions are fundamentally defined differently.

 

Now it is always possible that [imath]\vec{\Psi}(x_1, x_2, t)[/imath] could factor into the form [imath]\vec{\Psi}_1(x_1,t)\vec{\Psi}_2( x_2, t)[/imath] but that would be a special case and cannot be presumed to be generally true. Physicists will often presume such a factorization is possible as it greatly simplifies the search for solutions but such a presumption could very well be false.
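A small numeric illustration of that factorization point (Python with numpy; the joint distribution P is made up for the example): any joint probability factors exactly into a marginal times a conditional (where the conditional needs both arguments), but it factors into two independent single-argument pieces only in special cases.

```python
import numpy as np

# P[x1, x2]: joint probability over two elements; entries sum to 1.
P = np.array([[0.10, 0.30],
              [0.40, 0.20]])

P1 = P.sum(axis=1)               # marginal probability of element #1
P2_given_1 = P / P1[:, None]     # conditional: needs BOTH arguments

# Marginal times conditional always recovers the joint exactly.
print(np.allclose(P1[:, None] * P2_given_1, P))  # True

# But a product of two independent marginals does not, in general.
P2 = P.sum(axis=0)
print(np.allclose(np.outer(P1, P2), P))          # False
```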

 

We can talk further on that issue if you find my assertion confusing. In essence this issue is related to the definitions of probability and, in the final analysis, ends up having no bearing upon our analysis for some simple but subtle reasons.

But [imath]\vec{\Psi}_0[/imath] is not a function of those two elements at all...
That is indeed the nature of the presumption that the state of the “rest of the universe” has no impact upon the two body problem being examined, and that presumption is almost universally made in any common physics experiment.
So:

[math]\vec{\Psi}=\vec{\Psi}_1(\vec{x}_1, \vec{x}_2, t)\vec{\Psi}_2(\vec{x}_1, \vec{x}_2, t)\vec{\Psi}_0(\vec{x}_3, \vec{x}_4,\cdots, \vec{x}_n,t)[/math]

 

Is that correct?

Not technically! That is why I leave out the arguments in my expression

[math]\vec{\Psi}=\vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0[/math]

 

and instead use the phrase, “it is entirely reasonable for me to presume no connections exist between [imath]\vec{\Psi}_0[/imath] and the other two functions”. Since I haven't made the decision as to which of the two remaining functions is dependent upon two arguments, I can let that decision slide until I reach a point where “convenience” moves me to make the decision. The decision is actually made when I set forth the statement: “the last step in this process is to left multiply by [imath]\vec{\Psi}_2^\dagger[/imath] and integrate over [imath]\vec{x}_2[/imath]”. The assertion that this results in unity for all the terms except the one involving the Dirac delta function is essentially an assertion that [imath]\vec{\Psi}_2[/imath] is the function not dependent upon the other argument. Integration over that Dirac delta function establishes (via the fundamental equation) that only when [imath]\vec{x}_1=\vec{x}_2[/imath] can the other function depend upon [imath]\vec{x}_2[/imath]: i.e., it is the fact that all interactions between elements are mediated by the Dirac delta function which removes the dependence on [imath]\vec{x}_2[/imath] from [imath]\vec{\Psi}_1[/imath]. In other words, the fact that the only interaction between elements is due to the Dirac delta function essentially says there can be no connection between the two except when [imath]\vec{x}_1=\vec{x}_2[/imath], so the technical existence of [imath]\vec{x}_2[/imath] in [imath]\vec{\Psi}_1[/imath] becomes a moot issue.

 

I am sorry if the circumstance has become complex. These kinds of issues have to be taken care of in detail; otherwise they can be used to argue that I have left things out of my deductions.

 

Have fun -- Dick


Anssi, you are a very bright person and I am sorely tempted to try to teach you physics; however, there is so much to be covered (much of which are things I haven't thought about for over forty years) that actually communicating is a god awful job. I will try to communicate some of the aspects which come to bear here but I am afraid it will end up being buried in a lot of hand waving. In a way, it reminds me of “Synergetics” and I fully understand how the central issues of physics can easily be expanded into a document of little use beyond being a door stop. Your questions are touching on the issue embodied in comparing “wave mechanics” to “matrix mechanics”; an issue Dirac removed when he proved the two were equivalent. The simple fact that their being equivalent was not evident to the physics community gives meaning to the complexity we are talking about here. I will try to hit the high points but don't expect perfect clarity.

If you don't want to read this or get bogged down by it, just skip down to “Termination of hand waving!” :eek_big:

 

Thank you, and don't worry, I'm quite interested in understanding some conventional physics...

 

You need to understand a little about the consequences of the vector nature of [imath]\Psi[/imath]. If you go back to my original introduction of [imath]\vec{\Psi}(\vec{x},t)[/imath], you will see that I essentially stated that absolutely any mathematical function can be represented by that notation. A mathematical function carries a given set of numbers (called the argument of that function: i.e., [imath](\vec{x},t)[/imath]) into a second set of numbers referred to as [imath]\vec{\Psi}[/imath] (the value of the function: something which can be seen as a vector in an abstract space). As I said at the time, you can see this as a computer program if you wish ([imath](\vec{x},t)[/imath] represents the input data for the program and [imath]\vec{\Psi}[/imath] represents the output). If a process exists for getting from the first set of numbers to the second set of numbers then that process can be thought of as a member of the set of all possible mathematical functions.

 

The output of such a representation can always be transformed into a single number by summing the squares of all the components, adding them all together.

 

I assume you meant to just write "summing the squares of all the components", and so I take it you are talking about a value that can be interpreted as the square of the magnitude of the vector.

 

The size of that number can always be adjusted by dividing by an appropriate number (found by setting integration over all possibilities to a fixed value: i.e., probability summed over all possibilities is defined to be 1). The point being that, if the probability of seeing a particular set of arguments can be deduced, it can be represented by a mathematical function under the notation [imath]\vec{\Psi}^\dagger\cdot \vec{\Psi}dV[/imath] via the proper definition of “[imath]\dagger[/imath]” and dV.
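In code, that normalization step is one line each way (a minimal sketch in Python with numpy; the component values are arbitrary):

```python
import numpy as np

psi = np.array([1.0 + 2.0j, 0.5, -1.0j])  # some output vector

# Sum the squared magnitudes of the components ...
total = np.sum(np.abs(psi) ** 2)          # 6.25 for these values

# ... then divide by the appropriate number so that probability
# summed over all possibilities is 1.
psi_normalized = psi / np.sqrt(total)
print(np.sum(np.abs(psi_normalized) ** 2))  # 1.0
```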

 

Right.

 

This is directly opposite to the standard physics approach. They come up with some equations (called quantum mechanics and it is really a long story as to how they got there) which determine a function they call [imath]\Psi[/imath]. Then they essentially postulate that this function is a wave function whose amplitude determines the probability that the event will occur at the point referred to as the argument of that function (essentially because it yields a rational interpretation of the equations). As opposed to this, what I point out is that, if the probability is determinable, one can always come up with such a function and it has absolutely nothing to do with the existence of their hypothetical theory of quantum mechanics.

 

Yup.

 

Nonetheless, we need to understand that hypothetical quantum mechanics for the simple fact that it does indeed amount to a summary of a significant number of solidly known experimental results: i.e., the defined functions constitute explanations of quite a large domain of physical experiments.

 

What we really need to discuss now is the question of “notation” (how these quantum mechanical relationships are to be represented). An early presentation of “quantum mechanics” was essentially represented by Schrödinger's equation, which was referred to as “wave mechanics” because of its similarity to the differential equations which describe waves (seems reasonable :shrug: ). Essentially, this picture resolved the wave/particle duality noticed at the time: i.e., particles were energy quantized waves. I bring this issue up because I want to explain some of the consequences which can be represented by the abstract vector nature of a function [imath]\Psi[/imath].

 

If these waves were to represent the probability of finding a particle, then we certainly can't map them directly into standard waves of water as such. The problem is that positions of high probability (the peaks of the waves) would be propagating along with associated positions of low probability (the troughs of the waves); an interpretation which is simply not consistent with the experiments. It turns out that there is a simple solution out there. If there are two functions, perfectly orthogonal to one another (in an abstract space), then the contributions can offset one another. That solution is cast into wave mechanics by making [imath]\Psi[/imath] a complex number.

 

To see how that interpretation manages to solve the problem, you need to see the solution as a vector in that two dimensional complex space. Each component is a simple wave (sine or cosine or a sum of sines and cosines) and the two components are exactly 90 degrees out of phase. That is, associated with one term representable by “[imath]\sin\left(\frac{x-vt}{\lambda}\right)[/imath]” is another term representable by “[imath]\cos\left(\frac{x-vt}{\lambda}\right)[/imath]”. These, being in that complex space, can be represented by [imath]\Psi=a+ib[/imath]. If one defines “[imath]\dagger[/imath]” to mean “change the sign of the imaginary component”, then [imath]\Psi^\dagger\Psi= (a-ib)(a+ib)=a^2-iba+aib-i^2b^2=a^2+b^2[/imath] (since i squared is -1). Now if a and b are constructed with sine and cosine functions 90 degrees out of phase, [imath]a^2+b^2[/imath] (being a sum of terms like [imath]\sin^2+\cos^2[/imath]) will be a constant. Essentially the peaks of one function line up exactly with the troughs of the other and the probability of finding the particle becomes a broad function sans these peaks and troughs.
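Here's a quick numerical confirmation of that cancellation (Python with numpy; the values of v, t and lambda are arbitrary): with the two components 90 degrees out of phase, each component is a genuine oscillating wave, yet [imath]\Psi^\dagger\Psi[/imath] comes out perfectly flat.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 500)
v, t, lam = 1.0, 0.3, 1.0
phase = (x - v * t) / lam

# a = cos(...), b = sin(...): two waves 90 degrees out of phase,
# combined as Psi = a + i b.
psi = np.cos(phase) + 1j * np.sin(phase)

# Psi-dagger Psi = a^2 + b^2 is constant: no peaks and troughs ...
prob = (psi.conjugate() * psi).real
print(np.allclose(prob, 1.0))   # True

# ... even though each component by itself still oscillates fully.
print(np.ptp(psi.real) > 1.9)   # True: the real part swings about -1..1
```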

 

Ahha. While thinking this through, I found a handy little applet that I could use to plot wave functions and add them together, at the bottom of this page:

Functions 2 - maths online Gallery

 

And indeed sin(x)^2+cos(x)^2 produces a constant.

 

At the same time, the interference between these waves can still generate exactly the interference patterns based on those very same peaks and troughs (being interference of the actual waves with those two components, they are a function of [imath]\Psi[/imath] and not directly of [imath]\Psi^\dagger\Psi[/imath], the probability of finding the particle).

 

Yup, pretty clever.

 

...

 

In my opinion, Dirac's notation is the most inherently obscure of the three. In Dirac's notation one has the concept of “operators” and “states”. “States” are represented by named “bras” (a bra is written as <”name”|) and named “kets” (a ket being written |”name”>). “Operators” are also simply named and correspond to physically measurable variables. The expectation value (the answer one expects) is written <state#2| operator |state#1> (which are clearly “bra-kets”; that was very much an intentional result of his notation :D ). Essentially the “ket” amounts to a representation of Schrödinger's [imath]\Psi[/imath], the “bra” amounts to [imath]\Psi^\dagger[/imath] and the operator corresponds to the thing being measured. Fundamentally, it is a shorthand notation which can only be understood after one understands the entire collection of underlying concepts.

 

Reading that, and looking up the Wikipedia entry on "Dirac notation" little bit, I get a very superficial idea of what it's all about... I gather the bra is seen as a vector built out of components along one column on some matrix, and the ket is seen as a vector built out of components along one row on some matrix.

 

The page seems to be implying that the matrix space is seen as a Hilbert space (which I read is an n-dimensional extension of Euclidean space), and the notation means an inner product (which I read is an n-dimensional extension of the dot product) between bra and ket.

 

I would have to read and think a lot more to understand all that better, but at least I have a tiny idea about what the notation is about.

 

Life begins to get a little complicated here. State #1 is the state of the system before the measurement is taken and state #2 is the state of the system after the measurement is taken (this is an issue central to quantum mechanics which I have not brought up prior to now, though I have used it indirectly). It bears directly upon changes in the state due to the act of measurement; an issue I have essentially avoided discussing.

 

Right, that seems to be what the following wikipedia comment is trying to say:

"In quantum mechanics the expression [imath]\left \langle \phi \mid \psi \right \rangle[/imath] (mathematically: the coefficient for the projection of [imath]\psi[/imath] onto [imath]\phi[/imath]) is typically interpreted as the probability amplitude for the state [imath]\psi[/imath] to collapse into the state [imath]\phi[/imath]"

 

There are at least three important concepts you have to have in mind in order to transform your thoughts from Schrödinger's attack to Dirac's (which is then easily transformed into “matrix mechanics”).

 

The first is the idea of “eigenstates”. Specific eigenstates are defined by specific operators. If an operator, say [imath]O_?[/imath], operating on a given state, say [imath]\phi_a[/imath], yields a number times that state (i.e., [imath]O_?\phi_a=k_a\phi_a[/imath] or [imath]O_?|a>=k_a|a>[/imath]) then clearly [imath]\int\phi_a^\dagger O_?\phi_a dV=k_a[/imath] since [imath]\int\phi_a^\dagger \phi_a dV=1[/imath]: i.e., the actual measurement does not change the state and an actual number (the associated eigenvalue of [imath]O_?[/imath]) is obtained. Dirac's notation is essentially constructed around such eigenstates.
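Numerically, that statement looks like this (a hedged sketch in Python with numpy; the Hermitian operator is an arbitrary 2x2 example): for a normalized eigenstate, the expectation value is exactly the eigenvalue.

```python
import numpy as np

O = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # an arbitrary Hermitian operator

k_vals, vecs = np.linalg.eigh(O)    # eigenvalues 1 and 3, unit eigenvectors

a = vecs[:, 0]                      # |a>, already normalized: <a|a> = 1
expectation = a.conj() @ O @ a      # the integral <a| O |a>

# The measurement does not change the state and yields the eigenvalue k_a.
print(np.isclose(expectation, k_vals[0]))  # True
```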

 

Ahha, and that Wikipedia entry is talking about matrix operations yielding linear transformations on vectors, and eigenvector of a given matrix being any vector that doesn't change its direction, but only magnitude, after that linear transformation... So I'm guessing it's the definitions of matrix operations that tie this all together... And being equivalent to the wave-like notation of Schrödinger, I'm guessing it's then the well defined eigenfunctions that preserve the relationships between some defined things, just like the wave equation preserves some relationship between the defined things on different sides of =

 

Emphasis on the word "guessing" :D

 

This leads us to the second underlying idea. That is the issue often referred to as “completeness” (at least that is what they called it when I was a graduate student; it seems a lot of terminology has changed since those ancient days). That is the idea that the complete set of “eigenfunctions” form a basis which can be used in, or as, a sum to specify any desired function. The “Fourier transform” is a good example of such a basis.

 

I googled “Fourier transform” to find a good reference for you but I never found anything I really thought did a good job. I listened to about a half hour of the following 52 minute Stanford lecture.

 

 

It just reminded me of why I slept through so many lectures when I was an undergraduate. :eek_big: The lectures were, for the most part, god awful; one could get a better understanding by just reading the book. You ought to take a look at that lecture to just get a rough idea of the hypnotic nature of a lot college science courses.

 

Well that was certainly mind-numbing :I

 

With regard to getting an inkling of Fourier transforms, it might be worthwhile to look at “The Fourier Transform”, part I and part II

 

 

Okay, I quite liked that presentation actually, it is very much to the point, and it makes the issue seem very simple. Representing the time domain signal via summing a bunch of sine waves reminds me of the Taylor series that we just talked about a little while ago. I have no trouble believing that can be done, although I would have no idea as to how to find out the correct amplitudes for each frequency so as to end up with the desired wave form.

 

But after having those amplitudes, it's trivial to understand how the same thing is expressed in frequency domain.

 


 

Heh, I think there's a fair bit of irony to the comment "Waves is how the universe actually is. We just see it differently as humans" :D

 

Anyway, the second point "it makes solving differential equation easy" is I guess what we want to focus on. My brain would have shut down on the example he gives if we had not already gone through similar moves at the Schrödinger thread :)

 

So, I watched through that second part and also the third part, and see that they explain the actual mechanism behind the Fourier transform. I didn't start focusing and learning the in and out of it, but I got a rough idea of how it works.

 

At any rate, regarding my comment above concerning completeness, sines and cosines (more precisely, the complex exponentials [imath]e^{ikx}[/imath] built from them) constitute the complete set of “eigenfunctions” of the [imath]\frac{\partial}{\partial x}[/imath] operator (the essential differential nature of our momentum and energy operators).

 

Ahha.

 

The Fourier transform constitutes the transformation of any continuous f(x) into a sum of sine and cosine functions of specific wave length (or, as is expressed in the videos, f(t) into sine and cosine functions of [imath]\omega[/imath]).
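As a numerical sketch of that decomposition (the signal and its amplitudes below are made up, and numpy's discrete FFT is standing in for the continuous transform):

```python
import numpy as np

# Build a signal from two known frequency components (assumed amplitudes).
N = 1024
t = np.arange(N) / N                       # one period on the unit interval
f = 3.0 * np.sin(2 * np.pi * 5 * t) + 1.5 * np.cos(2 * np.pi * 12 * t)

# The FFT re-expresses f(t) as a sum over sine/cosine (complex exponential)
# eigenfunctions; the coefficients live in the frequency domain.
F = np.fft.fft(f) / N
# For a real signal, the amplitude at frequency k (k > 0) is 2*|F[k]|.
amp5 = 2 * abs(F[5])
amp12 = 2 * abs(F[12])
print(amp5, amp12)   # recovers 3.0 and 1.5
```

The transform thus answers exactly the "how do I find the amplitudes" question: the frequency-domain coefficients are those amplitudes.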

 

Right.

 

These kinds of transformations (from functions to sums over eigenfunctions) transform between two different representations of the underlying data (position, x, and momentum, wave length; or time, t, and frequency, [imath]\omega[/imath]). In physics, the two kinds of variables involved here are generally referred to as “canonical variables” (the underlying data can be seen as totally represented as a function of one or the other: the value of [imath]\Phi[/imath] for every possible argument, or the probability of a specific eigenvalue for every possible eigenvalue). You might take a look at the Wikipedia reference to the term.

 

Right okay, so essentially I'd view canonical variables as things that are essentially defined in terms of each other.

 

The Bell problems of entanglement arise from sending a solution in terms of one collection of eigenstates off into the universe and then performing a measurement with regard to a different set of eigenstates.

 

Not being able to pick up what that means, but it would be interesting to understand this.

 

But back to my original thread of thought.

 

I have totally forgotten the third issue I had in mind! Senility is a terrible thing! I won't change what I have already written because I want you to understand just what kind of a mental incompetent you are talking to.

 

Heh, well that happens to me too... So I sometimes start with listing up the things that go into an explanation, so that I won't forget something... And then try to stick with just explaining those things in as straightforward a manner as possible, with hopes that the actual explanation doesn't get buried under all the associated issues that keep popping into my mind and that I might be tempted to comment on... I suspect that temptation hits you often as well, but don't worry, I'll try to keep up my end and keep my sight on the ball still (I do find the extra commentary quite interesting nevertheless).

 

At any rate, in order to bring up more complex effects of the abstract vector aspect of [imath]\Psi[/imath] it is best to bring the issue up in terms of matrices and matrix algebra. First, a matrix is a collection of numbers such as

[math]\begin{bmatrix}n_{00} &n_{01} &\cdots &n_{0k} \\n_{10} &n_{11} &\cdots &n_{1k} \\\vdots &\vdots &\ddots &\vdots \\n_{k0} &n_{k1} &\cdots & n_{kk}\end{bmatrix}[/math]

 

Matrices need not be square; however, I will constrain this presentation of “operators” to be represented by square matrices. Multiplication between matrices is most easily understood by first defining the abstract vector [imath]\vec{\Psi}[/imath] in matrix notation (being a vector, it is represented by an n by one matrix, a single column, where n is the number of components in the vector). The function [imath]\vec{\Psi}[/imath] could be represented as

[math]\begin{bmatrix}\Psi_1\\ \Psi_2\\ \vdots\\ \Psi_k\end{bmatrix}[/math]

 

Likewise, [imath]\vec{\Psi}^\dagger[/imath] would then be represented by

[math]\begin{bmatrix}\Psi_1^\dagger\;\Psi_2^\dagger\;\cdots \;\Psi_k^\dagger \end{bmatrix}[/math]

 

The product [imath]\vec{\Psi}^\dagger\cdot\vec{\Psi}=\Psi_1^\dagger\Psi_1+\Psi_2^\dagger\Psi_2\cdots+\Psi_k^\dagger\Psi_k[/imath], would then be exactly the result obtained with Schrödinger's notation.

 

Ahha, that seems to correlate to the commentary I saw in the Wikipedia entry about Dirac notation.

 

Using that result as a fundamental example of the desired nature of matrix multiplication, we can define that multiplication in terms of horizontal “rows” times vertical “columns”. If two matrices are multiplied together, the (i,j)th member of the product matrix is the ith row of the left matrix times the jth column of the right matrix.
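A minimal sketch of that row-times-column rule (the matrices here are arbitrary examples), checked against numpy's built-in product:

```python
import numpy as np

def matmul_rows_cols(A, B):
    """(i,j)th element of the product = i-th row of A times j-th column of B."""
    n, m = A.shape[0], B.shape[1]
    C = np.zeros((n, m), dtype=complex)
    for i in range(n):
        for j in range(m):
            C[i, j] = A[i, :] @ B[:, j]   # row dot column
    return C

A = np.array([[1, 2], [3, 4]], dtype=complex)
B = np.array([[5, 6], [7, 8]], dtype=complex)
same = np.allclose(matmul_rows_cols(A, B), A @ B)
print(same)   # True

# The inner product Psi-dagger times Psi is the 1x1 special case
# (a single row times a single column).
Psi = np.array([1 + 1j, 2 - 1j])
norm_sq = Psi.conj() @ Psi    # |Psi_1|^2 + |Psi_2|^2 = 7
print(norm_sq)
```

So the Schrödinger-style sum of squared magnitudes is just one row-times-column multiplication.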

 

It is starting to seem like my guess "it's the definitions of matrix operations that tie this all together" was not very far off the mark :)

 

Finally, instead of expressing things in terms of Schrödinger's wave functions, [imath]\vec{\Psi}[/imath], we will instead express our states in terms of specific eigenstates “canonical” to the coordinates of interest to us. (Let me just hand wave about “coordinates of interest” here! The coordinates of interest need not be at all what one would perceive to be reasonable from a classical perspective.

 

Okay, I just take it you are talking about some abstract spaces expressing some defined quantities in some defined form (that just so happens to be a useful form in some way)... :I

 

I will explain the nature of the range of coordinate transforms which might be of interest to us when we get into General relativity.)

 

Thus it is that the nature of Dirac's notation is to make it very easy to express the outcome of specific measurements. (I should, at this moment, bring out the issue that quantum mechanics, as seen by physicists in the field, is not at all a derived subject but rather a method of expressing specific “real” circumstances: i.e., the number of assumptions concerning what and how these measurements are defined is already far in excess of any assumptions I have made in my presentation, as they presume the correctness of the physicists' world view.) All measurements can be defined in terms of specific “operators” and all states can be represented by “kets” consisting of vectors built from probability weighted possible eigenstates. (You need to understand that all of these things are presumed to be well defined in their mental picture.) Thus it is that state #1 (no matter what that state consists of) can be represented by the “ket” [imath]|state\#1>[/imath] where that specific “ket” can be represented by the distribution of eigenstates of “some coordinates of interest” (the variable canonical to that “coordinate of interest”):

[math]\begin{bmatrix}p_1(|a_1>)\\ p_2(|a_2>)\\ \vdots\\ p_k(|a_k>)\end{bmatrix}[/math]

 

One can then deduce the nature of the matrix necessary to represent the operator by looking at the states (expressed in such matrix vectors) consisting of specific eigenvectors “of interest”: i.e., something of the form

[math]\begin{bmatrix}0\\ 0\\ \vdots\\1\\ \vdots\\0\end{bmatrix}[/math]

 

where only the jth element is non zero and it is one: i.e., the state being represented is the eigenstate [imath]|a_j>[/imath]. A little thought should convince you that the (i,j)th element of the matrix representing the operator of interest “O” is exactly [imath]<a_i|O|a_j>[/imath].
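A quick numerical check of that claim, using a made-up 3x3 operator (hypothetical numbers, chosen only for illustration):

```python
import numpy as np

# A hypothetical operator on a 3-state space, expressed in some basis.
O = np.array([[1.0, 0.2, 0.0],
              [0.2, 2.0, 0.3],
              [0.0, 0.3, 3.0]])

# Basis kets |a_j>: column vectors with a single 1 in the j-th slot.
def ket(j, n=3):
    v = np.zeros(n)
    v[j] = 1.0
    return v

# The (i,j)th matrix element is exactly <a_i| O |a_j>.
M = np.array([[ket(i) @ O @ ket(j) for j in range(3)] for i in range(3)])
print(np.allclose(M, O))   # True: the matrix IS its table of elements
```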

 

Well I looked at the matrix multiplication explanation at Wikipedia a bit more, and I understand that the product matrix contains the result of each multiplication... I wouldn't say I have convinced myself of everything related to this issue (as there are very many things that I'd have to understand better first), but I can see a little bit of how it all works out.

 

That brings us directly to “matrix mechanics” where the physical laws are written in terms of matrix equations (equations where the elements of interest are matrices). Again, I have to make it clear that these physicists are concerned with representing measurements which they see as clearly defined in their world view. What is central to that perspective is the discrete nature of those possible eigenstates: subtle mechanisms are used to push the results into circumstances they see as continuous. So, all of the above was to bring us down to the subject of “spin angular momentum”.

 

Okay...

 

Expressing the angular momentum operator in Schrödinger's notation is quite straightforward. It is worthwhile to look at the section “Angular momentum in quantum mechanics” in the Wikipedia entry under “Angular_momentum”. The important point brought out there is that the components of angular momentum (in classical mechanics, it is a vector quantity) do not commute. It turns out that the magnitude of the angular momentum and one component do commute with one another (usually taken to be the z component for convenience). It is important to understand that non-commuting operators cannot be in eigenstates simultaneously (if you want to understand that, we can go into it but, for the moment, accept the hand waving or read that angular momentum entry carefully).

 

Okay, I read it but I did not of course understand all of it... We can stick with the hand waving for time being...

 

Of essence is the fact that the relevant quantum mechanical spin operators anti-commute. Furthermore, the commutator of any two components of angular momentum yields [imath]i\hbar[/imath] times the third (reversing the order of the commutator just changes the sign). The problem is that, if one examines the actual states of the hydrogen atom in a magnetic field (remember, physicists are presuming the classical world view is a valid picture here), one discovers a subtle splitting of eigenstates which the classical picture does not produce (just accept the hand waving here). In 1926 Wolfgang Pauli used Heisenberg's matrix theory of quantum mechanics to solve the spectrum of the hydrogen atom. A significant part of that solution was Pauli's introduction of 2x2 matrices presently known as “Pauli spin matrices”.
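For what it is worth, the algebra of those 2x2 matrices is easy to verify numerically (this sketch just restates the standard Pauli matrices; nothing here is specific to the presentation):

```python
import numpy as np

# The standard Pauli spin matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

comm = lambda A, B: A @ B - B @ A   # commutator [A, B]
anti = lambda A, B: A @ B + B @ A   # anti-commutator {A, B}

# Distinct components anticommute: {sx, sy} = 0 ...
print(np.allclose(anti(sx, sy), 0))          # True
# ... the commutator of two components gives 2i times the third ...
print(np.allclose(comm(sx, sy), 2j * sz))    # True
# ... and each component squares to the identity.
print(np.allclose(sx @ sx, I2))              # True
```

(In terms of the spin operators S = (hbar/2) sigma, that commutator is the usual [S_x, S_y] = i hbar S_z.)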

Maybe, maybe not; you should be aware of its existence and have some idea as to where and why such things came up. If you can understand what I just wrote, great!

 

I didn't understand all of it. I have an incredibly vague idea of how something like the 2x2 Pauli spin matrices might come into play, when a specific notation is used to express the quantum mechanical relationships, but really would have to spend a lot more time with the issue to really understand how it all plays out in any detail.

 

If it doesn't make any sense to you don't worry about it. The only reason I brought it up is that you just can't explain a lot of modern physics without it. You can add the idea of Pauli's spin matrices to Schrödinger's picture by thinking of [imath]\Psi[/imath] as being a vector entity in the abstract two dimensional space of Pauli's matrices. This possibility is also handled by my representation of [imath]\Psi[/imath] as a vector in an abstract space: i.e., my [imath]\vec{\Psi}[/imath]. My anti-commuting alpha and beta operators certainly provide the anti-commutation properties of his spin matrices; however, if you look at my work carefully, you will discover that I never bring up the actual commutation properties of these operators (those commutation properties never come up prior to this deduction of Dirac's equation).

 

If you think about it, you should be able to understand that allowing [imath]\Psi[/imath] to be a vector in Pauli's matrix space, will fulfill the requirements taken advantage of by Pauli in much the same way that the two dimensional representation of complex numbers handled the absence of those peaks and troughs of the water wave analogy.

 

Hmm, tell me, if the [imath]\Psi[/imath] is taken as a vector in that 2 dimensional matrix space, does it mean it can only have those 4 discrete values, or is it a continuous space between 0 and 1 (along both axes)... I'm just getting a bit confused about how the continuous and discrete aspects are used... (because I don't know the background well enough to understand when people suppose which is appropriate) And it's getting really hard to even communicate my thoughts, it's easy to get really confused here... :I

 

Actually I may be so far off the mark that my question doesn't even make sense to you :P

 

There is an interesting thing about that vector solution to the peak and trough problem; the two components of [imath]\Psi[/imath] yield a two component Schrödinger equation (one in the real direction and one in the imaginary direction). One might presume that these two equations are independent of one another; however, that cannot be so. The requirement that these peaks and troughs cannot exist in the final solution requires that the two solutions be reflections of one another so that the resultant magnitude does not possess such peaks and troughs.

 

The same thing goes for the abstract space of Pauli's spin matrices. One can see [imath]\Psi[/imath] as a vector in Pauli's matrix space. A direct representation would yield independent equations for each component with no association between the various components. But, in order to obtain identification with Pauli's spins, we need to add in the relationships between these components required by the commutation relationships included in Pauli's picture. There is a subtle problem here. The theoretical physicist's problem is to explain the experiment: i.e., if those relationships have to be there, he just postulates that they are there. My picture is entirely deductive so it becomes imperative that I come up with a logical reason why these commutation relationships are necessary.

 

When I established that the imaginary and real components of [imath]\Psi[/imath] had to be reflections of one another, that was rather a subtle trick of misdirection of attention. I really gave no reason why the peaks and troughs shouldn't be in the final solution, I just simply asserted they shouldn't be there. The fact that the assertion needs at least a little defense is too obvious to overlook in the spin circumstance so I will give you my argument as to why it must be so. (I put it the way I do because this is an issue upon which I occasionally convince myself I am wrong; but then again I convince myself at a later date that I am correct. One could say that the issue is somewhat open.) When I convince myself that I am right, I do it by arguing that we are talking about the [imath]\vec{\Psi}[/imath] which explains the information known to us. When we cast that into an explanation, and the explanation requires a higher order vector space for [imath]\Psi[/imath], all solutions yielding exactly that explanation (for the known data in any representation) must be equivalent. This implies that the abstract vector space expressed by [imath]\vec{\Psi}[/imath] must contain the same kind of symmetries extant in the kind of space used to represent the underlying data. (That is, the Pauli matrices, along with the identity matrix I, form an orthogonal basis for the complex Hilbert space of all 2 × 2 matrices.)

 

What that implies is that the consequences of spherical symmetry in that abstract space lead us to the Lie algebras of which both Pauli's spin matrices and Dirac's 4x4 matrices are examples. Thus it is that I really don't go into that aspect of Dirac's equation; I take it as a more advanced issue which anyone competent in physics would find reasonably acceptable as represented in my fundamental equation.

 

Hmm, okay, yeah I would really need a better understanding of conventional physics, to be able to comment on that... :I

 

I'll split this into a new reply from here, as we are back to the algebraic steps between fundamental equation and Dirac equation...

 

-Anssi

Link to comment
Share on other sites

Yeah, I think that is pretty okay. The real point here is that I obtain Maxwell's equations as an approximation to my equation in the specific circumstance where a “charged Dirac fermion” exists: i.e., when a specific solution to that fundamental equation can be interpreted as a “charged Dirac fermion”, the interaction function turns out to satisfy Maxwell's equations. In essence, that means Maxwell's equations are also true by definition; at least they are true by my definition of terms, which are indeed quite consistent with the physics community's standard definitions.

 

Yup. I'm not very familiar with Maxwell's equations either though.

 

Essentially, yes; however, there is a subtle issue here that one should keep in mind. Given a specific [imath]x_1[/imath] and [imath]x_2[/imath], one can define a function [imath]\vec{\Psi}(x_1, x_2, t)[/imath] whose magnitude squared is the probability of element #1 being found at (or represented by) [imath]x_1[/imath] and element #2 being found at [imath]x_2[/imath]. But one cannot guarantee that such a function, [imath]\vec{\Psi}(x_1, x_2, t)[/imath], can always be factored into a product of two functions [imath]\vec{\Psi}_1(x_1,t)\vec{\Psi}_2( x_2, t)[/imath]. That has to do with the nature of probability itself. On the other hand, [imath]\vec{\Psi}(x_1, x_2, t)[/imath] can always be factored into two functions where one function yields specifically the probability that one element will be found at (or represented by) a specific argument: i.e., either [imath]\vec{\Psi}_1(x_1,t)[/imath] or [imath]\vec{\Psi}_2( x_2, t)[/imath]. The problem is that, once that step is taken, the other function must express the probability that the other element will be found at (or represented by) the other argument “given that the first element was indeed found at (or represented by) that first argument”. Think about it for a while and I suspect you will understand the issue.

 

Yes, and I remember it from when the same issue arose at Schrödinger's equation. Actually, remembering that issue about "probability" was what sparked my question, as I suspected I might be making the wrong assumption when supposing that [imath]\vec{x_1}[/imath] and [imath]\vec{x_2}[/imath] are to be present in both [imath]\vec{\Psi}_1[/imath] and [imath]\vec{\Psi}_2[/imath]... Somehow it just didn't quite seem analogous to what was going on at Schrödinger's derivation...

 

But yeah, now I think it makes perfect sense to me. I understand perfectly we could choose either function to contain both elements...

 

The point being that, when such a factorization is expressed, one of the functions needs both arguments, but not both of them. And making both functions dependent upon both arguments just doesn't really make sense. I often take advantage of the fact that the pair may be factored either way; but, when I do that, I have to go back to the original relationship: i.e., you cannot jump back and forth between the two representations willy nilly as the associated functions are fundamentally defined differently.

 

Yup.

 

Now it is always possible that [imath]\vec{\Psi}(x_1, x_2, t)[/imath] could factor into the form [imath]\vec{\Psi}_1(x_1,t)\vec{\Psi}_2( x_2, t)[/imath] but that would be a special case and cannot be presumed to be generally true. Physicists will often presume such a factorization is possible as it greatly simplifies the search for solutions but such a presumption could very well be false.

 

We can talk further on that issue if you find my assertion confusing. In essence this issue is related to the definitions of probability and, in the final analysis, ends up having no bearing upon our analysis for some simple but subtle reasons.

 

Compared to the earlier commentary about conventional physics, this issue feels clear as a day to me :)

 

Not technically! That is why I leave out the arguments in my expression

[math]\vec{\Psi}=\vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0[/math]

 

and instead use the phrase, “it is entirely reasonable for me to presume no connections exist between [imath]\vec{\Psi}_0[/imath] and the other two functions”. Since I haven't made the decision as to which of the two remaining functions is dependent upon two arguments, I can let that decision slide until I reach a point where “convenience” moves me to make the decision. The decision is actually made when I set forth the statement: “the last step in this process is to left multiply by [imath]\vec{\Psi}_2^\dagger[/imath] and integrate over [imath]\vec{x}_2[/imath]”. The assertion that this results in unity for all the terms except the one involving the Dirac delta function is essentially an assertion that [imath]\vec{\Psi}_2[/imath] is the function not dependent upon the other argument. Integration over that Dirac delta function sets the fact (via the fundamental equation) that only when [imath]\vec{x}_1=\vec{x}_2[/imath] can the other function depend upon [imath]\vec{x}_2[/imath]: i.e., it is the fact that all interactions between elements are mediated by the Dirac delta function which removes the dependence on [imath]\vec{x}_2[/imath] from [imath]\vec{\Psi}_1[/imath]. In other words, the fact that the only interaction between elements is due to the Dirac delta function essentially says there can be no connection between the two except when [imath]\vec{x}_1=\vec{x}_2[/imath], so the technical existence of [imath]\vec{x}_2[/imath] in [imath]\vec{\Psi}_1[/imath] becomes a moot issue.

 

I am sorry if the circumstance has become complex. These kinds of issues have to be taken care of in detail; otherwise they can be used as arguments that I have left things out of my deductions.

 

Definitely. After all this, my head feels quite cloudy, so I'll look at the next step at the OP a little bit later...

 

-Anssi

Link to comment
Share on other sites

Well Anssi, you never cease to amaze me. You pick up on things more quickly than anyone else I have ever met.

Thank you, and don't worry, I'm quite interested in understanding some conventional physics...
Any questions you might have I would be willing to try and explain but I won't guarantee I have everything it takes to make it all clear and “the whole field” is a pretty big subject. I can't say that I understand it all but I do think that, with a little work I could follow almost any presentation as I understand the issues pertinent to the examination. I am convinced you have that gift already; so think of conventional physics as something you could easily understand and don't think you should bother with specific issues unless they interest you.
I assume you meant to just write "summing the squares of all the components", and so I take it you are talking about a value that can be interpreted as the square of the magnitude of the vector.
Of course you are correct; I have edited out the error. I guess I just wasn't thinking about what I had just written.
Ahha. While thinking this through, I found a handy little applet that I could use to plot wave functions and add them together, at the bottom of this page:

Functions 2 - maths online Gallery

 

And indeed sin(x)^2+cos(x)^2 produces a constant.

I looked at that url but none of the applets work on my machine (it wants missing plug-ins but the thing doesn't install, and then tells me to “manually install” which I do not know how to do). :shrug:
I would have to read and think a lot more to understand all that better, but at least I have a tiny idea about what the notation is about.
That is probably sufficient for what we are talking about. Down the road, if anything really bothers you, we can go into it.
Emphasis on the word "guessing" :D
I think you are guessing pretty good these days. Though Qfwfq would probably baulk at letting things ride so loosely. I wish he were following this as he often makes excellent comments on my shortsightedness.
Well that was certainly mind-numbing :I
What I really think is “mind-numbing” is that Brad Osgood is apparently a highly respected lecturer at Stanford, which is a highly rated technical university. The guy spends most of his time writing on the blackboard, as if his students couldn't follow algebra. You know, his whole course is there and lecture after lecture is just as awful as the first (I keep listening to various lectures just trying to find one which is well done). While I was writing this, I was listening to lecture #30 (the last of the quarter). You might check it out because the issue concerns that “coordinates of interest” that I brought up earlier. The lecture is as terrible as any of them but the information is quite valuable and, if you have time, you might listen to it.
Okay, I quite liked that presentation actually, it is very much to the point, and it makes the issue seem very simple. Representing the time domain signal via summing a bunch of sine waves reminds me of the Taylor series that we just talked about a little while ago. I have no trouble believing that can be done, although I would have no idea as to how to find out the correct amplitudes for each frequency so as to end up with the desired wave form.
Actually it is quite simple. It goes directly to the idea of eigenfunctions being “orthogonal” functions (I think that is the third issue I meant to bring up earlier). What is meant by that comment is that two functions ([imath]\phi_1(t)[/imath] and [imath]\phi_2(t)[/imath]) are orthogonal if [imath]\int_{-\infty }^{+\infty}\phi_1(t)\phi_2(t)dt =0[/imath]. A complete set of eigenfunctions forms what is essentially a basis of orthogonal functions. If they weren't orthogonal, [imath]<\phi_1|\phi_2> \neq 0[/imath] (with no operator in there), which implies a non-zero probability of transition from one “eigenstate” into another. That means they are not eigenstates! So the solution is (from a mathematical perspective) quite simple: you merely multiply the given function by a specific eigenfunction and integrate over the entire range of that function. Look at it from the following perspective

[math] f(t)=\sum_i a_i \phi_i(t)= a_1\phi_1(t)+a_2\phi_2(t)+\cdots[/math]

 

where each [imath]\phi_n(t)[/imath] is an eigenfunction with an eigenvalue of [imath]\omega_n[/imath]. Then

[math] \int_{-\infty}^{+\infty}\phi_n(t)f(t)dt= a_n[/math]

 

as all the other terms integrate to zero (taking each [imath]\phi_n[/imath] to be normalized: [imath]\int\phi_n^2dt=1[/imath]). It should be clear to you that [imath]a_n[/imath] is the amplitude of the frequency “[imath]\omega_n[/imath]”.
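A numerical sketch of that projection trick, using normalized sine eigenfunctions on one period and a made-up set of coefficients (the basis and the coefficients are assumptions chosen only for illustration):

```python
import numpy as np

# An orthonormal basis on [0, 2*pi]: phi_n(t) = sin(n*t)/sqrt(pi),
# since the integral of sin(n*t)*sin(m*t) over a full period is pi*delta_nm.
t = np.linspace(0.0, 2.0 * np.pi, 20001)
dt = t[1] - t[0]
phi = lambda n: np.sin(n * t) / np.sqrt(np.pi)

# A function built with known (made-up) coefficients a_1 = 2.0, a_3 = -0.5.
f = 2.0 * phi(1) - 0.5 * phi(3)

# Multiplying by phi_n and integrating projects out a_n; every other
# term integrates to zero by orthogonality.
a1 = np.sum(phi(1) * f) * dt
a2 = np.sum(phi(2) * f) * dt
a3 = np.sum(phi(3) * f) * dt
print(round(a1, 4), round(a2, 4), round(a3, 4))   # approximately 2.0, 0.0, -0.5
```

The same projection, with complex exponentials as the basis, is precisely how Fourier amplitudes are extracted.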

So, I watched through that second part and also the third part, and see that they explain the actual mechanism behind the Fourier transform. I didn't start focusing and learning the in and out of it, but I got a rough idea of how it works.
I hadn't looked at part III. It is essentially exactly what I just showed you here (orthogonal functions).
Right okay, so essentially I'd view canonical variables as things that are essentially defined in terms of each other.
Well, yes; but it is better to think of them as connected by some operator. One is an eigenfunction of that operator; the other is the argument that operator is defined through. There is a mathematical symmetry here which is quite important. Back to that “coordinates of interest” I mentioned earlier. For example, you can see the universe as completely defined by the position of every entity of significance or you can just as well see the universe as completely defined by the momentum of every entity. In that second perspective, positions in the coordinate system would be specified by fixed values of momentum and one would have momentum eigenstates. You can think of “canonical” as a defined relationship between variables usually mediated by specific operators.
Not being able to pick up what that means, but it would be interesting to understand this.
Sorry about that. It is a complex idea from the perspective I was viewing it and it would be much better to start with some specific examples which are much easier to understand. If you really want to take the time to examine at least the beginning of the issue, you might take a look at “Quantum Entanglement and Bell's Theorem”. It's a pretty good presentation but it may have a few typos. The first one is in his equation (1) which seems like a bad place to start such things. The equation should be

[math] \Psi = c_1\begin{bmatrix}1\\0\end{bmatrix}+c_2\begin{bmatrix}0\\1\end{bmatrix}=\begin{bmatrix}c_1\\c_2\end{bmatrix}[/math]

 

Other than that it presents the problem quite well and it uses the notation we just talked about.
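As a concrete sketch of that corrected equation (the amplitudes c1 and c2 below are made up, subject only to normalization):

```python
import numpy as np

# The two basis kets of the two-state space.
up = np.array([1, 0], dtype=complex)     # the column (1, 0)
down = np.array([0, 1], dtype=complex)   # the column (0, 1)

# Made-up amplitudes with |c1|^2 + |c2|^2 = 1.
c1, c2 = 0.6, 0.8j
Psi = c1 * up + c2 * down                # equals the column vector [c1, c2]

# Probabilities of the two measurement outcomes are the squared magnitudes.
p1 = abs(up.conj() @ Psi) ** 2           # |c1|^2 = 0.36
p2 = abs(down.conj() @ Psi) ** 2         # |c2|^2 = 0.64
print(p1 + p2)                           # sums to 1.0
```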

Okay, I just take it you are talking about some abstract spaces expressing some defined quantities in some defined form (that just so happens to be a useful form in some way)... :I
If you can follow lecture #30 by Brad Osgood, I think you will have an inkling of “coordinates of interest”. As I said, I will bring the issue up when we get into general relativity which is the next place I want to go after this Dirac thing.
I didn't understand all of it. I have an incredibly vague idea of how something like the 2x2 Pauli spin matrices might come into play, when a specific notation is used to express the quantum mechanical relationships, but really would have to spend a lot more time with the issue to really understand how it all plays out in any detail.
Following that quantum entanglement article I referred to above might give you a better understanding of the use of Pauli spin matrices. I think you have enough understanding of matrix mathematics to follow it and the exercise might be of value.
Hmm, tell me, if the [imath]\Psi[/imath] is taken as a vector in that 2 dimensional matrix space, does it mean it can only have those 4 discrete values, or is it a continuous space between 0 and 1 (along both axes)... I'm just getting a bit confused about how the continuous and discrete aspects are used... (because I don't know the background well enough to understand when people suppose which is appropriate) And it's getting really hard to even communicate my thoughts, it's easy to get really confused here... :I
The vector nature of [imath]\Psi[/imath] can be understood from the following equations. For example, let [imath]\vec{\Psi}[/imath] have two components; then we can represent [imath]\vec{\Psi}[/imath] in the following manner

[math]\vec{\Psi}=\begin{bmatrix}\Psi_1\\\Psi_2\end{bmatrix}[/math]

 

This would essentially be equivalent to expressing my fundamental equation in the following form.

[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i+\sum_{i \neq j}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\Psi_1(\vec{x}_1,\vec{x}_2,\cdots,t)=K\frac{\partial}{\partial t}\Psi_1[/math]

 

and

 

[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i+\sum_{i \neq j}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\Psi_2(\vec{x}_1,\vec{x}_2,\cdots,t)=K\frac{\partial}{\partial t}\Psi_2[/math]

 

Two independent, unconnected equations. However, these two equations are connected by the requirement that the probability of finding a specific set of fundamental elements is given by

[math]\Psi_1^\dagger\Psi_1+\Psi_2^\dagger\Psi_2[/math]

 

which would seem to be nothing more than independent relationships except for the fact that it brings in the possibility of those alpha and beta operators also connecting these two expressions: i.e., those operators need to be defined in the space of that abstract vector. I could point out that this is true of the other operators; however, as those operators commute, they can be represented by the matrix representation of unity (in the abstract space of our vector [imath]\Psi[/imath]):

[math]\begin{bmatrix}1& 0 \\ 0 & 1\end{bmatrix}[/math]

 

which provides no connection between the two components beyond [imath]\Psi[/imath] itself. On the other hand, the alpha and beta operators have to anticommute, and we can bring that characteristic directly into the calculation by relating that anticommutation directly to the vector nature of [imath]\Psi[/imath] by representing them as the Dirac spin matrices.
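As a concrete check of those anticommutation relations (a Python sketch, not part of the thread's derivation; it assumes the standard Dirac representation built from the Pauli matrices, which is one conventional choice of anticommuting operators):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Standard 4x4 Dirac representation: alpha_i off-diagonal, beta diagonal
alphas = [np.block([[Z2, s], [s, Z2]]) for s in (sx, sy, sz)]
beta = np.block([[I2, Z2], [Z2, -I2]])

def anticommutator(A, B):
    return A @ B + B @ A

# {alpha_i, alpha_j} = 2*delta_ij*I, {alpha_i, beta} = 0, beta^2 = I
for i, ai in enumerate(alphas):
    for j, aj in enumerate(alphas):
        expected = 2 * np.eye(4) if i == j else np.zeros((4, 4))
        assert np.allclose(anticommutator(ai, aj), expected)
    assert np.allclose(anticommutator(ai, beta), np.zeros((4, 4)))
assert np.allclose(beta @ beta, np.eye(4))
print("anticommutation relations verified")
```

Any other representation related to this one by a unitary change of basis satisfies the same relations, which is why the particular choice of matrices is a matter of convention.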

 

Schroedinger's notation is very convenient for continuous variables (such as x, y, z, and tau) and matrix notation is very convenient for discrete variables (such as quantized angular momentum). The two notations can be pushed into one another (Schroedinger's notation via specified boundary conditions and matrix notation by allowing the number of discrete values to go to infinity) but, depending upon the problem being analyzed, one is often more convenient than the other.

 

Actually, there is no reason for bringing up these vector possibilities for [imath]\Psi[/imath] unless there is some property in the solution which requires a connection between the components, and I guess the fundamental mathematical property being expressed here is exactly the set of relationships expressed in Lie algebras. This is one reason I think it would be good if we still had Qfwfq's interest. I think he has a better grasp of the field of mathematics than I do. He mentioned Lie algebra quite a while ago and I really didn't pick up on the issue. I do not claim to be a good mathematician; I just accept their work as well thought out.

I'll split this into a new reply from here, as we are back to the algebraic steps between fundamental equation and Dirac equation...
I'll split my reply too. This was supposed to be a short response but it seems to have blown up into an essay. Sorry about that.

 

I just read your post #6 and see no problems at all so consider this an answer to both posts.

 

Have fun -- Dick

Link to comment
Share on other sites

Okay, had a little bit of free time on my hands, so I'm trying to understand the steps to that first expansion... (But first; I noticed a tiny typo in the LaTeX: in the fundamental equation, the nabla sign is missing the subscript "i". It is also missing in that first expansion to three components)

 

So, I understand this expansion is there so we can pick out the arguments of the "rest of the universe", but I have a couple of questions about it.

 

The first set of curly brackets contains elements [imath]x_1[/imath] and [imath]x_2[/imath]; I would have thought it should contain the Dirac delta function both ways for those elements, i.e.

 

[math]

\left\{\vec{\alpha}_1 \cdot \vec{\nabla}_1+ \vec{\alpha}_2 \cdot \vec{\nabla}_2+ \beta_{12} \delta(\vec{x}_1 -\vec{x}_2) + \beta_{21} \delta(\vec{x}_2 -\vec{x}_1)\right\}\vec{\Psi}_1 \vec{\Psi}_2 \vec{\Psi}_0

[/math]

 

I'm thinking perhaps you left the second one out because it would be redundant, as I guess it was somewhat redundant in the first place to go through each pair both ways... I'm thinking about its impact, and I guess it would either amount to a factor of 0, or a factor of 1 "twice", but in the latter case the probability should be 0 as the indices would be identical... Hmmm, I think :I (I'm still quite a bit shaky on the foundations of fermions and bosons and how they would play out here, even though that issue has been touched upon a couple of times already...)

 

Well, if I'm mistaken, I'm sure you can explain to me why that last term is not in your expansion, just to be sure.

 

In the second curly brackets I see there's the rest of the elements:

 

[math]

\left\{\sum_{i=3}^\infty \vec{\alpha}_i \cdot \vec{\nabla_i}+\sum_{i \neq (2\; or\; j) \;\&\;j \neq 2}^\infty\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0

[/math]

 

So that sum of Dirac delta functions: I would have thought it would also exclude the element "1", not just "2"...? It's a bit of a tricky thing to write down clearly, but I have to ask if that was the intention, or if I'm missing something?

 

And I see the last term is the Dirac delta functions between elements [imath]x_1[/imath] / [imath]x_2[/imath] and the rest of the universe, which otherwise would be missing, so that's clear...

 

Then the moving of the time derivative of [imath]\vec{\Psi}_0[/imath]. Sooo, it's a time derivative of a product of functions; I remember how that works from Schrödinger's derivation, and I think in this case it's:

 

[math]

K\frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0

= \vec{\Psi}_0 \left \{ K \frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2\right \} + \vec{\Psi}_1\vec{\Psi}_2 \left \{ K \frac{\partial}{\partial t} \vec{\Psi}_0 \right \}

[/math]
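That product-rule step can be checked symbolically; here is a minimal sympy sketch using scalar stand-ins for the [imath]\Psi[/imath] factors (an assumption for illustration only; the actual objects are abstract vectors, but differentiation in t distributes over the product the same way):

```python
import sympy as sp

t, K = sp.symbols('t K')
# Scalar stand-ins for the wave function factors
Psi12 = sp.Function('Psi12')(t)  # stands for Psi_1 * Psi_2
Psi0 = sp.Function('Psi0')(t)    # stands for Psi_0

lhs = K * sp.diff(Psi12 * Psi0, t)
rhs = Psi0 * K * sp.diff(Psi12, t) + Psi12 * K * sp.diff(Psi0, t)

# The two sides agree identically, which is what justifies moving
# the Psi_0 derivative term over to the other side of the equation
assert sp.expand(lhs - rhs) == 0
print("product rule step checks out")
```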

 

And then I could move that latter term over to the left side.... So all in all, at this point I would have:

 

[math]

\left\{\vec{\alpha}_1 \cdot \vec{\nabla}_1+ \vec{\alpha}_2 \cdot \vec{\nabla}_2+ \beta_{12} \delta(\vec{x}_1 -\vec{x}_2) + \beta_{21} \delta(\vec{x}_2 -\vec{x}_1)\right\}\vec{\Psi}_1 \vec{\Psi}_2 \vec{\Psi}_0

[/math]

 

[math]

+ \left\{\sum_{i=3}^\infty \vec{\alpha}_i \cdot \vec{\nabla_i}+\sum_{i \neq (1,2\; or\; j) \;\&\;j \neq 1\;or\;2}^\infty\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0

[/math]

 

[math]

+\left[\sum_3^\infty\{\beta_{1i}\delta(\vec{x}_1-\vec{x}_i)+\beta_{i1}\delta(\vec{x}_i-\vec{x}_1)+\beta_{2i}\delta(\vec{x}_2-\vec{x}_i)+\beta_{i2}\delta(\vec{x}_i-\vec{x}_2)\}\right]\vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0

[/math]

 

[math]

- \vec{\Psi}_1\vec{\Psi}_2 \left \{ K \frac{\partial}{\partial t} \vec{\Psi}_0 \right \}

[/math]

 

[math]

=

[/math]

 

[math]

\vec{\Psi}_0 \left \{ K \frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2\right \}

[/math]

 

 

Which is pretty close to what you have, but I'm struggling with that last bit... I know you move that time derivative term of [imath]\vec{\Psi}_0[/imath] as part of that "rest of the universe" term, for the purpose of integrating it out, and I can look at it essentially as [imath]\vec{\Psi}_1\vec{\Psi}_2[/imath] operating on [imath]- K \frac{\partial}{\partial t} \vec{\Psi}_0[/imath], so I understand why you can put it there like you did, except for one little thing I could not convince myself of: whether there should still be that original [imath]\vec{\Psi}_0[/imath] somewhere (which the "rest of the universe" term operates on), or whether it is already included via having that time derivative term in there... At least I understand that is the intention of what you have written, since I suppose the "rest of the universe" term would not do much on just [imath]\vec{\Psi}_1[/imath] and [imath]\vec{\Psi}_2[/imath]... :I

 

I'll reply to your previous post at a better time. But;

 

Well Anssi, you never cease to amaze me. You pick up on things more quickly than anyone else I have ever met.

 

Thanks, I take that as quite a compliment, especially as I have not been able to put that much time into this lately... :I

But yeah, I also feel that understanding some logical presentation is just a matter of patience, and putting time into it... And not letting the huge amount of initial unknowns daunt you.

 

And:

 

I looked at that url but none of the applets work on my machine (it wants missing plug-ins but the thing doesn't install, and then tells me to “manually install” which I do not know how to do). :shrug:

 

It's a java applet, you can install from:

 

java.com: Java + You

 

Click on "Do I have Java?" to see the supported platforms and browsers. Should work under Linux too I think. If there is a valid version, you can install it from there for free.

 

-Anssi

Link to comment
Share on other sites

"I have brought together, in one expression the entire realms of physics represented by Newtonian mechanics, quantum mechanics, electrodynamics and relativity (both special and general)."

 

Doubtful. Gravitation is positively curved space (the sum of the interior angles of a triangle is always greater than 180 degrees and can approach 540 degrees). If you have three massed points forming a triangle there are two ways to find the center of mass: Vector all three together, or find the centers of mass of two and then that with the third mass. In Euclidean space the answers are rigorously identical, of course. There is only one center of mass of a system. On the surface of a sphere (turn on gravitation) you get (at least) two very different answers. QM centers of mass are not ambiguous. August 2009 "Scientific American," "Surprises from General Relativity: 'Swimming' in Spacetime."

 

General Relativity's physical systems are always spatially separable into independent components. Systems of three or more particles require cluster separability (macroscopic locality). When the system is separated into subsystems, the overall mathematical description must reduce to descriptions of the subsystems. This is vital in scattering problems with two or more fragments.

 

Quantum mechanics allows entangled states (superpositions of product states) that require a fundamental irresolvable connection within readily demonstrated physical systems (two-slit diffraction, the Einstein-Podolsky-Rosen paradox). Macroscopic locality is violated: Measuring the state of one slit in a double slit experiment alters the observed diffraction pattern to single slit patterns (quantum eraser experiments). Relativistic and quantum views are in conflict.

 

General Relativity models continuous spacetime, going beyond conformal symmetry (scale independence) to symmetry under all smooth coordinate transformations - general covariance (the stress-energy tensor embodying local energy and momentum) - resisting quantization. General Relativity is invariant under transformations of the diffeomorphism group. General Relativity predicts evolution of an initial system state with arbitrary certainty. Quantum mechanics' observables display discrete states. Heisenberg's Uncertainty Principle limits knowledge about conjugate variables in a system state, disallowing exact prediction of its evolution. Covariance with respect to reflection in space and time is not required by the Poincaré group of Special Relativity or the Einstein group of General Relativity. Parity anomalies must exist.

 

GR: c=c, G=G, h=0

QM: c=c, G=0, h=h

 

How does one unify that?

Link to comment
Share on other sites

Okay, had a little bit of free time on my hands, so I'm trying to understand the steps to that first expansion... (But first; I noticed a tiny typo in the LaTeX: in the fundamental equation, the nabla sign is missing the subscript "i". It is also missing in that first expansion to three components)

[math]\cdots[/math]

So, I understand this expansion is there so we can pick out the arguments of the "rest of the universe", but I have a couple of questions about it.

 

The first set of curly brackets contains elements [imath]x_1[/imath] and [imath]x_2[/imath]; I would have thought it should contain the Dirac delta function both ways for those elements, ...

In a sense they are redundant; however, technically [imath]\beta_{12}\neq \beta_{21}[/imath] so I have been a bit sloppy. Thank you for catching the issue. You are certainly earning your pay here! (What was it again?) What you are really pointing out is that no one else takes the trouble to check anything I say and you will never know how much I appreciate it. If it ever does get published, believe me, your name goes on as a contributor.
Well, if I'm mistaken, I'm sure you can explain to me why that last term is not in your expansion, just to be sure.
Clearly you were not mistaken.
So that sum of Dirac delta functions: I would have thought it would also exclude the element "1", not just "2"...? It's a bit of a tricky thing to write down clearly, but I have to ask if that was the intention, or if I'm missing something?
No, you are not missing a thing. What I put down was clearly wrong; there are three constraints on that sum: neither element “1” nor element “2” is to be included, and i cannot equal j. As I told you earlier, when I first made that post, my head was kind of losing focus. Every time I looked at it I found errors and often (after finding the point in the text) forgot what the error was. As I said, I think I am going senile.
Which is pretty close to what you have, but I'm struggling with that last bit...
What you get is correct. I have edited the constraints on that sum you refer to. I didn't quite put it the same way you did, but the meanings are the same: i.e., your result is correct. The last bit is easy to explain. You have two terms (oh oh, I have just spotted another error in my expression which I will fix: I omitted the [imath]\vec{\Psi}_0[/imath] from the first term; indeed, there should still be that original [imath]\vec{\Psi}_0[/imath] somewhere). But, back to those two terms; you have

[math]

\left\{\sum_{i=3}^\infty \vec{\alpha}_i \cdot \vec{\nabla_i}+\sum_{i \neq (1,2\; or\; j) \;\&\;j \neq 1\;or\;2}^\infty\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0

[/math]

 

and

 

[math]

- \vec{\Psi}_1\vec{\Psi}_2 \left \{ K \frac{\partial}{\partial t} \vec{\Psi}_0 \right \}

[/math]

 

which can be summed together as

[math]

\left\{\sum_{i=3}^\infty \vec{\alpha}_i \cdot \vec{\nabla_i}+\sum_{i \neq (1,2\; or\; j) \;\&\;j \neq 1\;or\;2}^\infty\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0

- \vec{\Psi}_1\vec{\Psi}_2 \left \{ K \frac{\partial}{\partial t} \vec{\Psi}_0 \right \}

[/math]

 

which is identical to

[math]

\left[\left\{\sum_{i=3}^\infty \vec{\alpha}_i \cdot \vec{\nabla_i}+\sum_{i \neq (1,2\; or\; j) \;\&\;j \neq 1\;or\;2}^\infty\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\Psi}_0 -K\frac{\partial}{\partial t}\vec{\Psi}_0\right]\vec{\Psi}_1\vec{\Psi}_2.

[/math]

 

The portion in square brackets is exactly the fundamental equation as it would be if the two elements we are looking at didn't exist. It is the equation the rest of the universe must satisfy and thus must be identically zero.

It's a java applet, you can install from:

 

java.com: Java + You

 

Click on "Do I have Java?" to see the supported platforms and browsers. Should work under Linux too I think. If there is a valid version, you can install it from there for free.

I tried but it doesn't work. I think my machine has java but when I try the verify button, it just runs and runs. When I try to reinstall, I end up in the same place I did before. It says I have to manually install but I don't know how. When I try the various downloads they provide I always end up in some kind of bind like “you are trying to open a binary file”. At one point in the help section, I tried to follow a specific procedure which was supposed to work and got to a point where I was supposed to “press” a specific option and it just wasn't there. Reading around, I get the impression my difficulty may be due to the fact that I am using a 64 bit system. So I am just going to let it go. There are only a few video formats which don't seem to work. Most of them seem to work just fine.

 

Finally, UncleAl, I am not ignoring you. You simply do not understand what I am doing.

How does one unify that?
You want to work the thing out from the standard physics perspective which just won't work. Check out my thread, "So far from being right, it isn't even wrong!".

 

Have fun everybody -- Dick

Link to comment
Share on other sites

In a sense they are redundant; however, technically [imath]\beta_{12}\neq \beta_{21}[/imath] so I have been a bit sloppy. Thank you for catching the issue. You are certainly earning your pay here! (What was it again?) What you are really pointing out is that no one else takes the trouble to check anything I say and you will never know how much I appreciate it. If it ever does get published, believe me, your name goes on as a contributor.

 

Well I would be very much honored, and yes I do think this stuff should definitely get out there one way or another. I see people talking about their hopes and dreams of finding solutions to exactly the problems that are solved here, so I do not understand the reluctance to look at it... :shrug:

 

Anyway, I'm very glad to hear that I was on the right track after all. Actually, with quite a bit of head scratching, I had figured out how the term with the time derivative of [imath]\vec{\Psi}_0[/imath] would go in there the way you had laid it down, except I just still had that "extra" [imath]\vec{\Psi}_0[/imath] in there to haunt me :D Well, it wasn't an extra one after all, great!

 

I have written this out as four specific terms for the simple reason that the two expressions in square brackets must vanish exactly from the assumption that the rest of the universe has utterly no impact upon the solution we are looking for (it is, when set to zero, exactly the fundamental constraint on the rest of the universe together with a lack of influence on the two elements of interest).

 

Indeed, so now we have:

 

[math]

\{\vec{\alpha}_1\cdot\vec{\nabla}_1+\vec{\alpha}_2\cdot\vec{\nabla}_2 +\beta_{12}\delta(\vec{x_1}-\vec{x_2})+\beta_{21}\delta(\vec{x_2}-\vec{x_1})\} \vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0 =\vec{\Psi}_0\left\{K\frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2\right\}

[/math]

 

We may then left multiply by [imath]\vec{\Psi}_0^\dagger \cdot[/imath] and integrate over the entire rest of the universe where, because the state of the rest of the universe has absolutely no impact upon the problem we are concerned with, we obtain [imath]\vec{\Psi}_0^\dagger\cdot\vec{\Psi}_0 = 1[/imath] which entirely removes [imath]\vec{\Psi}_0[/imath] from the equation.

 

Yup, so I'm at:

 

[math]

\{\vec{\alpha}_1\cdot\vec{\nabla}_1+\vec{\alpha}_2\cdot\vec{\nabla}_2 +\beta_{12}\delta(\vec{x_1}-\vec{x_2})+\beta_{21}\delta(\vec{x_2}-\vec{x_1})\} \vec{\Psi}_1\vec{\Psi}_2 =K\frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2

[/math]

 

I'll continue from here soon...

 

Oh yeah, I thought I'd still comment to UncleAl: if you are interested in the topic, perhaps you will appreciate my little attempt to clarify what it's about, at;

 

http://hypography.com/forums/philosophy-of-science/18861-an-analytical-metaphysical-take-special-relativity-5.html#post271521

 

That should be something that you find fairly easy to read, and perhaps then be able to look at the OP for what it actually is.

 

One thing I find strange though: it seems that your doubt arises simply from the fact that "relativistic and quantum views are in conflict"... To me that reads as "they are not presently unified; conclusion -> they can never be unified".

 

I take it that in your mind "unification" must mean something different than what it means in my mind... What it means here is that it is actually worked out how those "apparently conflicting views" are both consequential to the same exact premise. In this case, consequential to symmetries of a self-coherent world model. I.e. both are directly consequential to the one and the same expression of those symmetries.

 

After all, if two views are valid prediction-wise (approximately), and if they both are explanations of the same reality, and they still conflict with each other, then that's exactly the circumstance that calls for an explanation in the form of unification, is it not... :shrug:

 

That all just means that the conflict is merely "apparent", arising from certain extraneous assumption(s) not existing in our explicit knowledge about reality (you can put all the ontological interpretations of relativity and quantum mechanics straight into that category). I'm sure you can immediately think of examples of such "extraneous assumptions that turned out to be wrong" from the history of science...

 

-Anssi

Link to comment
Share on other sites

If we now multiply the entire equation by the factor [imath]-ic\hbar[/imath]

 

[math]

-ic\hbar \{\vec{\alpha}_1\cdot\vec{\nabla}_1+\vec{\alpha}_2\cdot\vec{\nabla}_2 +\beta_{12}\delta(\vec{x_1}-\vec{x_2})+\beta_{21}\delta(\vec{x_2}-\vec{x_1})\} \vec{\Psi}_1\vec{\Psi}_2 = -ic\hbar K\frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2

[/math]

 

and use the definition for the momentum operator as [imath]-i\hbar\vec{\nabla}[/imath]

 

[math] \vec{p}=-i\hbar\vec{\nabla} [/math]
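As a quick illustration of this operator definition (a hedged sketch, not from the thread): applying [imath]-i\hbar\frac{\partial}{\partial x}[/imath] to a plane wave [imath]e^{ikx}[/imath] returns the eigenvalue [imath]\hbar k[/imath] times the wave, which is exactly why this operator is identified with momentum:

```python
import sympy as sp

x, k = sp.symbols('x k', real=True)
hbar = sp.symbols('hbar', positive=True)

psi = sp.exp(sp.I * k * x)              # a plane wave
p_psi = -sp.I * hbar * sp.diff(psi, x)  # apply p = -i*hbar*d/dx

# The plane wave is an eigenfunction with eigenvalue hbar*k
assert sp.simplify(p_psi - hbar * k * psi) == 0
print("plane wave is a momentum eigenstate with eigenvalue hbar*k")
```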

 

Looking again at the end of the Schrödinger thread's OP, our definition of momentum was excluding the tau component (Edit: Oh, I noticed you actually commented on this, so I'm on the right track, good), so I guess the way [imath]-ic\hbar[/imath] should be injected into the equation, so as to write it in terms of [imath]\vec{p}[/imath], is something like;

 

[math]

-ic\hbar \left\{ \vec{\alpha}_1\cdot\vec{\nabla}_1 \right\} =

-ic\hbar \left\{ \left\{ \alpha_{1x}\hat{x} + \alpha_{1y}\hat{y} + \alpha_{1z}\hat{z} + \alpha_{1\tau}\hat{\tau} \right\} \cdot \left\{ \frac{\partial}{\partial x_1} \hat{x} + \frac{\partial}{\partial y_1} \hat{y} + \frac{\partial}{\partial z_1} \hat{z} + \frac{\partial}{\partial \tau_1} \hat{\tau} \right\} \right\}

[/math]

 

[math]

= -ic\hbar \left\{ \alpha_{1x} \hat{x} \frac{\partial}{\partial x_1} \hat{x} + \alpha_{1y}\hat{y} \frac{\partial}{\partial y_1} \hat{y} + \alpha_{1z}\hat{z} \frac{\partial}{\partial z_1} \hat{z} + \alpha_{1\tau}\hat{\tau} \frac{\partial}{\partial \tau_1} \hat{\tau} \right\}

[/math]

 

[math]

= -ic\hbar \left\{ \vec{\alpha}_{1xyz} \cdot \vec{\nabla}_{1xyz} + \alpha_{1\tau}\hat{\tau} \frac{\partial}{\partial \tau_1} \hat{\tau} \right\}

[/math]

 

Hmm, so can that [imath]-ic\hbar[/imath] now be injected into the dot product term, just wherever, like:

 

[math]

= \left\{ \vec{\alpha}_{1xyz} \cdot (-ic\hbar)\vec{\nabla}_{1xyz} + (-ic\hbar)\alpha_{1\tau}\hat{\tau} \frac{\partial}{\partial \tau_1} \hat{\tau} \right\}

= \left\{ c\vec{\alpha}_{1xyz} \cdot \vec{p}_1 + (-ic\hbar)\alpha_{1\tau}\hat{\tau} \frac{\partial}{\partial \tau_1} \hat{\tau} \right\}

[/math]

 

Not really sure of myself at this point :D

 

I would like to know if I'm on right track before I go much further... If I am on the right track, then overall I'm at:

 

[math]

\left [\left\{

c\vec{\alpha}_{1xyz} \cdot \vec{p}_1 + (-ic\hbar)\alpha_{1\tau}\hat{\tau} \frac{\partial}{\partial \tau_1} \hat{\tau}

\right\}

+

\left\{

c\vec{\alpha}_{2xyz} \cdot \vec{p}_2 + (-ic\hbar)\alpha_{2\tau}\hat{\tau} \frac{\partial}{\partial \tau_2} \hat{\tau}

\right\}

+

(-ic\hbar) \beta_{12}\delta(\vec{x_1}-\vec{x_2})

+

(-ic\hbar) \beta_{21}\delta(\vec{x_2}-\vec{x_1})

\right ]\vec{\Psi}_1\vec{\Psi}_2

=

(-ic\hbar) K\frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2

[/math]

 

-Anssi

Link to comment
Share on other sites

I would like to know if I'm on right track before I go much further... If I am on the right track, then overall I'm at:
You are absolutely correct except for one subtle issue. Since we are using Dirac's representation of [imath]\vec{p} = p_x \hat{x}+p_y \hat{y}+ p_z \hat{z}[/imath], that is, his [imath]\vec{p}[/imath] has no tau component, it follows that [imath] \alpha_\tau \hat{\tau} \cdot p_\tau \hat{\tau}=\alpha_\tau p_\tau \hat{\tau}\cdot \hat{\tau}=\alpha_\tau p_\tau = 0[/imath] because [imath]p_\tau[/imath] is zero. Stepping back through the [imath]\vec{\nabla}[/imath] representation is essentially unproductive and can be confusing (both Dirac and I use the same definition of momentum and the same symbol [imath]\vec{\nabla}[/imath] for the differential representation, but we mean somewhat different things). When I use it, I include the tau component; when he uses it, he does not. Your use of the notation [imath]\vec{\nabla}_{xyz}[/imath] makes it quite clear that you understand this difference, but it is nonetheless much less apt to create confusion if you do the algebra directly in terms of Dirac's [imath]\vec{p}[/imath]. If you do that, you avoid the double meaning of [imath]\vec{\nabla}[/imath].

 

The other complaint I have with your presentation is your omission of the “dot” when you go from the vector dot products to the dot products of the components: i.e., when you write [imath]\alpha_\tau \hat{\tau}\frac{\partial}{\partial \tau}\hat{\tau}[/imath] you are technically incorrect. It should be written [imath]\alpha_\tau \hat{\tau}\cdot \frac{\partial}{\partial \tau}\hat{\tau}[/imath]. But you should actually never see such things, as [imath]\hat{x}\cdot \hat{x}=\hat{y}\cdot \hat{y}=\hat{z}\cdot \hat{z}=\hat{\tau}\cdot \hat{\tau}=1[/imath]. In addition, all “cross terms” vanish because the dot product between orthogonal components is zero. It is clear that these are aspects of the notation which are confusing you.
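Those identities (direct terms give 1, cross terms vanish) are just the statement that the four unit vectors form an orthonormal basis; a small numerical sketch (assuming the standard Euclidean dot product on the four-dimensional space):

```python
import numpy as np

# Rows are the unit vectors x_hat, y_hat, z_hat, tau_hat of the
# four dimensional (x, y, z, tau) space
basis = np.eye(4)

for i in range(4):
    for j in range(4):
        # direct terms give 1, every cross term vanishes
        assert basis[i] @ basis[j] == (1.0 if i == j else 0.0)
print("direct terms are 1, cross terms are 0")
```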

 

But, other than that notation problem, your conclusion is absolutely correct except for the [imath]\hat{\tau}\hat{\tau}[/imath] thing. Your final line should read

[math]

\left [\left\{

c\vec{\alpha}_{1xyz} \cdot \vec{p}_1 + (-ic\hbar)\alpha_{1\tau} \frac{\partial}{\partial \tau_1}

\right\}

+

\left\{

c\vec{\alpha}_{2xyz} \cdot \vec{p}_2 + (-ic\hbar)\alpha_{2\tau} \frac{\partial}{\partial \tau_2}

\right\}

+

(-ic\hbar) \beta_{12}\delta(\vec{x_1}-\vec{x_2})

+

(-ic\hbar) \beta_{21}\delta(\vec{x_2}-\vec{x_1})

\right ]\vec{\Psi}_1\vec{\Psi}_2[/math]

 

[math]

=

(-ic\hbar) K\frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2

[/math]

 

Other than that vector notation problem, your algebra is fine.

 

Carry on -- Dick

Link to comment
Share on other sites

...and [imath]c=-\frac{1}{K\sqrt{2}}[/imath].

 

So with this, the right side can be written as you have it in the OP:

 

[math]

(-ic\hbar) K\frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2 = \frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2

[/math]

 

Finally, setting the momentum in the tau direction as defining rest mass

 

I take that's then...

 

[math]

m = -i \frac{\hbar}{c} \frac{\partial}{\partial \tau}

[/math]

 

...like it is defined in Schrödinger's thread.

 

Then:

 

[math]

-ic\hbar\frac{\partial}{\partial \tau_1} = c^2\left \{-i \frac{\hbar}{c} \frac{\partial}{\partial \tau_1}\right \} = m_1c^2

[/math]

 

and defining [imath]2\beta=\beta_{12}+\beta_{21}[/imath],

 

So with this:

 

[math]

(-ic\hbar) \beta_{12}\delta(\vec{x_1}-\vec{x_2})

+

(-ic\hbar) \beta_{21}\delta(\vec{x_2}-\vec{x_1})

=

(-ic\hbar)2\beta\delta(\vec{x_1}-\vec{x_2})

=-2i\hbar c\beta\delta(\vec{x_1}-\vec{x_2})

[/math]

 

because clearly [imath]\delta(\vec{x_1}-\vec{x_2}) = \delta(\vec{x_2}-\vec{x_1})[/imath]
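That symmetry of the Dirac delta can be verified by integrating both forms against the same test function; a sympy sketch (the Gaussian test function and the shift of 2 are arbitrary choices for illustration):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(-x**2)  # an arbitrary test function

# Integrate delta(x - 2) and delta(2 - x) against the same test
# function; both pick out f(2), so the two deltas are equivalent
a = sp.integrate(sp.DiracDelta(x - 2) * f, (x, -sp.oo, sp.oo))
b = sp.integrate(sp.DiracDelta(2 - x) * f, (x, -sp.oo, sp.oo))
assert sp.simplify(a - b) == 0
```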

 

Phew, so now my result, when put together is:

 

[math]

\left \{

c\vec{\alpha}_{1} \cdot \vec{p}_1 + \alpha_{1\tau} m_1 c^2

+ c\vec{\alpha}_{2} \cdot \vec{p}_2 + \alpha_{2\tau} m_2 c^2

-2i\hbar c\beta\delta(\vec{x_1}-\vec{x_2})

\right \}

\vec{\Psi}_1\vec{\Psi}_2

=

\frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2

[/math]

 

(I have omitted the xyz subscripts from the alphas as, looking at our previous reply, they seem to be a redundant statement in there...)

 

Which is exactly your result, except I don't understand why you are using [imath]m_{01}[/imath] instead of just [imath]m_1[/imath]...?

 

(Note that [imath]\vec{\alpha}[/imath] is still a four dimensional operator but that [imath]\vec{p}[/imath] is being taken as three dimensional: i.e., [imath]\alpha_\tau \cdot \vec{p}=0[/imath]).

 

Indeed, this also tells me you conceive of the alpha as four dimensional, and why there never was a need to pull tau out. Nice.

 

-Anssi

Link to comment
Share on other sites

Which is exactly you result, except I don't understand why you are using [imath]m_{01}[/imath] instead of just [imath]m_1[/imath]...?
Actually you are right; my notation is unnecessary. I used it only because of the common meaning of [imath]mc^2[/imath] in Einsteinian relativistic notation. The simple letter “m” is traditionally used to denote what is called the relativistic mass or the relativistic “apparent” mass: i.e., from the Einsteinian expression [imath]E=mc^2[/imath]. The mass I am concerned with here is what is called the “rest mass”, usually denoted by [imath]m_0[/imath]. I really shouldn't do that, as Dirac doesn't. The m he uses in his equation is also the “rest mass” and he uses the simple m, so I should also.

 

I just googled “apparent mass” and was informed that modern physics avoids the term and uses “m” to refer to the rest mass; so you are correct, I should not have used the subscript “0” on the m. Again, I am an old man and apparently far behind the current accepted vocabulary. As they say in wikipedia, "The concept of "relativistic mass" is subject to misunderstanding. That's why we don't use it.”

 

Sorry about that; I think I will go back and remove it. Keep up the good work.

 

Have fun -- Dick

Link to comment
Share on other sites

Hmmm okay... I was suspecting it might be there to say "rest mass" but wasn't sure, and I'm not at all familiar with the current convention... They actually seem to use [imath]m_0[/imath] a couple of times in that Wikipedia page to explicitly refer to rest mass, but I guess if Dirac doesn't use that notation, there's no reason for it here... :I

 

It is interesting to note that if the second entity (the supposed massless element) is identified with the conventional concept of a free photon, its energy is given by c times its momentum. If the vector [imath]\vec{p}_2[/imath] is taken to be a four dimensional vector momentum in my [imath]x,y,z,\tau[/imath] space we can write the energy as

[math]\left\{\frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_2\right\}\vec{\Psi}_1 =c\vec{\alpha}_2\cdot\vec{p}_2\vec{\Psi}_2\vec{\Psi}_1=\sqrt{\frac{1}{2}}|cp_2|\vec{\Psi}_2\vec{\Psi}_1[/math]

 

using the fact that the value of [imath]\vec{\alpha}_2\cdot\vec{p}_2=\sqrt{\frac{1}{2}}|\vec{p}_2|[/imath] (which can be deduced from the fact that the dot product is the component of alpha in the direction of the momentum times the magnitude of that momentum).

 

I am struggling with this next step quite a bit. I'm just not sure what to look at... I am wondering about its being [imath]\frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_2[/imath] in there instead of [imath]i\hbar\frac{\partial}{\partial t}\vec{\Psi}_2[/imath], which was the definition for energy in Schrödinger's thread.

 

And I am wondering the meaning of having the order of [imath]\vec{\Psi}_2\vec{\Psi}_1[/imath] reversed like this, from all the earlier steps.

 

And finally I am not grasping the issue with [imath]\vec{\alpha}_2\cdot\vec{p}_2=\sqrt{\frac{1}{2}}|\vec{p}_2|[/imath]

 

So I am quite stuck and could use a little help :help:

 

-Anssi

Link to comment
Share on other sites

Hmmm okay... I was suspecting it might be there to say "rest mass" but wasn't sure, and I'm not at all familiar with the current convention... They actually seem to use [imath]m_0[/imath] a couple of times in that Wikipedia page to explicitly refer to rest mass, but I guess if Dirac doesn't use that notation, there's no reason for it here... :I
I guess the “current convention” is not written in stone yet. :shrug:
I am struggling with this next step quite a bit. I'm just not sure what to look at... I am wondering about its being [imath]\frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_2[/imath] in there instead of [imath]i\hbar\frac{\partial}{\partial t}\vec{\Psi}_2[/imath], which was the definition for energy in Schrödinger's thread.
It is no more than the definition of “K” which yields “c” as the velocity of light. The same issue came up in the deduction of Schrödinger's equation (go back and look at it once).
...the equation we need to solve can be written in an extremely concise form:

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}\vec{\Phi} = K\frac{\partial}{\partial t}\vec{\Phi}, [/math]

 

which implies the following operational identity:

[math]\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x}) = K\frac{\partial}{\partial t}. [/math]

 

That is, as long as these operators are operating on the appropriate [imath]\vec{\Phi}[/imath], they must yield identical results. If we now multiply the original equation by the respective sides of this identity, recognizing that products of the alpha and beta operators yield either one half (for all the direct terms) or zero (for all the cross terms), and define the resultant of [imath]g(\vec{x})g(\vec{x})[/imath] to be [imath]\frac{1}{2}G(\vec{x})[/imath] (note that all alpha and beta operators have vanished), we can write the differential equation to be solved as

[math] \nabla^2\vec{\Phi}(\vec{x},t) + G(\vec{x})\vec{\Phi}(\vec{x},t)= 2K^2\frac{\partial^2}{\partial t^2}\vec{\Phi}(\vec{x},t).[/math]

The “2” on the right arose from squaring the alpha and beta operators (which yielded 1/2). Later, in order to obtain exactly Schrödinger's equation, I have to define [imath]c=\frac{1}{K\sqrt{2}}[/imath]. It doesn't really make any difference as, in my analysis of relativity, I show that the actual velocity, v, is immaterial; we have to define a clock before we can define velocities.
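The squaring step above rests entirely on the algebra of the alpha and beta operators: direct terms square to one half, cross terms vanish. The operators in this thread are abstract anticommuting elements, but as a concrete stand-in (an assumption for illustration only, not the definition used here) the four standard 4x4 Dirac matrices scaled by [imath]1/\sqrt{2}[/imath] obey exactly the stated algebra, which a quick numeric check confirms:

```python
import numpy as np

# Illustrative stand-in for the abstract alpha/beta operators: the standard
# 4x4 Dirac matrices, rescaled by 1/sqrt(2) so each squares to 1/2 and all
# cross anticommutators vanish -- the property used to collapse the squared
# operator identity into the second-order differential equation.
I2 = np.eye(2)
Z = np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

off = lambda s: np.block([[Z, s], [s, Z]])          # off-diagonal alpha form
alphas = [off(sx), off(sy), off(sz),                # alpha_x, alpha_y, alpha_z
          np.block([[I2, Z], [Z, -I2]])]            # beta (the tau component)
alphas = [a / np.sqrt(2) for a in alphas]           # rescale: alpha_i^2 = 1/2

for i, ai in enumerate(alphas):
    for j, aj in enumerate(alphas):
        anti = ai @ aj + aj @ ai                    # anticommutator {a_i, a_j}
        # Direct terms: 2 * (1/2) * I = I.  Cross terms: zero.
        expected = np.eye(4) if i == j else np.zeros((4, 4))
        assert np.allclose(anti, expected)
```

Any four mutually anticommuting matrices rescaled the same way would serve equally well; only the algebra matters for the squaring step.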

 

The whole thing is really a simple consequence of my definition of the alpha and beta operators. I should perhaps define them to be in alignment with the Pauli spin matrices and Dirac's work as it would be less confusing, but I defined them for a different purpose and had been using my original definition long before (probably something like ten years before) I managed to pull down my first approximate solution. It never felt right to go back and change things after the fact.

And I am wondering about the meaning of having the order of [imath]\vec{\Psi}_2\vec{\Psi}_1[/imath] reversed like this, compared to all the earlier steps.
There is no meaning to it at all since they simply commute with one another. (I can prove that if you need to have it proved.) The only reason I changed the order here was so I could explicitly show the time derivative of [imath]\vec{\Psi}_2[/imath] by itself. The given derivative of [imath]\vec{\Psi}_1\vec{\Psi}_2[/imath] is,

[math]\frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2 = \left\{\frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_1\right\}\vec{\Psi}_2+\left\{\frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_2\right\}\vec{\Psi}_1,[/math]

 

and I am interested in getting rid of the second term: i.e., via [imath]c\vec{\alpha}_2\cdot\vec{p}_2\vec{\Psi}_2-\frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_2 =0[/imath] where [imath]\vec{p}_2[/imath] is taken to be the four dimensional version of the momentum in my Euclidean space. This is just a quick and dirty way of getting rid of some terms.
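The split in the derivative above is nothing more than the ordinary product rule, which holds because the two factors commute. Treating the commuting factors as scalar functions of t (a simplification for illustration, since the actual [imath]\vec{\Psi}[/imath]s are vector-valued), a quick symbolic check confirms the step:

```python
import sympy as sp

# Symbolic check of the product-rule split: the i*hbar/sqrt(2) factor is
# carried along unchanged, and the derivative of the product splits into
# the two bracketed terms shown in the post.
t, hbar = sp.symbols('t hbar', positive=True)
Psi1 = sp.Function('Psi1')(t)
Psi2 = sp.Function('Psi2')(t)

lhs = sp.I * hbar / sp.sqrt(2) * sp.diff(Psi1 * Psi2, t)
rhs = (sp.I * hbar / sp.sqrt(2) * sp.diff(Psi1, t)) * Psi2 \
    + (sp.I * hbar / sp.sqrt(2) * sp.diff(Psi2, t)) * Psi1
assert sp.simplify(lhs - rhs) == 0
```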

And finally I am not grasping the issue with [imath]\vec{\alpha}_2\cdot\vec{p}_2=\sqrt{\frac{1}{2}}|\vec{p}_2|[/imath]
I suspect you are not understanding my comment, “(which can be deduced from the fact that the dot product is the component of alpha in the direction of the momentum times the magnitude of that momentum)”. Remember, we are working in a Euclidean coordinate system and that fact yields a number of vector relationships which allow us to simplify vector product calculations (take a quick look at the Wikipedia entry on dot products; there is a lot of valuable information there). If we take the momentum in the tau direction to be “mc” then

[math]\vec{p}_2 = p_x \hat{x}+p_y \hat{y}+p_z \hat{z}+p_\tau \hat{\tau}= p_x \hat{x}+p_y \hat{y}+p_z \hat{z}+mc \hat{\tau}[/math]

 

and [imath]\vec{\alpha}_2\cdot\vec{p}_2[/imath] (the left hand side of the expression of interest) is a simple dot product. In a Euclidean coordinate system, the dot product of two vectors is invariant under rotation ([imath]\vec{a}\cdot\vec{b}=|\vec{a}|\;|\vec{b}|\cos\theta[/imath], where theta is the angle between the two vectors). It can thus be seen as the magnitude of one vector multiplied by the component of the other in the direction of the first.
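Both facts invoked here, invariance of the dot product under orthogonal transformations and its reduction to "component times magnitude" once one vector lies along an axis, can be checked numerically. The vectors below are random stand-ins for [imath]\vec{\alpha}[/imath] and [imath]\vec{p}_2[/imath] (illustrative only), and the Householder map used to align p with the first axis is a reflection rather than a rotation, but dot products are preserved either way:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.standard_normal(4)   # random stand-in for the momentum vector p_2
a = rng.standard_normal(4)   # random stand-in for any other vector

# A random 4x4 orthogonal matrix (rigid motion of the Euclidean space),
# built from the QR decomposition of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
assert np.isclose(a @ p, (Q @ a) @ (Q @ p))   # dot product is invariant

# Householder map taking p onto the first axis (same orthogonality, so the
# dot product is again preserved).
e0 = np.zeros(4)
e0[0] = np.linalg.norm(p)
v = p - e0
H = np.eye(4) - 2 * np.outer(v, v) / (v @ v)
assert np.allclose(H @ p, e0)

# With p along one axis, the dot product is just the component of the
# other vector along that axis times |p|.
assert np.isclose(a @ p, (H @ a)[0] * np.linalg.norm(p))
```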

 

Fundamentally, from a physics perspective, the expression we are working with is a simple statement that the energy (as defined through the momentum) is the same as the energy (as defined through the time derivative): i.e., that fact has nothing to do with the coordinates used to represent the phenomena under examination, it is a characteristic of the phenomenon itself. Thus we can always rotate our coordinate system to a state where [imath]\vec{p}_2[/imath] is parallel to one of the four orthogonal axes (such a rotation will not change the result of the dot product). In that case, the dot product is the component of [imath]\vec{\alpha}[/imath] along that axis (that single component) times the magnitude of [imath]\vec{p}_2[/imath] or [imath]\sqrt{\frac{1}{2}}|\vec{p}_2|[/imath] (we have already shown earlier that [imath]\alpha_i^2=1/2[/imath] so [imath]|\alpha_i|[/imath] must be [imath]\sqrt{\frac{1}{2}}[/imath]).

 

Another way to see the same result is to view [imath]\vec{\alpha}_2\cdot\vec{p}_2= i\hbar\sqrt{\frac{1}{2}}\frac{\partial}{\partial t}[/imath] as an operator identity when operating on [imath]\vec{\Psi}_2[/imath]. If we operate twice we end up with [imath]\frac{1}{2}|\vec{p}_2|^2\vec{\Psi}_2=-\hbar^2\frac{1}{2}\frac{\partial^2}{\partial t^2}\vec{\Psi}_2[/imath], which is exactly the square of our first result, again implying that [imath]\vec{\alpha}_2\cdot\vec{p}_2=\sqrt{\frac{1}{2}}|\vec{p}_2|[/imath] when operating on [imath]\vec{\Psi}_2[/imath].
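The "operate twice" argument can also be checked numerically. Using scaled 4x4 Dirac matrices as a stand-in for the abstract alpha operators (an assumption for illustration only), [imath](\vec{\alpha}\cdot\vec{p})^2[/imath] collapses to [imath]\frac{1}{2}|\vec{p}|^2[/imath] times the identity for any choice of momentum components:

```python
import numpy as np

# Stand-in alpha operators: standard Dirac matrices scaled by 1/sqrt(2),
# so alpha_i^2 = 1/2 and distinct alphas anticommute to zero.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z = np.eye(2), np.zeros((2, 2))
off = lambda s: np.block([[Z, s], [s, Z]])
alphas = [off(sx), off(sy), off(sz), np.block([[I2, Z], [Z, -I2]])]
alphas = [a / np.sqrt(2) for a in alphas]

rng = np.random.default_rng(1)
p = rng.standard_normal(4)                     # p_x, p_y, p_z, p_tau
A = sum(ai * pi for ai, pi in zip(alphas, p))  # the operator alpha . p

# Cross terms cancel pairwise; direct terms give (1/2) p_i^2, so
# (alpha . p)^2 = (1/2)|p|^2 * identity.
assert np.allclose(A @ A, 0.5 * (p @ p) * np.eye(4))
```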

So I am quite stuck and could use a little help :help:
I hope that helps a little.

 

Have fun -- Dick

