
Deriving Schrödinger's Equation From My Fundamental Equation


Doctordick


Hi Bombadil, I notice a perspective central to your posts which is quite common. Though you know quite a bit of mathematics, you don't seem to believe in the validity of the subject. What I mean by that is that you seem to require an “understandable example” of a mathematical relationship before you will accept that relationship as meaningful. Any deduced mathematical expression is as valid as the underlying postulates behind the deduction (barring an error in the deduction itself). If one understands mathematics, reduction to an example can be counterproductive, as it often constrains one's view of the possibilities. A mathematical expression means what it says, and the fact that one often cannot mentally comprehend its range and application on an intuitive level is quite beside the point.

 

It might be worthwhile for you to read the thread where I introduce the dichotomy I call “logical” and “squirrel” thought.

 

I’ve managed to read the first page or so of that topic so far. What you are saying makes some sense and seems like it should be obvious, although I can’t say that I have thought about it in quite this way before, so I will have to consider some of the implications that this brings up. I have for the most part thought of math as a language, that is, a method of communicating an idea more than anything else.

 

Although, considering some of the points that you bring up, I am starting to think that at best this is an insufficient definition, in that saying that math is a way of communicating says nothing about what separates it from any other method; and while using math as a means of communicating is possible, that is perhaps not its most defining property, in that there are other ways of communicating into which nowhere near as much effort has been put to remain consistent, if any has. Although, I have heard that at one time there was a language in use in which a contradiction could not be written down. This leads me to conclude that saying math is the study of self-consistent systems is at least a better definition for math, although I will have to consider whether it is the best definition for it.

 

How each element behaves is an issue answered by the epistemological construct (the explanation) which is being represented by [math]\vec{\Psi}[/math] and my fundamental equation was specifically constructed to make absolutely no constraints upon that question.

 

The one thing that I don’t understand here is how the constraints that all flaw-free explanations obey are satisfied without consideration of how the elements behave. Isn’t it the behavior of the elements that the explanation tries to explain? I suspect that these may be two different issues, but I can’t see what it is that separates them.

 

The final constraint is no more than the fact that the total energy of the element being represented by Schroedinger's equation must be approximately given by [math]E=mc^2[/math] where m is explicitly the rest mass of that element: i.e., a constant which may be subtracted from the total energy against which the kinetic energy is a trivial quantity. All this means is that we are dealing with a non-relativistic problem. As I said, Schroedinger's equation is known to be invalid if the element being represented is relativistic.
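To see explicitly what that approximation amounts to, note the standard special-relativistic expansion (ordinary textbook physics, not anything specific to my presentation):

[math]E=\sqrt{p^2c^2+m^2c^4}=mc^2\sqrt{1+\frac{p^2}{m^2c^2}}\approx mc^2+\frac{p^2}{2m}.[/math]

The non-relativistic regime is exactly the one where that second term is trivial against the first.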

 

Then, if we consider that it is only a single element that is of interest here, isn’t saying that [math]E=mc^2[/math] for the element of interest equivalent to saying that we are in the rest frame of the element, or at least a way of defining the rest frame of the element? Or, to put this a different way, asking whether the Schrödinger equation is a good approximation is equivalent to asking whether [math]E=mc^2[/math] is a good approximation to the energy of the element of interest. This seems to me to imply that one way of defining the frame of reference is by knowing how the energy differs from [math]mc^2[/math].

 

This seems to suggest that if there exists a transformation from one frame to any other frame then there exists a frame in which the Schrödinger equation is not only an approximation but is the correct solution to the fundamental equation for the element of interest. Of course it is only for one element and as far as I know says nothing about how elements interact with each other.

 

If so, this seems to suggest that we take a look at how the differentials with respect to t and [math]\tau[/math] transform, due to their use in defining the mass and energy operators in the fundamental equation.

 

This all seems to me to be related to what you were trying to explain in the thread Some subtle aspects of relativity.

 

I am sure you are aware of the “spin” characteristics of fundamental particles; that characteristic arises from “spinors”, which are represented by anti-commuting operators. The idea first arose in quantum mechanics when it was seen that the classical concept of angular momentum turned out to be an anti-commutative operation when represented in quantum mechanics. That is why they speak of spin angular momentum, but the kind of relationships embodied in spin turn out to permeate modern particle physics: “strangeness”, “charm”, “color” (names physicists have given to similar behavior not associated with angular momentum) all display behavior analogous to spin. I am sure you have heard of up and down quarks; that kind of behavior is quite easily represented mathematically by anti-commuting phenomena. All this is to say that modern physics without anti-commutation just doesn't seem to be a possibility: i.e., avoiding such things makes no sense at all.
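For a concrete standard illustration of anti-commutation (the familiar Pauli matrices used to represent spin one half):

[math]\sigma_x=\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad \sigma_y=\begin{pmatrix}0&-i\\i&0\end{pmatrix},\qquad \sigma_x\sigma_y+\sigma_y\sigma_x=0,\qquad \sigma_x^2=\sigma_y^2=1.[/math]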

 

While I have heard of all of these things, I can’t say that I know much about what they represent. I haven’t looked at them much lately, but previously when I looked at them I found either that what I knew about the topic was insufficient or that the explanation itself was insufficient for the topic to be of much interest. That was some time ago, though, so perhaps I should take a closer look at them now.

 

You say that “it seems to [you] that there should be other ways to define the operators to derive an equation that is equivalent to the fundamental equation”; if that is the case, lay it out for me.

 

I’ve been thinking about this some, and while I can’t say that I have come up with anything that I know works to define an equation equivalent to the fundamental equation, what I have found is that everything I have considered that isn’t defined by anti-commutation quickly acquires it as a consequence of whatever it is defined by. So my conclusion so far is that anti-commutation is perhaps the simplest way to define an operator that works. If I do come up with anything that does seem to work I’ll bring it up later. Although, at this point I’m not expecting to come up with much.



Just a quick post here because I did not have the proper time to concentrate on this issue this weekend :(

 

I am presuming here that you are aware of the fact that “f” is nothing more than a symbolic stand-in for the result of all those terms arising from the integrals over all the arguments except [math]x_1[/math] and t (thus yielding a final differential equation in [math]x_1[/math] and t only).

 

The point being that every one of those individual integrals is multiplied by an alpha or a beta operator. Since every individual integral must, in the final analysis, be a function of the arguments which have not been integrated over, the net result (except for that term which arises via the time derivative) is a linear weighted sum of alpha and beta operators: linear because no products of those alpha and beta operators appear, weighted by the actual results of the integrations, and a sum because the result is indeed a sum of such things. Sorry if I seem to be beating to death an issue you already understand, but I think a clear understanding of what is being said is important. It seems to me that “seems to make sense” is a little non-committal for an issue which I feel should be an obvious clear fact.
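Schematically (borrowing the [imath]a_if_i[/imath] notation I use further below, and setting aside that time-derivative term), the structure being described is no more than

[math]f=\sum_i a_if_i(\vec{x}_1,t),[/math]

where each [imath]a_i[/imath] is a specific alpha or beta operator and each [imath]f_i[/imath] is the value of the associated integral.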

 

I took another look at post #26 where you first explained this to me, and I think I understand it better and more clearly now than before. Looking at the equation in that post, indeed each integral would be multiplied by an alpha or a beta operator.

 

Just to be sure: you comment on the one term that does not contain an alpha or a beta operator (the term with the time derivative), so basically, for it to be included as part of f, it has been moved from one side of the equation to the other, and in the final differential equation it appears simply as one of the input arguments of "f"?

 

-Anssi


This leads me to conclude that saying math is the study of self-consistent systems is at least a better definition for math, although I will have to consider whether it is the best definition for it.
I feel it is indeed the best definition for mathematics, and I will defend that position with the following (somewhat mathematical) argument. First, I would point out that any system within the field of mathematics would be immediately removed from that category were it shown to be an inconsistent system; and, second, if anyone were to come up with a “non-trivial” internally self-consistent system which was not already either a component of the field of mathematics or isomorphic to a known component of the field of mathematics, such a system would become a part of the field of mathematics as fast as mathematicians could encompass it. Since a definition exists for the sole purpose of providing a way of differentiating what is and is not to be included, my definition is the very essence of a definition, as it provides membership in the field in a very exact way.
The one thing that I don’t understand here is how the constraints that all flaw-free explanations obey are satisfied without consideration of how the elements behave. Isn’t it the behavior of the elements that the explanation tries to explain? I suspect that these may be two different issues, but I can’t see what it is that separates them.
You miss what is, philosophically, perhaps the most important issue of my work. These are constraints on an acceptable explanation, not constraints on what is being explained. Essentially, what I am saying is that it is a waste of time to consider an explanation based upon ontological elements whose behavior violates the relationships implied by that equation.
This seems to suggest that if there exists a transformation from one frame to any other frame then there exists a frame in which the Schrödinger equation is not only an approximation but is the correct solution to the fundamental equation for the element of interest.
No, that is not a true statement. Schrödinger's equation is never correct; it is only approximately correct. The question is, how bad is the error. Please note that my definition of “energy” is not the standard definition given in physics (where “energy” is defined to be “the ability to do work”) as I have not defined “work”. My definition is as follows:
First, I will define ”the Energy Operator” as [imath]i\hbar\frac{\partial}{\partial t}[/imath] (and thus, the conserved quantity required by the fact of shift symmetry in the t index becomes “energy”: i.e., energy is conserved by definition). A second definition totally consistent with what has already been presented is to define the expectation value of “energy” to be given by

[math]E=i\hbar\int\vec{\Psi}^\dagger\cdot\frac{\partial}{\partial t}\vec{\Psi}dV.[/math]
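As a quick check of that definition (using a hypothetical normalized [imath]\vec{\Psi}[/imath] whose time dependence is the pure phase [imath]e^{-i\frac{E_0}{\hbar}t}[/imath], chosen only for illustration):

[math]E=i\hbar\int\vec{\Psi}^\dagger\cdot\left(-i\frac{E_0}{\hbar}\right)\vec{\Psi}dV=E_0\int\vec{\Psi}^\dagger\cdot\vec{\Psi}dV=E_0.[/math]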

Under this definition, what my derivation shows is that the error between the solution given by Schrödinger's equation and the correct solution is proportional to the difference between the correct “energy” and the value of [imath]mc^2[/imath]. The error is always there; the question is whether it is big enough for you to worry about.
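For a sense of the scale involved (standard textbook numbers, not part of this deduction): an electron bound in hydrogen has a kinetic energy on the order of 13.6 eV while [imath]mc^2\approx 511[/imath] keV, so

[math]\frac{E-mc^2}{mc^2}\sim\frac{13.6\,\text{eV}}{511\,\text{keV}}\approx 3\times 10^{-5},[/math]

which is why Schrödinger's equation serves so well in that case.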

If so, this seems to suggest that we take a look at how the differentials with respect to t and [math]\tau[/math] transform, due to their use in defining the mass and energy operators in the fundamental equation.
You miss a subtle point. The coordinate measure has not been defined. That measure is defined by your explanation and is embedded in those integrals which yield the appropriate values for “V” in the deduced Schrödinger equation (the measures used in those integrals have to be the same as the measures used in the expectations for the single element under examination).

 

The fact that that coordinate measure cannot be defined without first knowing the explanation (in detail) is an important fact. On the other hand, we can look at two explanations (where the people making the explanations both consider the other to be moving in their coordinate system). It is the logic of that circumstance which I am discussing when I talk about relativity. For the moment, I suggest we concern ourselves only with the deduction of Schrödinger's equation.

While I have heard of all of these things, I can’t say that I know much about what they represent. I haven’t looked at them much lately, but previously when I looked at them I found either that what I knew about the topic was insufficient or that the explanation itself was insufficient for the topic to be of much interest. That was some time ago, though, so perhaps I should take a closer look at them now.
It's a pretty advanced component of modern physics and I suspect you won't get much satisfaction with any information easily available to you.
If I do come up with anything that does seem to work I’ll bring it up later. Although, at this point I’m not expecting to come up with much.
I apologize. I was actually being a bit facetious. I am sure you know that I have a Ph.D. in physics. The actual area of my specialization was theoretical nuclear physics, and I spent a lot of time with the mathematical constructs used to derive the relationships embedded in fundamental particle physics. There are a lot of seemingly unrelated things there, but anti-commutation seems to be a rather inevitable binding thread. It follows from that fact that anti-commuting operators should be a pretty fundamental aspect of the mathematics required to express the embedded ideas.

 

Thus I would say that the idea that one could express the required constraints without the use of anti-commuting operators is a rather far-fetched idea. Most importantly, they can be used to obtain rather simple results when equations are squared. One of the strange things about modern physics is that the fundamental laws (those that arise from symmetry principles) seem to be linear, whereas the solutions to most problems seem to arise from expressions with squared terms. The two ideas go together like ham and eggs.

 

Have fun – Dick


Hi Anssi, I put this as a separate post to make sure you didn't miss it.

Just to be sure: you comment on the one term that does not contain an alpha or a beta operator (the term with the time derivative), so basically, for it to be included as part of f, it has been moved from one side of the equation to the other, and in the final differential equation it appears simply as one of the input arguments of "f"?
The big issue here is the fact that, when I square the entire operator on the left hand side of the equation, I get a great quantity of terms which simply drop out since for every time a pair of anti-commuting operators appear in one order they also appear in the opposite order and the sum of the two pairs is zero. All the remaining terms are multiplied by a squared anti-commuting operator in which case the result is a simple one half. Thus the anti-commuting operators simply drop out of the equation.
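A minimal two-term sketch of that squaring (with [imath]a_1[/imath] and [imath]a_2[/imath] anti-commuting operators whose squares are one half, and [imath]f_1[/imath] and [imath]f_2[/imath] ordinary functions) shows the whole mechanism:

[math]\left(a_1f_1+a_2f_2\right)^2=a_1^2f_1^2+a_2^2f_2^2+\left(a_1a_2+a_2a_1\right)f_1f_2=\frac{1}{2}\left(f_1^2+f_2^2\right).[/math]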

 

As you say, I have moved the time dependent terms I cannot argue to zero to the other side of the equation where I get rid of them by redefining the [imath]\vec{\Psi}[/imath].

 

Of course, this is only a valid argument when there are no terms which do not contain anti-commuting operators. That is why I go through the arguments at the end where I point out what actually happens if those terms are left in. The consequences end up being in “V”, which can be quite a bit more complex than a simple potential function of the coordinate x. As I say there,

The time dependence creates no real problems: V(x) merely becomes V(x,t). The terms proportional to [imath]\frac{\partial}{\partial x}[/imath] correspond to velocity-dependent terms in V and, finally, retention of the alpha and beta operators essentially forces our deductive result to be a set of equations, each with its own V(x,t). All of these results are entirely consistent with Schroedinger's equation; they simply require interactions not commonly seen on the introductory level. Inclusion of these complications would only have served to obscure the fact that what was deduced was, in fact, Schrödinger's equation.
All these additional terms come in through the squaring of “f”; I just call the net result “V”. Please read the last paragraph of my post to Bombadil. You will have to take it on faith that I am telling you the truth when I say, “these results are entirely consistent with Schroedinger's equation, they simply require interactions not commonly seen on the introductory level”. Anyone who has gotten into the more complex applications of Schroedinger's equation is well aware of the usage of such terms.

 

Have fun -- Dick


You miss what is, philosophically, perhaps the most important issue of my work. These are constraints on an acceptable explanation, not constraints on what is being explained. Essentially, what I am saying is that it is a waste of time to consider an explanation based upon ontological elements whose behavior violates the relationships implied by that equation.

 

So is it possible to come up with a behavior that would violate the fundamental equation? Or, is any behavior that violates the fundamental equation removed by considering more elements in the fundamental equation, or by considering the elements to be different types of elements?

 

No, that is not a true statement. Schrödinger's equation is never correct; it is only approximately correct. The question is, how bad is the error. Please note that my definition of “energy” is not the standard definition given in physics (where “energy” is defined to be “the ability to do work”) as I have not defined “work”. My definition is as follows:

 

If we go back to the point

[math]\left\{\frac{\partial^2}{\partial x^2} + G(x)\right\}\vec{\Phi}(x,t)=\left\{\sqrt{2}K\frac{\partial}{\partial t}- iq\right\}\left\{\sqrt{2}K\frac{\partial}{\partial t}+iq\right\}\vec{\Phi}(x,t)[/math]

where you substitute [imath]K\sqrt{2}\frac{\partial}{\partial t}\vec{\Phi} \approx -iq\vec{\Phi}[/imath] into the fundamental equation, is the possibility of there existing a frame of reference in which this is the correct value removed, simply because if it were the case then the second term would also vanish? Does this alone tell us that a reference frame does not exist where the Schrödinger equation gives the same behavior as the fundamental equation, or is there more that is necessary to show that this is the case?

 

You miss a subtle point. The coordinate measure has not been defined. That measure is defined by your explanation and is embedded in those integrals which yield the appropriate values for “V” in the deduced Schrödinger equation (the measures used in those integrals have to be the same as the measures used in the expectations for the single element under examination).

 

Then are we also defining the value of those differentials at the same time as we define the coordinate measure? That is, is there a link between the definition of the coordinate measure and the value of q in the above equation?

 

Also, is the linear sum of alpha and beta operators that makes up V(x) representing the remainder of the universe (that is, the remainder of the elements in the fundamental equation)? If so, what are the terms that make up V(x), other than the alpha and beta operators, representing? Are they also representing elements that make up the remainder of the elements in the fundamental equation?

 

Then, since all of the elements in the fundamental equation are nothing more than single points, that is, they take up no space, is the only way to define a measure by how they are related to each other, so that to define a measure we must define some particular relation among the elements as a unit distance? And must we define this before we can set up a coordinate system, so that there is no way to determine whether or not such a relation has changed other than in comparison to the remainder of the elements?


Hi Anssi, I put this as a separate post to make sure you didn't miss it.

The big issue here is the fact that, when I square the entire operator on the left hand side of the equation, I get a great quantity of terms which simply drop out since for every time a pair of anti-commuting operators appear in one order they also appear in the opposite order and the sum of the two pairs is zero. All the remaining terms are multiplied by a squared anti-commuting operator in which case the result is a simple one half. Thus the anti-commuting operators simply drop out of the equation.

 

Hmm, I don't really understand what you are saying :(

 

When you say "when I square the entire operator on the left hand side...", you must be referring to this part of the OP:

 

If we now multiply the original equation by the respective sides of this identity, recognizing that the multiplication of the alpha and beta operators yields either one half (for all the direct terms) or zero (for all the cross terms) and defining the resultant of [imath]g(\vec{x})g(\vec{x})[/imath] to be [imath]\frac{1}{2}G(\vec{x})[/imath] (note that all alpha and beta operators have vanished), we can write the differential equation to be solved as

[math] \nabla^2\vec{\Phi}(\vec{x},t) + G(\vec{x})\vec{\Phi}(\vec{x},t)= 2K^2\frac{\partial^2}{\partial t^2}\vec{\Phi}(\vec{x},t).[/math]

 

I am unable to see how the anti-commuting operators cause a great quantity of terms to drop out...

 

Actually, now I think I need to still take a small step back first, as I'm getting confused over the previous step:

 

For the simple convenience of solving this differential equation, this result clearly suggests that one redefine [imath]\vec{\Psi}_0[/imath] via the definition [imath]\vec{\Psi}_0 = e^{-iK(S_2+S_r)t}\vec{\Phi}[/imath]. If one further defines the integral within the curly braces to be [imath]g(\vec{x}_1)[/imath], [imath]\vec{x}_1[/imath] being the only variable not integrated over, the equation we need to solve can be written in an extremely concise form:

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}\vec{\Phi} = K\frac{\partial}{\partial t}\vec{\Phi}, [/math]

 

 

I'm not sure anymore how to get to that result. What am I doing wrong at the end of post #64?

http://hypography.com/forums/philosophy-of-science/15451-deriving-schr-dingers-equation-my-fundamental-7.html#post253406

 

Also, when you say "[imath]\vec{x}_1[/imath] being the only variable not integrated over", you mean the one in the sum? (since it appears twice in the equation) I.e. [imath]g(\vec{x}_1)[/imath] stands for that integral with that one variable being left out from the integration? (If so, I misinterpreted this at first)

 

(Note btw that in the OP you are missing one "imath" tag in the paragraph right above what I quoted, when you mention [imath]f_0[/imath])

 

I re-read the rest of the OP through once again, and it seems like there's not that much stuff to walk through anymore. Some of it looks a bit tricky, but I hope I can get through it soon!

 

-Anssi


Meanwhile, let us go back to your first post after I left. You were clearly disturbed by exactly what these “positions” in the hypothetical space actually represent.

 

...

 

These indices [math]\vec{x}_i[/math] are numerical labels for the undefined ontological elements standing behind your epistemological construct, which is your explanation of reality. It is the epistemological construct itself which defines what these ontological elements are. When you purport to understand the reality you know, you identify these labels (or collections of labels) with those ontological elements defined by your explanation. It is the dynamic behavior of these labels (and that behavior can indeed be static, but “static” is a very limited realm) which your explanation explains.

 

So, it sounds like I got it right here:

 

As to the identity of specific elements; of course any specific epistemological construct (i.e. "worldview") would include that information. I guess there must be some assumption about the identity of that single element before there can exist any explanation of it, i.e. before any expectations about its future could exist. Hmm... So in that sense any specific [imath]\vec{\Psi}[/imath] must include the information about the identity of the elements. On the other hand, which element is which is never communicated in its input arguments. I guess that's not necessary?

 

I should clarify the comment "any specific [imath]\vec{\Psi}[/imath] must include the information about the identity of the elements". I guess I should rather say: in order for anyone to have arrived at any specific [imath]\vec{\Psi}[/imath], they must have come up with some specific definitions for the ontological elements in their worldview.

 

Does that sound about right?

 

-Anssi


Bombadil, somehow I have to figure out how to get your mind off issues which are of utterly no interest at this point. What I am presenting is a logical deduction (or model if you prefer) following directly from the fact that one has a method (that flaw-free explanation) of obtaining one's expectations for any specific consequence conceivable. Against this you insist on talking about specific explanations. Specific explanations are of no interest here. Before you examine any specific explanations, you need to understand the character of the possible solutions to my fundamental equation.

So is it possible to come up with a behavior that would violate the fundamental equation?
Of course it is. The literature is chock full of explanations which are not “flaw-free”.
Or, is any behavior that violates the fundamental equation removed by considering more elements in the fundamental equation, or by considering the elements to be different types of elements?
... they are removed by considering other explanations! “Different types of elements” are issues of “different explanations”.
If we go back to the point

[math]\left\{\frac{\partial^2}{\partial x^2} + G(x)\right\}\vec{\Phi}(x,t)=\left\{\sqrt{2}K\frac{\partial}{\partial t}- iq\right\}\left\{\sqrt{2}K\frac{\partial}{\partial t}+iq\right\}\vec{\Phi}(x,t)[/math]

where you substitute [imath]K\sqrt{2}\frac{\partial}{\partial t}\vec{\Phi} \approx -iq\vec{\Phi}[/imath] into the fundamental equation, is the possibility of there existing a frame of reference in which this is the correct value removed, simply because if it were the case then the second term would also vanish? Does this alone tell us that a reference frame does not exist where the Schrödinger equation gives the same behavior as the fundamental equation, or is there more that is necessary to show that this is the case?

That is certainly a possibility; however, it is also certainly uninteresting. Remember, G(x) is generated through integrals over Dirac delta function interactions. These functions have zero extent; it follows that one can always look at the relationship at a finer scale. Thus the impact of G(x) can always be temporarily removed. When the solution is looked at from that scale, the momentum of the event of interest vanishes (see the definition of momentum). That means the event of interest is exactly at rest with respect to the rest of the universe. If this is true for any point on the path of that event in your explanation, that point must define the CM of the universe. That can only be true if the event is “the universe”. This is simply an uninteresting case.
Then are we also defining the value of those differentials at the same time as we define the coordinate measure? That is, is there a link between the definition of the coordinate measure and the value of q in the above equation?
Well of course there is such a link! You seem to drop the issue that [imath]\vec{\Psi}_r[/imath] is the function which yields exactly your expectations for the entire rest of the universe (lying under your “explanation”). Somehow you fail to understand that the measures, the differentials and many other characteristics of the universe under your explanation lie within the description provided by [imath]\vec{\Psi}_r[/imath]. They constitute your presentation of your expectations.
Also, is the linear sum of alpha and beta operators that makes up V(x) representing the remainder of the universe (that is, the remainder of the elements in the fundamental equation)? If so, what are the terms that make up V(x), other than the alpha and beta operators, representing? Are they also representing elements that make up the remainder of the elements in the fundamental equation?
There are no alpha and/or beta operators in V(x). The squaring of the operator representing my fundamental equation eliminates those operators. See my answer to Anssi below!
Then, since all of the elements in the fundamental equation are nothing more than single points, that is, they take up no space, is the only way to define a measure by how they are related to each other, so that to define a measure we must define some particular relation among the elements as a unit distance? And must we define this before we can set up a coordinate system, so that there is no way to determine whether or not such a relation has changed other than in comparison to the remainder of the elements?
Again, you are speaking of your expectations defined by [imath]\vec{\Psi}_r[/imath]. These are provided or assumed by your explanation: i.e., the explanation which defines your expectation [imath]\vec{\Psi}[/imath] for the universe.

 

I hope you can begin to comprehend that your questions have no bearing at all upon what I am talking about.

 

Have fun -- Dick


I am sorry Anssi. I am presuming a facility with mathematics which you do not possess. It is nothing complex; it is little more than a facility developed very quickly when one spends a little time doing algebra. Concerning the dropping out of those alpha and beta operators, we ended the last post with

[math]\vec{\alpha}_1\cdot \vec{\nabla}\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0\ + iK\left(S_2+S_r\right)\vec{\Psi}_0[/math]

 

Go back and look at my original post to this thread immediately following the above equation.

For the simple convenience of solving this differential equation, this result clearly suggests that one redefine [imath]\vec{\Psi}_0[/imath] via the definition [imath]\vec{\Psi}_0 = e^{-iK(S_2+S_r)t}\vec{\Phi}[/imath]. If one further defines the integral within the curly braces to be [imath]g(\vec{x}_1)[/imath], [imath]\vec{x}_1[/imath] being the only variable not integrated over, the equation we need to solve can be written in an extremely concise form:

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}\vec{\Phi} = K\frac{\partial}{\partial t}\vec{\Phi}, [/math]

This is an issue of differential calculus. The left hand side of the equation generates no consequences of that factor [imath]e^{-iK(S_2+S_r)t}[/imath] (as far as the left hand side is concerned the factor is no more than a constant multiplying [imath]\vec{\Phi}[/imath]). On the right hand side, because [imath]\vec{\Psi}_0 = e^{-iK(S_2+S_r)t}\vec{\Phi}[/imath] and both factors are functions of “t”, the differential with respect to t generates (by the product rule) a very specific additional term, [imath]-iK\left(S_2+S_r\right)\vec{\Phi}[/imath], which just eliminates that final term. If we now substitute [imath]\vec{\Psi}_0 = e^{-iK(S_2+S_r)t}\vec{\Phi}[/imath] and divide the entire equation by [imath]e^{-iK(S_2+S_r)t}[/imath] we end up with exactly

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}\vec{\Phi} = K\frac{\partial}{\partial t}\vec{\Phi}, [/math]

 

At this point, it is important to remember that [imath]g(\vec{x})[/imath] is actually a sum over integrals, each of which is multiplied by a different beta operator. Thus it is that the left hand side of the equation is actually a sum over a great many terms, every one of which is multiplied by a different anti-commuting operator, while the right hand side contains no such operators.

 

It is important here to remember that one of the other basic constraints placed upon our fundamental equation was that the differential with respect to t of the function [imath]\vec{\Psi}[/imath] had to vanish. This, together with the arguments used to get to [imath]\vec{\Phi}[/imath], brings us to the fact that the partial with respect to t of [imath]\vec{\Phi}[/imath] can, at worst case, be constant.

If we do indeed have the correct function [imath]\vec{\Phi}[/imath] (under the approximations we have made) then the right hand side of our equation is a constant times [imath]\vec{\Phi}[/imath] (under our definition of energy, energy is conserved). It follows, as the night the day, that the left hand side must also yield that same constant times [imath]\vec{\Phi}[/imath]. Thus it is that we conclude that the mathematical operator [imath]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}[/imath], when operating on [imath]\vec{\Phi}[/imath], is identical to the mathematical operator [imath]K\frac{\partial}{\partial t}[/imath], when operating on [imath]\vec{\Phi}[/imath].

 

This is what leads to my writing down

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}=K\frac{\partial}{\partial t}[/math]

 

So I am now talking about the operators themselves and not the result obtained when they act upon [imath]\vec{\Phi}[/imath]. When I square these two elements (the two different expressions which yield identical results) the alpha and beta operators all drop out. To see that, you have to understand exactly how the two factors are multiplied out under the rules of mathematics. Note first that every element of the left hand side has an alpha or beta operator as a factor (the right hand side has none). The actual structure of the left hand side squared is essentially analogous to

[math]\left\{a_1f_1(x)+a_2f_2(x)+a_3f_3(x)+\cdots+a_nf_n(x)\right\}\left\{a_1f_1(x)+a_2f_2(x)+a_3f_3(x)+\cdots+a_nf_n(x)\right\}[/math]

 

where all the “[imath]a_i[/imath]” represent specific alpha or beta operators and the [imath]f_i(x)[/imath] represent the result of the associated operation (either an x derivative or an appropriate integral). When one multiplies such a thing out, one gets a sum consisting of every term of the set multiplying every term of the set:

[math]a_1f_1(x)\left\{a_1f_1(x)+a_2f_2(x)+a_3f_3(x)+\cdots+a_nf_n(x)\right\}+a_2f_2(x)\left\{a_1f_1(x)+a_2f_2(x)+a_3f_3(x)+\cdots+a_nf_n(x)\right\}+[/math]

[math]a_3f_3(x)\left\{a_1f_1(x)+a_2f_2(x)+a_3f_3(x)+\cdots+a_nf_n(x)\right\}+\cdots+a_nf_n(x)\left\{a_1f_1(x)+a_2f_2(x)+a_3f_3(x)+\cdots+a_nf_n(x)\right\}[/math]

 

or, multiplying out,

[math]a_1f_1(x)a_1f_1(x)+a_1f_1(x)a_2f_2(x)+a_1f_1(x)a_3f_3(x)+\cdots+a_1f_1(x)a_nf_n(x)+\cdots+[/math]

[math]a_kf_k(x)a_1f_1(x)+a_kf_k(x)a_2f_2(x)+\cdots+a_kf_k(x)a_nf_n(x)+\cdots+a_nf_n(x)a_nf_n(x)[/math]

 

There are two kinds of terms in that sum. There are what are called “direct terms”, which are essentially simple squares where the indices of the two terms being multiplied are the same, and what are called “cross terms”, which are the terms where the indices are different. The direct terms all yield exactly the same result: the alpha or beta operator is squared and, since the indices are identical, the result is exactly one half. The functions connected to those indices are simply squared.

 

The interesting phenomena are the cross terms. Every time a term [imath]a_if_i(x)a_jf_j(x)[/imath] appears, a term [imath]a_jf_j(x)a_if_i(x)[/imath] also appears. [imath]f_i(x)f_j(x)[/imath] is identical to [imath]f_j(x)f_i(x)[/imath] as they are simple mathematical functions and they may be factored; however, the remaining factor [imath](a_ia_j+a_ja_i)=0[/imath] since these operators anti-commute. Thus it is that all the “cross terms” vanish identically.
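Putting the direct and cross terms together, the net effect of the squaring is simply

[math]\left\{\sum_{i=1}^na_if_i(x)\right\}^2=\sum_{i=1}^na_i^2f_i(x)^2=\frac{1}{2}\sum_{i=1}^nf_i(x)^2,[/math]

with every anti-commuting operator vanished from the result.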

I'm not sure anymore how to get to that result. What am I doing wrong at the end of post #64?
You are not using the product rule of differentiation properly.
Then concentrating on the differentiation:

[math]\frac{\partial}{\partial t}(e^{-iK(S_2+S_r)t}\vec{\Phi})[/math]

 

I'm not sure, but I suppose that can be written:

[math]\frac{\partial}{\partial t}e^{-iK(S_2+S_r)t}\frac{\partial}{\partial t}\vec{\Phi}[/math]

What you actually get from that differentiation is

[math]\frac{\partial}{\partial t}(e^{-iK(S_2+S_r)t}\vec{\Phi})=\left\{\frac{\partial}{\partial t}e^{-iK(S_2+S_r)t}\right\}\vec{\Phi}+ e^{-iK(S_2+S_r)t}\frac{\partial}{\partial t}\vec{\Phi}[/math]

 

Also, when you say "[imath]\vec{x}_1[/imath] being the only variable not integrated over", you mean the one in the sum? (since it appears twice in the equation) I.e. [imath]g(\vec{x}_1)[/imath] stands for that integral with that one variable being left out from the integration? (If so, I misinterpreted this at first)
Here I do not understand what you mean by “the one in the sum”. We have expressed [imath]\vec{\Psi}_1=\vec{\Psi}_0\vec{\Psi}_r[/imath]. The function [imath]\vec{\Psi}_0[/imath] is defined to be a function of but one argument. I have chosen it to be [imath]x_1[/imath], but it could have been any one of the individual arguments under discussion. All that is important is that [imath]\vec{\Psi}_r[/imath] constitutes all the others (all “remaining” arguments). [imath]\vec{\Psi}_r[/imath] is what yields your expectations for the rest of the universe; everything except that single argument we are discussing. You should maybe read my comments to Bombadil above. No specific x has any preference over another. The preferences are all detailed in your expectations (your explanation of the universe), which assign meaning to specific indices.
(Note btw that in the OP you are missing one "imath" tag in the paragraph right above what I quoted, when you mention [imath]f_0[/imath])
I have fixed the post. Thank you for pointing out the error.
Does that sound about right?
Yeah, I think you have it about right.

 

Have fun -- Dick


The one thing that I don’t understand here is how the constraints that all flaw-free explanations obey are satisfied without consideration of how the elements behave. Isn’t it the behavior of the elements that the explanation tries to explain? I suspect that these may be two different issues, but I can’t see what it is that separates them.

You miss what is, philosophically, perhaps the most important issue of my work. These are constraints on an acceptable explanation, not constraints on what is being explained. Essentially, what I am saying is that it is a waste of time to consider an explanation based upon ontological elements whose behavior violates the relationships implied by that equation.

 

Perhaps it is helpful to point out here also that those symmetry constraints "on all flaw-free explanations" spring from the fact that the explicit meaning of the "data to be explained" is fundamentally unknown.

 

I.e. because we are ignorant of the explicit meaning of the data, our world model will have certain symmetries to it. That is true as long as we have not added extraneous and unnecessary assumptions to our worldview about the meaning of the data.

 

So even when you have a handful of different specific explanations (worldviews), you can say they all must obey the fundamental equation, or they have made indefensible/unnecessary assumptions about the meaning of the data.

 

For me, one especially interesting thing about this analysis is that those symmetry constraints seem to yield a way to define entities in a meaningful way in the first place. I.e. instead of considering the issue in terms of "let's see what specific objects we can first spot, and then let's measure how they behave", it's more like "here's raw meaningless data, and here's how to interpret it in terms of objects that behave in simple ways".

 

Anyway, I hope that explained why we can say that these constraints are satisfied by a flaw-free explanation without considering any specific behaviour of the elements (of the explanation) at all. If not, I'll make another attempt to explain it better.

 

-Anssi


I am sorry Anssi. I am presuming a facility with mathematics which you do not possess. It is nothing complex; it is little more than a facility developed very quickly when one spends a little time doing algebra.

 

Yeah, just learning all this stuff from scratch, so it's easy for me to also forget some rules that we've already used. Like;

 

You are not using the product rule of differentiation properly.

What you actually get from that differentiation is

[math]\frac{\partial}{\partial t}(e^{-iK(S_2+S_r)t}\vec{\Phi})=\left\{\frac{\partial}{\partial t}e^{-iK(S_2+S_r)t}\right\}\vec{\Phi}+ e^{-iK(S_2+S_r)t}\frac{\partial}{\partial t}\vec{\Phi}[/math]

 

Doh!

 

So, regarding what I messed up in #64: I tried it again with pen and paper and ended up with the correct result. Like so:

 

[math]\left\{\frac{\partial}{\partial t}e^{-iK(S_2+S_r)t}\right\}\vec{\Phi}+ e^{-iK(S_2+S_r)t}\frac{\partial}{\partial t}\vec{\Phi}[/math]

 

 

[math]=[/math]

 

 

[math]-iK(S_2+S_r) e^{-iK(S_2+S_r)t} \vec{\Phi} + e^{-iK(S_2+S_r)t} \frac{\partial}{\partial t}\vec{\Phi} [/math]

 

Putting that result back into:

[math]K\frac{\partial}{\partial t}(e^{-iK(S_2+S_r)t}\vec{\Phi}) + iK\left(S_2+S_r\right)e^{-iK(S_2+S_r)t}\vec{\Phi}[/math]

 

And factoring out the terms:

[math]iK\left(S_2+S_r\right) e^{-iK(S_2+S_r)t} \vec{\Phi}[/math]

&

[math] -iK\left(S_2+S_r\right)e^{-iK(S_2+S_r)t} \vec{\Phi}[/math]

 

We end up with:

 

[math]Ke^{-iK(S_2+S_r)t} \frac{\partial}{\partial t}\vec{\Phi}[/math]

 

And the left hand side of that equation:

 

[math]\vec{\alpha}_1\cdot \vec{\nabla}\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0[/math]

 

I suppose that turns into:

 

[math]\vec{\alpha}_1\cdot \vec{\nabla} e^{-iK(S_2+S_r)t} \vec{\Phi} + g(\vec{x}) e^{-iK(S_2+S_r)t} \vec{\Phi}[/math]

 

So overall that's:

 

[math]\vec{\alpha}_1\cdot \vec{\nabla} e^{-iK(S_2+S_r)t} \vec{\Phi} + g(\vec{x}) e^{-iK(S_2+S_r)t} \vec{\Phi} = Ke^{-iK(S_2+S_r)t} \frac{\partial}{\partial t}\vec{\Phi}[/math]

 

And:

 

...If we now substitute [imath]\vec{\Psi}_0 = e^{-iK(S_2+S_r)t}\vec{\Phi}[/imath] and divide the entire equation by [imath]e^{-iK(S_2+S_r)t}[/imath] we end up with exactly

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}\vec{\Phi} = K\frac{\partial}{\partial t}\vec{\Phi}, [/math]

 

Indeed!

Phew! :D

 

At this point, it is important to remember that [imath]g(\vec{x})[/imath] is actually a sum over integrals, each of which is multiplied by a different beta operator. Thus it is that the left hand side of the equation is actually a sum over a great many terms, every one of which is multiplied by a different anti-commuting operator, while the right hand side contains no such operators.

 

It is important here to remember that one of the other basic constraints placed upon our fundamental equation was that the differential with respect to t of the function [imath]\vec{\Psi}[/imath] had to vanish. This, together with the arguments used to get to [imath]\vec{\Phi}[/imath], brings us to the fact that the partial with respect to t of [imath]\vec{\Phi}[/imath] can, at worst case, be constant.

If we do indeed have the correct function [imath]\vec{\Phi}[/imath] (under the approximations we have made) then the right hand side of our equation is a constant times [imath]\vec{\Phi}[/imath] (under our definition of energy, energy is conserved). It follows, as the night the day, that the left hand side must also yield that same constant times [imath]\vec{\Phi}[/imath]. Thus it is that we conclude that the mathematical operator [imath]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}[/imath], when operating on [imath]\vec{\Phi}[/imath], is identical to the mathematical operator [imath]K\frac{\partial}{\partial t}[/imath], when operating on [imath]\vec{\Phi}[/imath].

 

This is what leads to my writing down

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}=K\frac{\partial}{\partial t}[/math]

 

Hmmm... I'm not really sure I understand completely your text explanation, but on the other hand it seems valid to me that if

 

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}\vec{\Phi} = K\frac{\partial}{\partial t}\vec{\Phi}[/math]

 

then

 

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}=K\frac{\partial}{\partial t}[/math]

 

So I am now talking about the operators themselves and not the result obtained when they act upon [imath]\vec{\Phi}[/imath]. When I square these two elements (the two different expressions which yield identical results) the alpha and beta operators all drop out. To see that, you have to understand exactly how the two factors are multiplied out under the rules of mathematics.

...

There are two kinds of terms in that sum. There are what are called “direct terms”, which are essentially simple squares where the indices of the two terms being multiplied are the same, and what are called “cross terms”, which are the terms where the indices are different. The direct terms all yield exactly the same result: the alpha or beta operator is squared and, since the indices are identical, the result is exactly one half. The functions connected to those indices are simply squared.

 

Okay. For a while I was a bit puzzled by where the "one half" comes from, because I had forgotten the definitions of anti-commutation, but I dug that information back up from post #86 and it sounds all valid to me now.

 

The interesting phenomena are the cross terms. Every time a term [imath]a_if_i(x)a_jf_j(x)[/imath] appears, a term [imath]a_jf_j(x)a_if_i(x)[/imath] also appears. [imath]f_i(x)f_j(x)[/imath] is identical to [imath]f_j(x)f_i(x)[/imath] as they are simple mathematical functions and they may be factored; however, the remaining factor [imath](a_ia_j+a_ja_i)=0[/imath] since these operators anti-commute. Thus it is that all the “cross terms” vanish identically.

 

Yes, got it.

 

Also, when you say "[imath]\vec{x}_1[/imath] being the only variable not integrated over", you mean the one in the sum? (since it appears twice in the equation) I.e. [imath]g(\vec{x}_1)[/imath] stands for that integral with that one variable being left out from the integration? (If so, I misinterpreted this at first)

Here I do not understand what you mean by “the one in the sum”.

 

I mean, because [imath] x_1 [/imath] appears twice in the equation:

 

[math]\vec{\alpha}_1\cdot \vec{\nabla}\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0\ + iK\left(S_2+S_r\right)\vec{\Psi}_0[/math]

 

First outside the integral, and then inside the integral. So I was wondering about [imath]g(\vec{x}_1)[/imath], and what do you mean exactly by [imath]\vec{x}_1[/imath] being the only variable not integrated over;

 

Does it mean, that the [imath]\vec{x}_1[/imath], that was inside the integral [math]\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r[/math], is going to be left out from the integration?

 

Thanks a lot for the math help once again

 

-Anssi


Hmmm... I'm not really sure I understand completely your text explanation, but on the other hand it seems valid to me that if

 

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}\vec{\Phi} = K\frac{\partial}{\partial t}\vec{\Phi}[/math]

 

then

 

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}=K\frac{\partial}{\partial t}[/math]

As long as you accept it as a valid expression, fine; however, you should realize that, as a general relationship, it is false. Operating on an arbitrary function, the two operators will yield totally different consequences. The equality of the operators is true only when those operators are operating on a function which is a solution to the fundamental equation (the only case of interest to us).
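A familiar one-dimensional analogue of that caveat (not part of the deduction itself): the operator [imath]\frac{d}{dx}[/imath] and the operator “multiply by [imath]\lambda[/imath]” agree on the particular function [imath]e^{\lambda x}[/imath],

[math]\frac{d}{dx}e^{\lambda x}=\lambda e^{\lambda x},[/math]

but they are certainly not equal when applied to arbitrary functions.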
First outside the integral, and then inside the integral. So I was wondering about [imath]g(\vec{x}_1)[/imath], and what do you mean exactly by [imath]\vec{x}_1[/imath] being the only variable not integrated over;

 

Does it mean, that the [imath]\vec{x}_1[/imath], that was inside the integral [math]\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r[/math], is going to be left out from the integration?

The expression “[imath]dV_r[/imath]” is used to express the range of variables being integrated over (the subscript “r” refers to all the “remaining” arguments: i.e., those not included in [imath]\vec{\Psi}_0[/imath]). If you go back and look at the arguments used to justify the separation of [imath]\vec{\Psi}_1[/imath] into [imath]\vec{\Psi}_0[/imath] and [imath]\vec{\Psi}_r[/imath] you should remember that [imath]\vec{\Psi}_r[/imath] needs to be a function of [imath]\vec{x}_1[/imath].

 

There are two kinds of integrals which one can define: an indefinite integral, which is a function (integration here being essentially the opposite of differentiation), and a definite integral (where the integral is taken over some specific range of arguments). The “definite” integral is simply a number. In our case, we are integrating over all arguments except [imath]\vec{x}_1[/imath]. That being the case, the finished result will still be a function of [imath]\vec{x}_1[/imath]: i.e., the “number” one gets for all those definite integrals (over all the “remaining” arguments) depends upon what value [imath]\vec{x}_1[/imath] happens to have.
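A toy two-variable example of that bookkeeping (chosen only for illustration): integrating out [imath]x_2[/imath] over a definite range leaves a number which still depends on [imath]x_1[/imath],

[math]\int_0^1x_1x_2\,dx_2=x_1\int_0^1x_2\,dx_2=\frac{x_1}{2}.[/math]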

Thanks a lot for the math help once again
You are quite welcome. You must remember that mathematics itself is a full field all its own and we are only touching on a few critical issues here. I hope you don't get bored with the subject.

 

Have fun – Dick


As long as you accept it as a valid expression, fine; however, you should realize that, as a general relationship, it is false. Operating on an arbitrary function, the two operators will yield totally different consequences. The equality of the operators is true only when those operators are operating on a function which is a solution to the fundamental equation (the only case of interest to us).

 

Oh, right, of course. Well, let me just comment on what parts exactly I am not sure I understand, as they probably have something to do with earlier explanations that are still somewhat fuzzy in my head :I

 

It is important here to remember one of the other basic constraints placed upon our fundamental equation was that the differential with respect to t of the function [imath]\vec{\Psi}[/imath] had to vanish.

 

That I understand; the shift symmetry regarding t.

 

This, together with the arguments used to get to [imath]\vec{\Phi}[/imath], brings us to the fact that the partial with respect to t of [imath]\vec{\Phi}[/imath] can, at worst case, be constant.

 

That I don't understand...

 

If we do indeed have the correct function [imath]\vec{\Phi}[/imath] (under the approximations we have made) then the right hand side of our equation is a constant times [imath]\vec{\Phi}[/imath] (under our definition of energy, energy is conserved).

 

...I remember this energy conservation stuff was mentioned earlier on in the presentation (regarding the constant K) and I don't think I understood it very well back then either. I am not sure what is our definition of energy here.

 

But taking those things on faith...

 

It follows, as the night the day, that the left hand side must also yield that same constant times [imath]\vec{\Phi}[/imath]. Thus it is that we conclude that the mathematical operator [imath]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}[/imath], when operating on [imath]\vec{\Phi}[/imath], is identical to the mathematical operator [imath]K\frac{\partial}{\partial t}[/imath], when operating on [imath]\vec{\Phi}[/imath].

 

This is what leads to my writing down

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}=K\frac{\partial}{\partial t}[/math]

 

...I think I understand that part.

 

You are quite welcome. You must remember that mathematics itself is a full field all its own and we are only touching on a few critical issues here. I hope you don't get bored with the subject.

 

Well, I don't find mathematics itself too interesting, but I would like to really understand your presentation properly; I don't think I'm getting bored by that too easily. I do find it somewhat interesting, though, how mathematics is being used as a tool to find not-so-obvious tautological relationships. I mean, I think I understand what you mean by the difference between squirrel thought and rigorous logical examination of a subject.

 

 

If we now multiply the original equation by the respective sides of this identity, recognizing that the multiplication of the alpha and beta operators yields either one half (for all the direct terms) or zero (for all the cross terms) and defining the resultant of [imath]g(\vec{x})g(\vec{x})[/imath] to be [imath]\frac{1}{2}G(\vec{x})[/imath] (note that all alpha and beta operators have vanished), we can write the differential equation to be solved as

[math] \nabla^2\vec{\Phi}(\vec{x},t) + G(\vec{x})\vec{\Phi}(\vec{x},t)= 2K^2\frac{\partial^2}{\partial t^2}\vec{\Phi}(\vec{x},t).[/math]

 

Well I tried to perform that algebra but I wasn't able to :(

It's just my embarrassing math skills again, so please help me out a bit... By "multiply the original equation by the respective sides of this identity", I guess you mean:

 

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\} \left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}\vec{\Phi} = K\frac{\partial}{\partial t} K\frac{\partial}{\partial t} \vec{\Phi}[/math]

 

First, the right hand side looks to me like it goes:

 

[math]K^2 \frac{\partial^2 }{\partial t^2} \vec{\Phi}[/math]

 

And the left hand side;

 

[math]\left\{ (\vec{\alpha}\cdot \vec{\nabla}) (\vec{\alpha}\cdot \vec{\nabla}) + (\vec{\alpha}\cdot \vec{\nabla}) g(\vec{x}) + g(\vec{x}) (\vec{\alpha}\cdot \vec{\nabla}) + g(\vec{x}) g(\vec{x}) \right\} \vec{\Phi}

[/math]

 

 

[math]=[/math]

 

 

[math]\left\{ \vec{\alpha}^2 \cdot \vec{\nabla}^2 + (\vec{\alpha}\cdot \vec{\nabla}) g(\vec{x}) + g(\vec{x}) (\vec{\alpha}\cdot \vec{\nabla}) + \frac{1}{2}G(\vec{x}) \right\} \vec{\Phi}

[/math]

 

Hmmm, I guess that first term [math]\vec{\alpha}^2 \cdot \vec{\nabla}^2 = \frac{1}{2} \vec{\nabla}^2[/math], and looking at your final result I suppose you multiply both sides by 2 at some point, making the right side [imath]2K^2 \frac{\partial^2 }{\partial t^2} \vec{\Phi}[/imath] so I suppose I'm somewhat on the right track anyway... But apart from that, I'm a bit stuck here.

 

Hmmm, and looking at your result, seems like all I'm missing is how to get rid of those 2 terms in the middle there, maybe it's just something simple in the anti-commutation properties that I'm missing here... :I

 

-Anssi


Anssi, I appreciate your interest in the details as the details are the essence of the proof.

That I don't understand...
It is pretty simple: in our derivation, we have been using factors such as [imath]e^{ikt}[/imath] to shift the explicit differential with respect to t by a constant (in this case the constant is +ik). If we started with a constant (such as zero) and we change the definition of the function defined by the differential equation such that the value of the time derivative is changed by a constant (by factoring out such a factor from the current function), then the result must be different from zero by a constant. If you go back to the details of our derivation, you will discover that we have used exactly that procedure every time we changed the differential equation: i.e., the partial with respect to t of that final function (in this case [imath]\vec{\Phi}[/imath]) can, at worst, be a constant, as every change in the function amounted to a shift by a constant.
...I remember this energy conservation stuff was mentioned earlier on in the presentation (regarding the constant K) and I don't think I understood it very well back then either. I am not sure what is our definition of energy here.
Sorry, I tend to get ahead of myself when trying to make clarifying statements. What we actually have is the fact that the partial with respect to t is a constant (the value of the constant can be shifted by factoring out those exponential functions I just talked about). “A constant” means it doesn't change (it is thus a conserved quantity; all we are really doing above is shifting the zero reference): i.e., something (which has not yet been defined) which is obtained via a differential with respect to t is conserved. My actual definition of “Energy” occurs after I deduce Schrödinger's equation. At that point, I define this differential (or rather a specific constant times that differential) to be “Energy”.
Meanwhile, the fact that the Schroedinger equation is an approximate solution to my equation leads me to put forth a few more definitions. Note to Buffy: there is no presumption of reality in these definitions; they are no more than definitions of abstract relationships embedded in the mathematical constraint of interest to us. That is, these definitions are entirely in terms of the mathematical representation and are thus defined for any collection of indices which constitute references to the elements the function [imath]\vec{\Psi}[/imath] was defined to explain.

 

First, I will define “the Energy Operator” as [imath]i\hbar\frac{\partial}{\partial t}[/imath] (and thus, the conserved quantity required by the fact of shift symmetry in the t index becomes “energy”: i.e., energy is conserved by definition). A second definition totally consistent with what has already been presented is to define the expectation value of “energy” to be given by

[math]E=i\hbar\int\vec{\Psi}^\dagger\cdot\frac{\partial}{\partial t}\vec{\Psi}dV.[/math]
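
As a simple illustration of how that definition operates (a stand-alone example, not part of the deduction itself): if [imath]\vec{\Psi}[/imath] happens to have the form [imath]\vec{\Psi}=\vec{\phi}(\vec{x})e^{-i\frac{E_0}{\hbar}t}[/imath] with [imath]\int\vec{\phi}^\dagger\cdot\vec{\phi}dV=1[/imath], then [imath]\frac{\partial}{\partial t}\vec{\Psi}=-i\frac{E_0}{\hbar}\vec{\Psi}[/imath] and the integral above yields

[math]E=i\hbar\left(-i\frac{E_0}{\hbar}\right)\int\vec{\phi}^\dagger\cdot\vec{\phi}dV=E_0,[/math]

exactly the constant appearing in the exponent.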

Well, I don't find mathematics itself too interesting, but I would like to really understand your presentation properly, so I don't think I'm getting bored by that too easily. That said, I do find it somewhat interesting how mathematics is being used as a tool to find not-so-obvious tautological relationships. I mean, I think I understand what you mean by the difference between squirrel thought and rigorous logical examination of a subject.
When it comes to deep, rigorous logical examination, mathematics is almost essential. Feynman's position was that “mathematics is the distilled essence of logic”. In logical analysis you really cannot get very far without understanding mathematics.
Hmmm, and looking at your result, seems like all I'm missing is how to get rid of those 2 terms in the middle there, maybe it's just something simple in the anti-commutation properties that I'm missing here... :I
It is just everyday algebra. Notice that

[math](a+b)^2=(a+b)(a+b)=a(a+b)+b(a+b)=a^2+ab+ba+b^2=a^2+(ab+ba)+b^2[/math].

 

With ordinary numbers (which commute), (ab+ba)=2ab; however, if a and b anti-commute (i.e., change sign when their order is inverted) then that "cross term" vanishes.
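
A minimal numerical sketch of that fact (the Pauli matrices here are merely a convenient stand-in of my own choosing for a pair of anti-commuting operators; they are not part of the presentation itself):

[code]
import numpy as np

# sigma_x and sigma_y anti-commute, so they can play the roles of a and b.
a = np.array([[0, 1], [1, 0]], dtype=complex)     # sigma_x
b = np.array([[0, -1j], [1j, 0]], dtype=complex)  # sigma_y

cross = a @ b + b @ a            # the "cross term" (ab + ba)
square = (a + b) @ (a + b)       # (a + b)^2

print(np.allclose(cross, 0))                # True: the cross term vanishes
print(np.allclose(square, a @ a + b @ b))   # True: only the direct terms survive
[/code]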

 

Actually, your logic is only valid with regard to the first term and I suspect you are missing some important portions as the [imath]\vec{\alpha}[/imath] is actually a sum of two terms (a two dimensional vector which is multiplying [imath]\vec{\nabla}[/imath] via a vector dot product). The factor [imath]\alpha_{ix}^2=\alpha_{i\tau}^2=1/2[/imath] may be factored. The relevant cross term, [imath](\alpha_{ix}\alpha_{i\tau}+\alpha_{i\tau}\alpha_{ix})[/imath], of course, vanishes, which eliminates the otherwise problematical [imath]\frac{\partial}{\partial x}\frac{\partial}{\partial \tau}[/imath] term. The factor [imath]g(\vec{x})^2[/imath] is nowhere near as simple as you have represented it. What you need to do is to look again at post #77 in this thread

Note first that every element of the left hand side has an alpha or beta operator as a factor (the right hand side has none). The actual structure of the left hand side squared is essentially analogous to

[math]\left\{a_1f_1(x)+a_2f_2(x)+a_3f_3(x)...a_nf_n(x)\right\}\left\{a_1f_1(x)+a_2f_2(x)+a_3f_3(x)...a_nf_n(x)\right\}[/math]

 

Where all the “[imath]a_i[/imath]” represent specific alpha or beta operators and [imath]f_i(x)[/imath] represent the result of the associated operation (either an x derivative or an appropriate integral). When one multiplies such a thing out, one gets a sum consisting of every term of the set multiplying every term of the set.

[math]a_1f_1(x)\left\{a_1f_1(x)+a_2f_2(x)+a_3f_3(x)...a_nf_n(x)\right\}+a_2f_2(x)\left\{a_1f_1(x)+a_2f_2(x)+a_3f_3(x)...a_nf_n(x)\right\}+[/math]

[math]a_3f_3(x)\left\{a_1f_1(x)+a_2f_2(x)+a_3f_3(x)...a_nf_n(x)\right\}+...+a_nf_n(x)\left\{a_1f_1(x)+a_2f_2(x)+a_3f_3(x)...a_nf_n(x)\right\}[/math]

 

or, multiplying out,

[math]a_1f_1(x)a_1f_1(x)+a_1f_1(x)a_2f_2(x)+a_1f_1(x)a_3f_3(x)...a_1f_1(x)a_nf_n(x)+...+[/math]

[math]a_kf_k(x)a_1f_1(x)+a_kf_k(x)a_2f_2(x)+...a_kf_k(x)a_nf_n(x)+...+a_nf_n(x)a_nf_n(x)[/math]

 

There are two kinds of terms in that sum. There are what are called “direct terms”, which are essentially simple squares where the indices of the two terms being multiplied are the same, and what are called “cross terms”, which are the terms where the indices are different. The direct terms all yield exactly the same result: the alpha or beta operator is squared and, since the indices are identical, the result is exactly one half. The functions connected to those indices are simply squared.

 

The interesting phenomena are the cross terms. Every time a term [imath]a_if_i(x)a_jf_j(x)[/imath] appears, a term [imath]a_jf_j(x)a_if_i(x)[/imath] also appears. [imath]f_i(x)f_j(x)[/imath] is identical to [imath]f_j(x)f_i(x)[/imath] as they are simple mathematical functions and they may be factored; however, the remaining factor [imath](a_ia_j+a_ja_i)=0[/imath] since these operators anti-commute. Thus it is that all the “cross terms” vanish identically.

You have explicitly removed the alpha times g terms (your two terms in the middle). Since each and every term in the sum going to make up the g terms has a beta operator, every time an alpha beta product occurs, a beta alpha product with exactly the same other factor also appears; thus that other factor may be factored out. That factor is being multiplied by [imath](\alpha \beta+ \beta \alpha)[/imath] and since [imath](\alpha \beta=-\beta \alpha)[/imath] every term vanishes.

 

The g squared term is again a product of two sums, every term of which has a beta factor. Once again, you get a whole slew of beta squared terms which yield 1/2 plus all possible cross terms. Again, because the two functions multiplying those specific beta operators are the same in the two commuted factors, they can be factored out. The other factor, the two “different” beta operators, anti-commute and thus are exact negatives of one another: i.e., all the cross terms vanish. One half may be factored from all the remaining terms and we simply define G to be that complete sum.
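
If it helps to see that whole pattern with concrete objects, here is a minimal numerical sketch (the scaled Pauli matrices and the sample numbers are stand-ins of my own choosing, not part of the deduction): three pairwise anti-commuting operators, each squaring to one half, multiplied by arbitrary "functions".

[code]
import numpy as np

# Three pairwise anti-commuting operators, each of which squares to 1/2,
# represented as Pauli matrices divided by sqrt(2).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
ops = [m / np.sqrt(2) for m in (sx, sy, sz)]

f = [1.7, -0.4, 2.3]   # arbitrary numbers standing in for the f_i(x)

# Square the full sum: every cross term should vanish ...
total = sum(fi * ai for fi, ai in zip(f, ops))
squared = total @ total

# ... leaving exactly one half times the sum of the squared functions.
expected = 0.5 * sum(fi**2 for fi in f) * np.eye(2)
print(np.allclose(squared, expected))   # True
[/code]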

 

I hope I have cleared up your current problems.

 

Have fun -- Dick


  • 2 weeks later...

Sorry I've been slow, I've just been a bit busy, and didn't have time to reply last weekend either :(

 

Anssi, I appreciate your interest in the details as the details are the essence of the proof.

 

Yeah, I really would like to understand the logical mechanisms at play here... So I need to pay attention to the details as I am not at all familiar with the math that's being used :)

 

It is pretty simple: in our derivation, we have been using factors such as [imath]e^{ikt}[/imath] to shift the explicit differential with respect to t by a constant (in this case the constant is +ik). If we started with a constant (such as zero) and we change the definition of the function defined by the differential equation such that the value of the time derivative is changed by a constant (by factoring out such a factor from the current function), then the result must be different from zero by a constant.

 

Okay, I think I understand that little bit now, but it did raise a couple of questions in my head.

 

So, in the original equation, where the differential of t vanishes, it basically meant that changes in "t" in the input arguments did not change the end result (the probability), i.e. shifting the patterns inside the x,tau,t-space doesn't change the expectations, as they are a function of the patterns, but not the placement of the origin of the x,tau,t-space.

 

So, if I imagine a graph that plots "t" on the x-axis and "probability" on the y-axis, then with the original function [imath]\Psi[/imath] that graph is a horizontal straight line?

 

With [imath]\Phi[/imath], the "value of the time derivative is changed by a constant", does that essentially mean that that graph would be a sloped straight line? I.e. after this redefinition, the probability can - at worst case - change in linear fashion due to changes in "t"? I.e. "change is constant" (no pun intended)

 

If I got that right, then looking at the expression [math]K\frac{\partial}{\partial t} \vec{\Phi}[/math]... Hmmm, so I figure it means [math]\frac{\partial}{\partial t} \vec{\Phi}[/math] by itself is not necessarily zero (as if it were, multiplication by K would not do anything), and then "K" amounts to a scale factor to that derivative. Actually that sounds a bit like something you've said before so I'm hoping I've interpreted this right. (Had a few other possible interpretations in my head but I think this one makes the most sense now :D)

 

It is just everyday algebra. Notice that

[math](a+b)^2=(a+b)(a+b)=a(a+b)+b(a+b)=a^2+ab+ba+b^2=a^2+(ab+ba)+b^2[/math].

 

With ordinary numbers (which commute), (ab+ba)=2ab; however, if a and b anti-commute (i.e., change sign when their order is inverted) then that "cross term" vanishes.

 

I'm afraid I'm not sure how to apply your explanation to get rid of the two middle terms. You seem to explain how [math](a+b)^2=1[/math] and can be factored. But I don't know how [math](\vec{\alpha}\cdot \vec{\nabla}) g(\vec{x}) + g(\vec{x}) (\vec{\alpha}\cdot \vec{\nabla})[/math] is the same as [math](a+b)^2[/math].

 

Actually, your logic is only valid with regard to the first term and I suspect you are missing some important portions as the [imath]\vec{\alpha}[/imath] is actually a sum of two terms (a two dimensional vector which is multiplying [imath]\vec{\nabla}[/imath] via a vector dot product).

 

Hmm, I'm afraid I'm really confused over this as well :I

Um... so, I suppose it's valid to write the left hand side (after squaring) as:

 

[math]\left\{ \vec{\alpha}^2 \cdot \vec{\nabla}^2 + (\vec{\alpha}\cdot \vec{\nabla}) g(\vec{x}) + g(\vec{x}) (\vec{\alpha}\cdot \vec{\nabla}) + g(\vec{x}) g(\vec{x}) \right\} \vec{\Phi}[/math]

 

From that point on I'm a bit lost. Yes, I thought [math]\vec{\alpha}^2[/math] would simply equal [imath]\frac{1}{2}[/imath], making [math]\vec{\alpha}^2 \cdot \vec{\nabla}^2 = \frac{1}{2} \vec{\nabla}^2[/math].

 

Since you are saying "your logic is only valid with regard to the first term and I suspect you are missing some important portions as the [imath]\vec{\alpha}[/imath] is actually a sum of two terms (a two dimensional vector which is multiplying [imath]\vec{\nabla}[/imath] via a vector dot product)", I'm not sure anymore how the first term is valid... Perhaps it was an accident :D

 

I mean, considering that the alpha is actually two terms, what I have in my head now;

 

[math]\vec{\alpha}^2 = ( \alpha_{ix} + \alpha_{i\tau} )^2 = \alpha_{ix}^2 + ( \alpha_{ix} \alpha_{i\tau} + \alpha_{i\tau} \alpha_{ix} ) + \alpha_{i\tau}^2 = \frac{1}{2} + 0 + \frac{1}{2} = 1[/math]

 

So that would imply that [math]\vec{\alpha}^2 \cdot \vec{\nabla}^2 = \vec{\nabla}^2[/math].... That can't be right, I guess I must be doing something invalid here already.

 

The factor [imath]\alpha_{ix}^2=\alpha_{i\tau}^2=1/2[/imath] may be factored.

 

By this, do you mean simply that [math] \alpha_{ix}^2 + \alpha_{i\tau}^2 [/math] - which appeared in the squaring of [imath]\vec{\alpha}[/imath] - equals to 1 and hence can be factored out? Otherwise I don't know how [imath]1/2[/imath] could be factored...

 

The relevant cross term, [imath](\alpha_{ix}\alpha_{i\tau}+\alpha_{i\tau}\alpha_{ix})[/imath], of course, vanishes, which eliminates the otherwise problematical [imath]\frac{\partial}{\partial x}\frac{\partial}{\partial \tau}[/imath] term.

 

I understand how that cross term vanishes, but I do not understand how that removes the [imath]\frac{\partial}{\partial x}\frac{\partial}{\partial \tau}[/imath]... And also, do you mean rather [imath]\frac{\partial}{\partial x}+ \frac{\partial}{\partial \tau}[/imath], or am I missing something again?

 

The factor [imath]g(\vec{x})^2[/imath] is nowhere near as simple as you have represented it.

 

Hmmm, what I did there was simply substitute [imath]g(\vec{x})g(\vec{x})[/imath] with [imath]\frac{1}{2}G(\vec{x})[/imath], which is what I thought you meant in the OP when you said; "...defining the resultant of [imath]g(\vec{x})g(\vec{x})[/imath] to be [imath]\frac{1}{2}G(\vec{x})[/imath]"

 

But looking at your explanation now, I guess it wasn't that simple. I think I'd like to clear out my confusions with the issues above before I'll try to understand that part though... I hope you have a good idea about what I'm missing exactly...

 

-Anssi


I am very sorry Anssi. I am just now reading your latest post, and it is just the simple fact that you are not used to doing mathematics: steps that anyone used to doing a lot of mathematical algebra would take for granted, you simply overlook. Mathematical representation is actually a very demanding thing. One can never be sloppy about the characteristics of the things one is working with. I will temporarily skip past the time derivatives, as I think your confusion is a little more complex there, and skip directly to the anti-commutation issues.

It is just everyday algebra. Notice that

[math](a+b)^2=(a+b)(a+b)=a(a+b)+b(a+b)=a^2+ab+ba+b^2=a^2+(ab+ba)+b^2[/math].

 

With ordinary numbers (which commute), (ab+ba)=2ab; however, if a and b anti-commute (i.e., change sign when their order is inverted) then that "cross term" vanishes.

I think you understand what is being said there but I could be mistaken; your real error occurs when you attempt to apply the relationship to [imath]\vec{\alpha} \cdot \vec{\nabla}[/imath] (which is not an ordinary number by any means) multiplied by itself,
From that point on I'm a bit lost. Yes, I thought [math]\vec{\alpha}^2[/math] would simply equal [imath]\frac{1}{2}[/imath], making [math]\vec{\alpha}^2 \cdot \vec{\nabla}^2 = \frac{1}{2} \vec{\nabla}^2[/math].
You are failing to take into account the fact that [imath]\vec{\alpha} \cdot \vec{\nabla}[/imath] is not just one factor times another, it is a “vector dot product”. First, [imath]\vec{\alpha}[/imath] is actually a vector which can be written [imath]\alpha_{x }\hat{x}+\alpha_{\tau}\hat{\tau}[/imath] where [imath]\hat{x}[/imath] and [imath]\hat{\tau}[/imath] are unit vectors pointing in the x and tau directions respectively. Likewise [imath] \vec{\nabla}[/imath] is also a vector operator which has components in both the x direction and the tau direction: [imath] \vec{\nabla}=\frac{\partial}{\partial x}\hat{x}+\frac{\partial}{\partial \tau}\hat{\tau}[/imath]. Understanding that, you should further understand that the dot product of the two is [imath]\alpha_x \frac{\partial}{\partial x}+\alpha_\tau \frac{\partial}{\partial \tau}[/imath] (the dot product has no cross terms since [imath]\hat{x}\cdot\hat{\tau}[/imath] vanishes). You should think in terms of the fact that the scalar product of a vector with itself is exactly the square of the magnitude of that vector (the sum of the squares of its components): i.e., the scalar product never produces cross terms between the components.

 

The square of that term (which is in fact exactly [imath](\vec{\alpha} \cdot \vec{\nabla}) [/imath] squared) is actually the expression,

[math] \left[\alpha_x\frac{\partial}{\partial x}+\alpha_\tau\frac{\partial}{\partial \tau}\right]^2[/math]

 

which expands to,

[math]\alpha_x \frac{\partial}{\partial x}\alpha_x \frac{\partial}{\partial x}+\alpha_x\frac{\partial}{\partial x}\alpha_\tau\frac{\partial}{\partial \tau}+\alpha_\tau\frac{\partial}{\partial \tau}\alpha_x\frac{\partial}{\partial x}+\alpha_\tau\frac{\partial}{\partial \tau}\alpha_\tau\frac{\partial}{\partial \tau}[/math].

 

Collecting terms, realizing that the partials and the alpha operators commute, one obtains the following:

[math] (\alpha_x )^2 \frac{\partial^2}{\partial x^2}+\left(\alpha_x\alpha_\tau \frac{\partial}{\partial x}\frac{\partial}{\partial \tau}+\alpha_\tau\alpha_x\frac{\partial}{\partial \tau}\frac{\partial}{\partial x}\right)+(\alpha_\tau)^2\frac{\partial^2}{\partial \tau^2}[/math].

 

The fact that the probability function cannot depend upon tau implies [imath]\vec{\Psi}(x,\tau)[/imath] can be factored into a product function (one function dependent upon x and the other upon tau) which implies that [imath]\frac{\partial}{\partial x}\frac{\partial}{\partial \tau}=\frac{\partial}{\partial \tau}\frac{\partial}{\partial x}[/imath]: i.e., the partials with respect to x and tau commute.
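
Explicitly, writing [imath]\vec{\Psi}(x,\tau)=X(x)T(\tau)[/imath] for such a product function, the one-line check is

[math]\frac{\partial}{\partial x}\frac{\partial}{\partial \tau}X(x)T(\tau)=X'(x)T'(\tau)=\frac{\partial}{\partial \tau}\frac{\partial}{\partial x}X(x)T(\tau).[/math]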

 

So we can factor that cross partial and reduce the thing to

[math] (\alpha_x )^2 \frac{\partial^2}{\partial x^2}+(\alpha_x\alpha_\tau +\alpha_\tau\alpha_x)\frac{\partial}{\partial x}\frac{\partial}{\partial \tau}+(\alpha_\tau)^2\frac{\partial^2}{\partial \tau^2}[/math].

 

It is only when we get to here that we can start to use the anti-commuting characteristics of the alpha operator. Clearly the factor [imath](\alpha_x\alpha_\tau +\alpha_\tau\alpha_x)[/imath] vanishes and both [imath]\alpha_x^2[/imath] and [imath]\alpha_\tau^2[/imath] evaluate to 1/2 (thus the 1/2 can be factored). It follows that the final result is

[math] \frac{1}{2}\left\{\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial \tau^2}\right\}= \frac{1}{2}\nabla^2[/math].

 

Now, at this point I have to point out one additional difficulty in your analysis of the problem. It wasn't [imath](\vec{\alpha} \cdot \vec{\nabla}) [/imath] which was supposed to be squared, so what I have just done is not the correct answer. What you were supposed to square was the operator [imath](\vec{\alpha} \cdot \vec{\nabla}+g(x))[/imath]. The first thing you have to do is examine the meaning of that expression g(x). How was that function obtained? It was the result of integration over all the remaining arguments in the universe: i.e.,

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r[/math],

 

where i=#2 means i is chosen from set #2 and the factor 2 arises from the omission of the integral over [imath]\delta(\vec{x}_1-\vec{x}_j)[/imath] which would yield an identical result. This, technically, would be a sum over probably considerably more than (1,000,000,000,000,000,000,000,000)² integrals, each one multiplied by a different beta operator. That sum has a lot of terms in it and the thing we want to square has several more. The first few terms of the actual thing to be squared will each have a single alpha operator as a factor (the actual number of these terms will depend upon the dimensionality of our analysis). Those would be the terms arising from [imath](\vec{\alpha} \cdot \vec{\nabla})[/imath]. The point is that every term in the entire sum to be squared has a factor consisting of one or the other operator and they all anti-commute (if you remember I moved a portion of "f" over to the right side of the equation in order to assure that no terms lacked such an operator). Thus we know (without examining each of those terms in detail) that ALL the cross terms generated by such a square vanish and the squared alpha/beta terms which don't vanish all evaluate to the same factor, 1/2, which can be factored out. This is an example of the power of mathematics to logically discuss more things than can be conceived of by a human mind. The g(x) squared (which consists of all the squares of those beta terms) can be seen as generating a new sum

[math]g(x)^2= 4\sum_{i=\#2}^n \frac{1}{2}\left\{\int \vec{\Psi}_r^\dagger\cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r\right\}^2[/math]

 

which also constitutes the same (1,000,000,000,000,000,000,000,000)²-plus terms just mentioned but can nevertheless be written as G(x) as that is exactly what it is: some function of x. What do all these terms represent? They represent the entire rest of the universe and yield the probability of an interaction connected with the elemental entity of interest at the point called “x”. (For the great majority of any human experiments, most of these terms can be approximated by zero; however, it should be clear to you that the “correct” answer includes the impact of even a single photon from the farthest star; remember, this is a flaw-free explanation, and flaw-free means there is nothing left out.) The only result of interest to us is that the net result can be reduced to some G(x), and when ordinary physicists do physics, they essentially approximate that function (they leave out a lot more stuff than that single photon from the farthest star I just spoke about). Usually physicists consider no more than one other significant interaction: i.e., ordinarily most all of the terms can be neglected. However, in a "correct" mathematical representation, nothing can be left out. If anything is left out, it becomes an approximation.

 

This has already managed to become a fairly non-trivial response, so suppose we let the problems with exactly what the functions of the form [imath]Ae^{ikt}[/imath] and/or [imath]Ae^{ikx}[/imath] do to the solutions [imath]\vec{\Psi}[/imath] wait until you understand how the anti-commuting operators eliminate cross terms in the squares of my fundamental equation. That issue is one which you really have to understand.

 

Have fun -- Dick


I am very sorry Anssi. I am just now reading your latest post, and it is just the simple fact that you are not used to doing mathematics: steps that anyone used to doing a lot of mathematical algebra would take for granted, you simply overlook. Mathematical representation is actually a very demanding thing. One can never be sloppy about the characteristics of the things one is working with.

 

Definitely :)

It is certainly hard for me to figure out the right algebraic steps to get from one point to another; I usually have a few different options in my head and then I just try them for a fit, with varying results :) But at least it is getting easier as I learn something new with just about every post (or just understand something better)

 

I will temporarily skip past the time derivatives, as I think your confusion is a little more complex there, and skip directly to the anti-commutation issues.

 

Good, good

 

I think you understand what is being said there but I could be mistaken; your real error occurs when you attempt to apply the relationship to [imath]\vec{\alpha} \cdot \vec{\nabla}[/imath] (which is not an ordinary number by any means) multiplied by itself,

 

Yup, seems that is exactly where the error occurred! Actually it did cross my mind for a second that I'd need to expand that, but I wasn't successful with that at all. Also...

 

You are failing to take into account the fact that [imath]\vec{\alpha} \cdot \vec{\nabla}[/imath] is not just one factor times another, it is a “vector dot product”. First, [imath]\vec{\alpha}[/imath] is actually a vector which can be written [imath]\alpha_{x }\hat{x}+\alpha_{\tau}\hat{\tau}[/imath] where [imath]\hat{x}[/imath] and [imath]\hat{\tau}[/imath] are unit vectors pointing in the x and tau directions respectively. Likewise [imath] \vec{\nabla}[/imath] is also a vector operator which has components in both the x direction and the tau direction: [imath] \vec{\nabla}=\frac{\partial}{\partial x}\hat{x}+\frac{\partial}{\partial \tau}\hat{\tau}[/imath]. Understanding that, you should further understand that the dot product of the two is [imath]\alpha_x \frac{\partial}{\partial x}+\alpha_\tau \frac{\partial}{\partial \tau}[/imath] (the dot product has no cross terms since [imath]\hat{x}\cdot\hat{\tau}[/imath] vanishes).

 

...I had completely forgotten that property of the dot product. I mean, that the orthogonal terms will obviously vanish... :I

 

It is only when we get to here that we can start to use the anti-commuting characteristics of the alpha operator. Clearly the factor [imath](\alpha_x\alpha_\tau +\alpha_\tau\alpha_x)[/imath] vanishes and both [imath]\alpha_x^2[/imath] and [imath]\alpha_\tau^2[/imath] evaluate to 1/2 (thus the 1/2 can be factored). It follows that the final result is

[math] \frac{1}{2}\left\{\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial \tau^2}\right\}= \frac{1}{2}\nabla^2[/math].

 

I was able to follow that whole explanation, and don't have any questions about it now.

 

So, onwards;

 

Now, at this point I have to point out one additional difficulty in your analysis of the problem. It wasn't [imath](\vec{\alpha} \cdot \vec{\nabla}) [/imath] which was supposed to be squared, so what I have just done is not the correct answer. What you were supposed to square was the operator [imath](\vec{\alpha} \cdot \vec{\nabla}+g(x))[/imath].

 

Right.

 

The first thing you have to do is examine the meaning of that expression g(x). How was that function obtained? It was the result of integration over all the remaining arguments in the universe: i.e.,

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r[/math],

 

where i=#2 means i is chosen from set #2 and the factor 2 arises from the omission of the integral over [imath]\delta(\vec{x}_1-\vec{x}_j)[/imath] which would yield an identical result. This, technically, would be a sum over probably considerably more than (1,000,000,000,000,000,000,000,000)² integrals, each one multiplied by a different beta operator. That sum has a lot of terms in it and the thing we want to square has several more.

 

Yup.

 

The first few terms of the actual thing to be squared will each have a single alpha operator as a factor (the actual number of these terms will depend upon the dimensionality of our analysis). Those would be the terms arising from [imath](\vec{\alpha} \cdot \vec{\nabla})[/imath].

 

Right, in this case [math]\alpha_x \frac{\partial}{\partial x} + \alpha_\tau\frac{\partial}{\partial \tau}[/math]

 

And that would be followed by all the terms from [math]g(x)[/math]
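
So, using [imath]I_i[/imath] as my own shorthand for the i-th integral in [imath]g(x)[/imath] (with its factor of 2 absorbed), I suppose the whole operator to be squared would look like:

[math]\alpha_x \frac{\partial}{\partial x} + \alpha_\tau\frac{\partial}{\partial \tau} + \beta_{21}I_2 + \beta_{31}I_3 + \cdots + \beta_{n1}I_n[/math]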

 

The point is that every term in the entire sum to be squared has a factor consisting of one or the other operator and they all anti-commute (if you remember I moved a portion of "f" over to the right side of the equation in order to assure that no terms lacked such an operator). Thus we know (without examining each of those terms in detail) that ALL the cross terms generated by such a square vanish and the squared alpha/beta terms which don't vanish all evaluate to the same factor, 1/2, which can be factored out. This is an example of the power of mathematics to logically discuss more things than can be conceived of by a human mind. The g(x) squared (which consists of all the squares of those beta terms) can be seen as generating a new sum

[math]g(x)^2= 4\sum_{i=\#2}^n \frac{1}{2}\left\{\int \vec{\Psi}_r^\dagger\cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r\right\}^2[/math]

 

which also constitutes the same (1,000,000,000,000,000,000,000,000)²-plus terms just mentioned but can nevertheless be written as G(x) as that is exactly what it is: some function of x. What do all these terms represent? They represent the entire rest of the universe and yield the probability of an interaction connected with the elemental entity of interest at the point called “x”. (For the great majority of any human experiments, most of these terms can be approximated by zero; however, it should be clear to you that the “correct” answer includes the impact of even a single photon from the farthest star; remember, this is a flaw-free explanation, and flaw-free means there is nothing left out.) The only result of interest to us is that the net result can be reduced to some G(x), and when ordinary physicists do physics, they essentially approximate that function (they leave out a lot more stuff than that single photon from the farthest star I just spoke about). Usually physicists consider no more than one other significant interaction: i.e., ordinarily most all of the terms can be neglected. However, in a "correct" mathematical representation, nothing can be left out. If anything is left out, it becomes an approximation.

 

Right, I think I understand what you are saying there.

 

This has already managed to become a fairly non-trivial response, so suppose we let the problems with exactly what the functions of the form [imath]Ae^{ikt}[/imath] and/or [imath]Ae^{ikx}[/imath] do to the solutions [imath]\vec{\Psi}[/imath] wait until you understand how the anti-commuting operators eliminate cross terms in the squares of my fundamental equation. That issue is one which you really have to understand.

 

Yes, so let me try to perform that squaring once again:

 

[math]\left\{ \vec{\alpha}\cdot \vec{\nabla} + g(\vec{x}) \right\}^2 \vec{\Phi} = \left\{ K\frac{\partial}{\partial t} \right\}^2 \vec{\Phi}[/math]

 

 

 

[math]\left\{ (\vec{\alpha}\cdot \vec{\nabla})^2 + (\vec{\alpha}\cdot \vec{\nabla}) g(\vec{x}) + g(\vec{x})(\vec{\alpha}\cdot \vec{\nabla}) + g(\vec{x})^2 \right\} \vec{\Phi} = K^2\frac{\partial^2}{\partial t^2}\vec{\Phi}[/math]

 

 

 

 

[math]\left\{ \frac{1}{2}\nabla^2 + (\vec{\alpha}\cdot \vec{\nabla}) g(\vec{x}) + g(\vec{x})(\vec{\alpha}\cdot \vec{\nabla}) + \frac{1}{2}G(\vec{x}) \right\} \vec{\Phi} = K^2\frac{\partial^2}{\partial t^2}\vec{\Phi}[/math]

 

(Note to lurking observers: the resultant of [imath]g(\vec{x})^2[/imath] was defined in the first post to be [imath]\frac{1}{2}G(\vec{x})[/imath], as opposed to just [imath]G(\vec{x})[/imath], as it is in the example in the previous post.)

 

And the cross terms that were giving me trouble:

[math](\vec{\alpha}\cdot \vec{\nabla}) g(\vec{x}) + g(\vec{x})(\vec{\alpha}\cdot \vec{\nabla}) = \left\{ \alpha_x \frac{\partial}{\partial x} + \alpha_\tau\frac{\partial}{\partial \tau} \right\} g(\vec{x}) + g(\vec{x}) \left\{ \alpha_x \frac{\partial}{\partial x} + \alpha_\tau\frac{\partial}{\partial \tau} \right\} [/math]

 

From that point on it's easy to see how the multiplications by [imath]g(\vec{x})[/imath] expand, being that [imath]g(\vec{x})[/imath] is a sum of terms where each term has an anti-commuting element to it. And in the expanded expression the multiplications between anti-commuting elements occur in opposite orders on the opposite sides of the plus sign, so the whole thing just vanishes. I guess that is what I was supposed to pick up here.
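
For instance, writing [imath]I_i[/imath] again for the integral factors, and treating the non-operator parts as freely factorable like you described, one generic pair goes:

[math]\alpha_x\frac{\partial}{\partial x}\beta_{i1}I_i + \beta_{i1}I_i\alpha_x\frac{\partial}{\partial x} = \left(\alpha_x\beta_{i1}+\beta_{i1}\alpha_x\right)I_i\frac{\partial}{\partial x} = 0[/math]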

 

So removing that cross term from the equation, we are at:

 

[math]\left\{ \frac{1}{2}\nabla^2 + \frac{1}{2}G(\vec{x}) \right\} \vec{\Phi} = K^2\frac{\partial^2}{\partial t^2}\vec{\Phi}[/math]

 

And multiplying both sides by 2, we are exactly at the result you wrote down in the OP;

 

[math] \nabla^2\vec{\Phi}(\vec{x},t) + G(\vec{x})\vec{\Phi}(\vec{x},t)= 2K^2\frac{\partial^2}{\partial t^2}\vec{\Phi}(\vec{x},t)[/math]

 

I think I got it right now. Let me know if I'm still making a mistake somewhere. I'll try to continue from here soon.

 

Thanks for the help again!

 

-Anssi

