Science Forums

Deriving Schrödinger's Equation From My Fundamental Equation


Doctordick


Hi!

 

I don’t know if I fully understand what you are asking. In what way do you not think that it is relevant? As I understand it, the whole point of the interpretation is that it obeys the fundamental equation and in so doing satisfies all of the resulting constraints, so it would fail to be relevant only if it had no interesting consequences.

 

I didn't say it was irrelevant :)

 

When I said "...it makes me wonder if it is relevant here that the velocity of the elements is constant", what I was thinking about was whether the constant velocity is what gives you the clue about which element from present A is which element in present B. Now I think the answer is no.

 

What I think is being missed here is that [imath]\vec{\Psi}[/imath] does not contain the actual information about the elements that are in it.

 

Well yeah. I'm just saying that if you have a particular [imath]\vec{\Psi}[/imath], then the definitions that tell you which element is which, would be embedded in that [imath]\vec{\Psi}[/imath]. Well, I would like to hear what DD says about this.

 

The thing here is that the particular solutions are of little interest, seeing as we can’t even solve for the general solutions; as has been said several times, the general solutions can’t be solved for. Therefore, the only thing left is to learn more about the possible solutions in some other way, so what we are looking for are the constraints that all flaw-free explanations must obey.

 

If you can understand this, it might at least give you a different view of what you are asking. If you can’t understand it, I’d rather not try to go into much more detail, as it would likely take us considerably off topic, which is something I think we are already doing.

 

I understand it.

 

-Anssi


I didn't say it was irrelevant

 

When I said "...it makes me wonder if it is relevant here that the velocity of the elements is constant", what I was thinking about was whether the constant velocity is what gives you the clue about which element from present A is which element in present B. Now I think the answer is no.

 

I’m going to agree with you in that this does not, at least directly, tell us which element is which.

 

Well yeah. I'm just saying that if you have a particular [imath]\vec{\Psi}[/imath], then the definitions that tell you which element is which, would be embedded in that [imath]\vec{\Psi}[/imath]. Well, I would like to hear what DD says about this.

 

While I think I know what you are saying, I’m not so sure that the definitions are embedded in [imath]\vec{\Psi}[/imath]. In fact, at this point I don’t think that [imath]\vec{\Psi}[/imath] tells us which element is which, but I do think that there is little more that we can discuss about this until we know more about it. At this point I’m also wondering what DD has to say about it.

 

P.S. Have you been able to follow the remainder of the math for the derivation of the Schrödinger equation?


P.S. Have you been able to follow the remainder of the math for the derivation of the Schrödinger equation?

 

No, I'm a bit stuck with trying to learn the math; perhaps you can help me.

 

The second constraint will be that the probability distribution describing the rest of the universe is stationary in time: that would be that [imath]P_r[/imath] is, for practical purposes, not a function of t. If that is the case, the only form of the time dependence of [imath]\vec{\Psi}_r[/imath] which satisfies temporal shift symmetry is [imath]e^{iS_rt}[/imath].

 

I understand what is meant by supposing that "[imath]P_r[/imath] is, for practical purposes, not a function of t", but I'm running into trouble when trying to understand where [imath]e^{iS_rt}[/imath] is coming from exactly.

 

Here's DD's elaboration on the issue:

 

Not really; that line brings up the solution to the differential equation derived from the requirement of “global” shift symmetry in the argument “t”. That is,

[math] \frac{\partial}{\partial t}P_r(t) = 0,[/math]

 

or, since [imath]P_r(t)[/imath] is defined to be given by [imath]\vec{\Psi}_r^\dagger(t)\cdot\vec{\Psi}_r(t)[/imath],

[math] \left\{\frac{\partial}{\partial t}\vec{\Psi}_r^\dagger(t)\right\}\cdot\vec{\Psi}_r(t)+\vec{\Psi}_r^\dagger(t) \cdot\left\{\frac{\partial}{\partial t}\vec{\Psi}_r(t)\right\} = 0.[/math]

 

In deference to Qfwfq I should have said, the simplest form of time dependence (other than [imath]\vec{\Psi}=0[/imath]) which solves that equation is of the form [imath]e^{iS_rt}[/imath]. That form would yield a [imath]\vec{\Psi}^\dagger= e^{-iS_rt}[/imath] via the definition of the “complex conjugate” (the meaning of that [imath]\dagger[/imath] symbol). The differentiation of the product representation of P with respect to t yields two terms which differ only in their sign; thus their sum is zero.

[math]\frac{d}{dx}e^{ax}=ae^{ax}[/math]

 

Something I could have easily proved fifty years ago, but the proof currently seems to have slipped my mind. If you don't believe the factual nature of the above derivative, see Derivative of the Exponential Function.
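As a quick numerical sanity check of that derivative rule (this snippet is my own illustration, not part of DD's derivation; the constant a and the point x0 are arbitrary choices):

```python
import cmath

def numeric_derivative(f, x, h=1e-6):
    # Central-difference approximation of df/dx.
    return (f(x + h) - f(x - h)) / (2 * h)

a = 2.5 + 1.0j                      # arbitrary constant; may even be complex
f = lambda x: cmath.exp(a * x)      # f(x) = e^(ax)

x0 = 0.7
approx = numeric_derivative(f, x0)
exact = a * cmath.exp(a * x0)       # the claimed derivative: a * e^(ax)
print(abs(approx - exact))          # tiny; d/dx e^(ax) = a e^(ax) checks out
```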

 

I understand "e" means exponential function, and I understand some of its properties through its wikipedia page, but I must be missing something... I am not sure how [imath]\vec{\Psi}[/imath] being [imath]e^{iS_rt}[/imath] and:

 

[math] \left\{\frac{\partial}{\partial t}\vec{\Psi}_r^\dagger(t)\right\}\cdot\vec{\Psi}_r(t)+\vec{\Psi}_r^\dagger(t) \cdot\left\{\frac{\partial}{\partial t}\vec{\Psi}_r(t)\right\}[/math]

 

yields two terms which differ only by their sign. Tell me, what am I getting wrong:

 

[math] \frac{\partial}{\partial t} \vec{\Psi}_r^\dagger(t)\cdot\vec{\Psi}_r(t) [/math]

 

If [imath]\vec{\Psi} = e^{iS_rt}[/imath] then I suppose the above can be written:

 

[math] \frac{\partial}{\partial t} e^{-iS_rt} \cdot e^{iS_rt} [/math]

 

Or:

 

[math] \left\{\frac{\partial}{\partial t} e^{-iS_rt}\right\} \cdot e^{iS_rt} + e^{-iS_rt} \cdot \left\{\frac{\partial}{\partial t} e^{iS_rt}\right\}[/math]

 

According to the link that DD provided, [imath]\frac{d}{dx}e^{x} = e^{x}[/imath], which implies the above can be written:

 

[math] e^{-iS_rt} \cdot e^{iS_rt} + e^{-iS_rt} \cdot e^{iS_rt}[/math]

 

And by now I have no idea how to end up with terms that differ by their sign, I must have gone wrong somewhere already...

 

Also, I don't know what the S is in [imath]e^{iS_rt}[/imath]. Just a constant, like K?

 

Also I am confused as to why in DD's own example there's that additional "a":

 

[math]\frac{d}{dx}e^{ax}=ae^{ax}[/math]

 

Why isn't it just [imath]\frac{d}{dx}e^{ax}=e^{ax}[/imath] ?

 

Just remember that I don't really know much of any math, apart from whatever DD's explained so far. I'm basically studying these concepts on the fly (it's a bit tricky and involves a lot of guessing) so I can be missing something that might seem fairly obvious to you. Just suppose I don't know much of anything :)

 

-Anssi


Also I am confused as to why in DD's own example there's that additional "a":

 

 

[imath]\frac{d}{dx}e^{ax}=ae^{ax}[/imath]

 

Why isn't it just [imath]\frac{d}{dx}e^{ax}=e^{ax}[/imath] ?

 

I’m going to start here at the end because, if you are having the problem that I suspect you are, I will have to explain it first anyhow.

 

I think that you are having a problem with the chain rule. It can be written out as

 

[imath] \frac{\partial}{\partial x}(f(g(x))=\frac{\partial}{\partial g(x)}f(g(x))\frac{\partial}{\partial x}g(x)[/imath]

 

Pay particular attention to the difference between the left side of the equation and the first term on the right side. On the left side f is considered a function of g(x), which is a function of x; on the right side it is a little different. There, the first g(x) could be replaced with a variable with the same value as g(x) (that is, f is treated as a function of g(x)), while the second one is a function of x. There are other ways that this can be written out, but I suspect that this is probably the simplest notation, and the same notation that DD has been using. You may also note the similarity in appearance to the sum rule.

 

So with this in mind we can write out your equation as

 

[imath] \frac{\partial}{\partial x}(e^{ax})=\frac{\partial}{\partial ax}e^{ax}\frac{\partial}{\partial x}ax=ae^{ax}[/imath]

 

You might note that I have used as fact that the derivative of ax is a and the derivative of [imath]e^x[/imath] is [imath]e^x[/imath]. If you need a proof of any of these things, tell me and I will come up with proofs for them.
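And here is a quick numerical check of the chain rule itself (my own example; the choices f = sin and g(x) = x² are arbitrary):

```python
import math

def numeric_derivative(f, x, h=1e-6):
    # Central-difference approximation of df/dx.
    return (f(x + h) - f(x - h)) / (2 * h)

# Composite f(g(x)) with f = sin and g(x) = x^2.
composite = lambda x: math.sin(x ** 2)

x0 = 1.3
# Chain rule: (derivative of f evaluated at g(x)) times (derivative of g at x).
chain = math.cos(x0 ** 2) * (2 * x0)
print(abs(numeric_derivative(composite, x0) - chain))  # tiny
```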

 

I understand what is meant by supposing that "[imath]P_r[/imath] is, for practical purposes, not a function of t", but I'm running into trouble when trying to understand where [imath]e^{iS_rt}[/imath] is coming from exactly.

 

There are a couple of different things to consider. The first one is that we are trying to remove the derivative with respect to t from the equation

 

[imath] K\left\{\int \vec{\Psi}_2^\dagger \frac{\partial}{\partial t}\vec{\Psi}_2dV_2\right\}\vec{\Psi}_1[/imath]

 

in such a way that it lets us integrate over the set of invalid elements, removing the integral. We also don’t want to change the value of the expression

 

[imath]\vec{\Psi}_r^\dagger(t)\cdot\vec{\Psi}_r(t)[/imath]

 

The simplest way to do this is to choose a function that satisfies the equations

 

[imath] f^\dagger(t)\cdot f(t) = 1[/imath]

 

And the equation

 

[imath] \vec\Psi_r^ \dagger (t) \cdot \vec\Psi_r\left\{\frac{\partial}{\partial t}f(t) \right\} +\vec{\Psi}_r^\dagger\cdot{\vec{\Psi}_r(t)}\left\{\frac{\partial}{\partial t}f^\dagger(t) \right\} = 0[/imath]

 

The simplest function that can satisfy these equations is of the form [imath]e^{iS_rt}[/imath]; if you can’t verify that this function satisfies these constraints, just say so.

 

Also, I don't know what the S is in [imath]e^{iS_rt}[/imath]. Just a constant, like K?

 

Yes; more precisely, it can’t be a function of the elements that are going to be integrated over in the next step, or the integral would be more complex and we couldn’t easily integrate over that element.

 

According to the link that DD provided, [imath]\frac{d}{dx}e^{x} = e^{x}[/imath], which implies the above can be written:

 

[imath]e^{-iS_rt} \cdot e^{iS_rt} + e^{-iS_rt} \cdot e^{iS_rt}[/imath]

 

And by now I have no idea how to end up with terms that differ by their sign, I must have gone wrong somewhere already...

 

You have forgotten to bring down the constant in the exponent; remember that one is the negative of the other.
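To see the cancellation concretely, here is a small numerical check (my own sketch; the constant S and the time t are arbitrary):

```python
import cmath

S = 1.7                              # arbitrary real constant standing in for S_r
t = 0.4

psi = cmath.exp(1j * S * t)          # e^(iSt)
psi_dag = cmath.exp(-1j * S * t)     # e^(-iSt), the complex conjugate

# Bringing down the exponent's constant when differentiating:
d_psi = 1j * S * psi                 # d/dt e^(iSt)  = iS e^(iSt)
d_psi_dag = -1j * S * psi_dag        # d/dt e^(-iSt) = -iS e^(-iSt)

term1 = d_psi_dag * psi              # reduces to -iS
term2 = psi_dag * d_psi              # reduces to +iS
print(term1 + term2)                 # the terms differ only in sign, so the sum is ~0
```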


Sorry it took me a while to reply; I was just replying to easy topics over the Christmas holidays, while this requires me to actually scratch my head a bit :)

 

Now I'm back at home though, so time to do some head scratching.

 

I’m going to start here at the end because, if you are having the problem that I suspect you are, I will have to explain it first anyhow.

 

I think that you are having a problem with the chain rule. It can be written out as

 

[imath] \frac{\partial}{\partial x}(f(g(x))=\frac{\partial}{\partial g(x)}f(g(x))\frac{\partial}{\partial x}g(x)[/imath]

 

Pay particular attention to the difference between the left side of the equation and the first term on the right side. On the left side f is considered a function of g(x), which is a function of x; on the right side it is a little different. There, the first g(x) could be replaced with a variable with the same value as g(x) (that is, f is treated as a function of g(x)), while the second one is a function of x. There are other ways that this can be written out, but I suspect that this is probably the simplest notation, and the same notation that DD has been using. You may also note the similarity in appearance to the sum rule.

 

So with this in mind we can write out your equation as

 

[imath] \frac{\partial}{\partial x}(e^{ax})=\frac{\partial}{\partial ax}e^{ax}\frac{\partial}{\partial x}ax=ae^{ax}[/imath]

 

Oh okay, so [imath]e^{ax}[/imath] is like [imath]f(g(x))[/imath] in the sense that the outcome of [imath]e^{ax}[/imath] is a function of [imath]ax[/imath], correct?

 

You might note that I have used as fact that the derivative of ax is a and the derivative of [imath]e^x[/imath] is [imath]e^x[/imath]. If you need a proof of any of these things, tell me and I will come up with proofs for them.

 

With a little thought, I was able to figure that bit out now. Thanks!

 

There are a couple of different things to consider. The first one is that we are trying to remove the derivative with respect to t from the equation

 

[imath] K\left\{\int \vec{\Psi}_2^\dagger \frac{\partial}{\partial t}\vec{\Psi}_2dV_2\right\}\vec{\Psi}_1[/imath]

 

in such a way that it lets us integrate over the set of invalid elements, removing the integral. We also don’t want to change the value of the expression

 

[imath]\vec{\Psi}_r^\dagger(t)\cdot\vec{\Psi}_r(t)[/imath]

 

The simplest way to do this is to choose a function that satisfies the equations

 

[imath] f^\dagger(t)\cdot f(t) = 1[/imath]

 

Well let's see if I was able to figure it out correctly...

 

Actually, shouldn't that be [imath]\frac{\partial}{\partial t} f^\dagger(t)\cdot f(t) = 0[/imath]?

 

At least, where I was at is the shift symmetry over "t":

 

[math] \frac{\partial}{\partial t}P_r(t) = 0[/math]

 

or:

 

[math] \left\{\frac{\partial}{\partial t}\vec{\Psi}_r^\dagger(t)\right\}\cdot\vec{\Psi}_r(t)+\vec{\Psi}_r^\dagger(t) \cdot\left\{\frac{\partial}{\partial t}\vec{\Psi}_r(t)\right\} = 0[/math]

 

So I guess I got it right up to here:

 

[math] \left\{\frac{\partial}{\partial t} e^{-iS_rt}\right\} \cdot e^{iS_rt} + e^{-iS_rt} \cdot \left\{\frac{\partial}{\partial t} e^{iS_rt}\right\}[/math]

 

And according to the information you provided, the next step is:

 

[math] \frac{\partial}{\partial t} e^{-iS_rt} = \left\{\frac{\partial}{\partial (-iS_rt)} e^{-iS_rt}\right\}\frac{\partial}{\partial t}(-iS_rt) = -iS_re^{-iS_rt} [/math]

 

Following that idea through:

 

[math] \left\{\frac{\partial}{\partial t} e^{-iS_rt}\right\} \cdot e^{iS_rt} + e^{-iS_rt} \cdot \left\{\frac{\partial}{\partial t} e^{iS_rt}\right\} = -iS_r + iS_r = 0 [/math]

 

(My logic tells me [imath]e^{x} \cdot e^{-x} = 1[/imath], that's correct?)

 

And the equation

 

[imath] \vec\Psi_r^ \dagger (t) \cdot \vec\Psi_r\left\{\frac{\partial}{\partial t}f(t) \right\} +\vec{\Psi}_r^\dagger\cdot{\vec{\Psi}_r(t)}\left\{\frac{\partial}{\partial t}f^\dagger(t) \right\} = 0[/imath]

 

Hmm, here I need help... I'm not even sure where you got that equation from :I Can you explain it more?

 

Many thanks already for your help.

 

-Anssi


Oh okay, so [imath]e^{ax}[/imath] is like [imath]f(g(x))[/imath] in the sense that the outcome of [imath]e^{ax}[/imath] is a function of [imath]ax[/imath], correct?

 

I think that you have the right idea, but you should remember that it is still a function of x; that is, ax is a function of x. We can, though, find the derivative in steps, the first of which treats it as a function of ax, making it far easier to find than it otherwise would be.

 

Well let's see if I was able to figure it out correctly...

 

Actually, shouldn't that be [imath]\frac{\partial}{\partial t} f^\dagger(t)\cdot f(t) = 0[/imath]?

 

At this point I am bringing your attention to the fact that it will multiply the value of [imath]\vec{\Psi}_r^\dagger(t)\cdot\vec{\Psi}_r(t)[/imath] by 1. This is perhaps more out of convenience than requirement.

 

Following that idea through:

 

[imath]\left\{\frac{\partial}{\partial t} e^{-iS_rt}\right\} \cdot e^{iS_rt} + e^{-iS_rt} \cdot \left\{\frac{\partial}{\partial t} e^{iS_rt}\right\} = -iS_r + iS_r = 0[/imath]

 

(My logic tells me [imath]e^{x} \cdot e^{-x} = 1[/imath], that's correct?)

 

Yes this all looks right to me.
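In case it helps, here is a quick numerical check of that [imath]e^{x} \cdot e^{-x} = 1[/imath] intuition (my own sketch; the constant S and the sample times are arbitrary). The product is 1 no matter what t is, which is exactly what keeps [imath]P_r[/imath] stationary:

```python
import cmath

S = 2.3   # arbitrary constant standing in for S_r
P_values = [cmath.exp(-1j * S * t) * cmath.exp(1j * S * t)
            for t in (0.0, 0.5, 1.0, 10.0)]
# e^(-iSt) * e^(iSt) = e^0 = 1 at every t, so the probability never changes.
print(all(abs(P - 1) < 1e-12 for P in P_values))  # True
```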

 

Hmm, here I need help... I'm not even sure where you got that equation from :I Can you explain it more?

 

This is just the equation

 

[imath]\left\{\frac{\partial}{\partial t}\vec{\Psi}_r^\dagger(t)\right\}\cdot\vec{\Psi}_r(t)+\vec{\Psi}_r^\dagger(t) \cdot\left\{\frac{\partial}{\partial t}\vec{\Psi}_r(t)\right\} = 0[/imath]

 

except that I have factored out the part of [imath]\vec{\Psi}_r(t)[/imath] that is a function of t rather than leaving it as part of the function [imath]\vec{\Psi}_r(t)[/imath]. So there is nothing new going on here; it is just me making what seems to me to be an obvious substitution without saying what I’m doing.


Okay, I think I understand what you are saying in the post above.

 

So let's see...

 

At this point, we must carefully analyze the development of the function f created when we integrated over set #2 in our earlier example. As mentioned at the time, f was a linear weighted sum of alpha and beta operators except for one strange term introduced by the time derivative of [imath]\vec{\Psi}_2[/imath]. Please note that, if [imath]P_r[/imath] is insensitive to [imath]\vec{\Psi}_0[/imath] and stationary in time then so is [imath]P_2[/imath]. This follows directly from the fact that [imath]P_2[/imath] is the probability distribution of the “invalid” ontological elements required to constrain the “valid” ontological elements to what is to be explained. There is certainly no required time dependence if the set to be explained has no time dependence, nor can there be any dependence upon [imath]\vec{\Psi}_0[/imath] if the set “r” can be seen as uninfluenced by [imath]\vec{\Psi}_0[/imath]. This leads to the conclusion that

[math]K\left\{\int \vec{\Psi}_2^\dagger \frac{\partial}{\partial t}\vec{\Psi}_2dV_2\right\}\vec{\Psi}_1=iKS_2\vec{\Psi}_1[/math]

 

So I'm still trying to understand exactly how to get to that [math]iKS_2\vec{\Psi}_1[/math]. You said:

 

There are a couple of different things to consider. The first one is that we are trying to remove the derivative with respect to t from the equation

 

[imath] K\left\{\int \vec{\Psi}_2^\dagger \frac{\partial}{\partial t}\vec{\Psi}_2dV_2\right\}\vec{\Psi}_1[/imath]

 

in such a way that it lets us integrate over the set of invalid elements, removing the integral. We also don’t want to change the value of the expression

 

[imath]\vec{\Psi}_r^\dagger(t)\cdot\vec{\Psi}_r(t)[/imath]

 

The simplest way to do this is to choose a function that satisfies the equations

 

[imath] f^\dagger(t)\cdot f(t) = 1[/imath]

 

And the equation

 

[imath] \vec\Psi_r^ \dagger (t) \cdot \vec\Psi_r\left\{\frac{\partial}{\partial t}f(t) \right\} +\vec{\Psi}_r^\dagger\cdot{\vec{\Psi}_r(t)}\left\{\frac{\partial}{\partial t}f^\dagger(t) \right\} = 0[/imath]

 

The simplest function that can satisfy these equations is of the form [imath]e^{iS_rt}[/imath]; if you can’t verify that this function satisfies these constraints, just say so.

 

So I now understand how [imath]e^{iS_rt}[/imath] satisfies those two equations above, but I'm not exactly sure how it removes the integral from [imath] K\left\{\int \vec{\Psi}_2^\dagger \frac{\partial}{\partial t}\vec{\Psi}_2dV_2\right\}\vec{\Psi}_1[/imath]. It's probably something fairly obvious, I just can't see it...

 

Also just have to make sure, when you say "choose a function", does that essentially mean substitute a given function with that operation over the whole equation?

 

-Anssi


So I now understand how [imath]e^{iS_rt}[/imath] satisfies those two equations above, but I'm not exactly sure how it removes the integral from [imath]K\left\{\int \vec{\Psi}_2^\dagger \frac{\partial}{\partial t}\vec{\Psi}_2dV_2\right\}\vec{\Psi}_1[/imath]. It's probably something fairly obvious, I just can't see it...

 

Notice that the only reason that we cannot do the integration is that there is a derivative with respect to t inside of the integral; if this derivative were not there, then the integral would have the same value as the integral already done, namely 1. Now, if we consider that the only t dependence of [imath]\vec{\Psi}_2[/imath] is in fact [imath]e^{iS_rt}[/imath], we can easily do the differentiation above and arrive at

 

[imath]\frac{\partial}{\partial t}\vec{\Psi}_2 = iS_r\vec{\Psi}_2[/imath]

 

Substituting this back into the equation and moving the constants outside of the integral will allow us to perform the integration.

 

Also just have to make sure, when you say "choose a function", does that essentially mean substitute a given function with that operation over the whole equation?

 

Not exactly, although if I’m understanding you right we do, in fact, do this. What I am saying is that if you were to try some other functions in those equations, you would find that the functions [imath]e^{iS_rt}[/imath] and [imath]e^{-iS_rt}[/imath] satisfy the equations. Notice though that since the sign in the exponent can in fact be considered as part of [imath]S_r[/imath], they are really the same function. So in this case either function will in fact be equivalent to the function [imath]e^{iS_rt}[/imath]; however, if we relaxed the constraints slightly, or if for some reason we chose a particular value for [imath]S_r[/imath], there would be more than one choice for the function to use. Choosing one of these is what I am referring to.


Notice that the only reason that we cannot do the integration is that there is a derivative with respect to t inside of the integral; if this derivative were not there, then the integral would have the same value as the integral already done, namely 1. Now, if we consider that the only t dependence of [imath]\vec{\Psi}_2[/imath] is in fact [imath]e^{iS_rt}[/imath], we can easily do the differentiation above and arrive at

 

[imath]\frac{\partial}{\partial t}\vec{\Psi}_2 = iS_r\vec{\Psi}_2[/imath]

 

I suppose when you say [imath]e^{iS_rt}[/imath] you mean [imath]e^{iS_2t}[/imath]

 

So, it's like this:

 

I understand that:

[math]\frac{\partial}{\partial t} e^{iS_2t} = iS_2e^{iS_2t}[/math]

 

And then simply substituting [imath]e^{iS_2t}[/imath] with [imath]\vec{\Psi}_2[/imath] in that result we get the [imath]iS_2\vec{\Psi}_2[/imath]

 

Substituting this back into the equation and moving the constants outside of the integral will allow us to perform the integration.

 

Okay, so now I think I understand how:

 

[math]K\left\{\int \vec{\Psi}_2^\dagger \frac{\partial}{\partial t}\vec{\Psi}_2dV_2\right\}\vec{\Psi}_1=iKS_2\vec{\Psi}_1[/math]
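To convince myself of this collapse, I tried a numerical sketch (entirely my own construction: the spatial part [imath]\varphi(x)=\sqrt{2}\sin(\pi x)[/imath] on [0,1] is just an arbitrary normalized choice, and S2 is an arbitrary constant). The integral really does collapse to the constant [imath]iS_2[/imath]:

```python
import cmath, math

S2 = 1.9                  # arbitrary constant standing in for S_2
N = 2000                  # number of integration points on [0, 1]
dx = 1.0 / N
xs = [(k + 0.5) * dx for k in range(N)]
# Arbitrary normalized spatial part: integral of |phi|^2 dx = 1.
phi = [math.sqrt(2) * math.sin(math.pi * x) for x in xs]

def Psi2(k, t):
    # Assumed form: all of the t dependence sits in the factor e^(i*S2*t).
    return phi[k] * cmath.exp(1j * S2 * t)

t, h = 0.3, 1e-6
# Midpoint-rule integral of Psi2-dagger times the (numerical) time derivative of Psi2.
integral = sum(
    Psi2(k, t).conjugate() * (Psi2(k, t + h) - Psi2(k, t - h)) / (2 * h) * dx
    for k in range(N)
)
print(abs(integral - 1j * S2) < 1e-3)  # True: the integral collapses to i*S2
```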

 

Onto the next step:

 

There is certainly no required time dependence if the set to be explained has no time dependence, nor can there be any dependence upon [imath]\vec{\Psi}_0[/imath] if the set “r” can be seen as uninfluenced by [imath]\vec{\Psi}_0[/imath]. This leads to the conclusion that

[math]K\left\{\int \vec{\Psi}_2^\dagger \frac{\partial}{\partial t}\vec{\Psi}_2dV_2\right\}\vec{\Psi}_1=iKS_2\vec{\Psi}_1[/math]

 

and that the function “f” may be written [imath]f=f_0 -iKS_2[/imath] where [imath]f_0[/imath] is entirely made up of a linear weighted sum of alpha and beta operators.

 

(I took the liberty of fixing the LaTeX there)

 

So I suppose the reason why “f” can be written [imath]f=f_0 -iKS_2[/imath] is that:

 

The function f must be a linear weighted sum of alpha and beta operators plus one single term which does not contain such an operator. That single term arises from the final integral of the time derivative of [imath]\vec{\Psi}_2[/imath] on the right side of the original representation of the result of integration:

[math]\int \vec{\Psi}_2^\dagger\cdot\frac{\partial}{\partial t}\vec{\Psi}_2dV_2.[/math]

 

Seems to make sense.

 

Onwards:

 

So long as the above constraints are approximately valid, our differential equation for [imath]\vec{\Psi}_0(\vec{x}_1,t)[/imath] may be written in the following form.

[math]\vec{\alpha}_1\cdot \vec{\nabla}\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0\ + iK\left(S_2+S_r\right)\vec{\Psi}_0. [/math]

 

Strangely enough, I was able to figure out that bit too. The only little thing that I don't see clearly is whether [imath]-iKS_2[/imath] can be moved from the left side of the equation to the right side just like that, since it's first placed inside so many parentheses. If you and DD say it is valid, I am willing to take that on faith right now though. Unless you want to write down all the algebra to convince me. (I'm sorry I'm a math dummie :D But I must say it's getting easier little by little)

 

Onwards:

 

For the simple convenience of solving this differential equation, this result clearly suggests that one redefine [imath]\vec{\Psi}_0[/imath] via the definition [imath]\vec{\Psi}_0 = e^{-iK(S_2+S_r)t}\vec{\Phi}[/imath]. If one further defines the integral within the curly braces to be [imath]g(\vec{x}_1)[/imath], [imath]\vec{x}_1[/imath] being the only variable not integrated over, the equation we need to solve can be written in an extremely concise form:

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}\vec{\Phi} = K\frac{\partial}{\partial t}\vec{\Phi}, [/math]

 

Here I must say I am getting a bit lost. I have not scratched my head on it for too long yet, but perhaps you can explain that part to me little bit. I'll let you know if I figure something out...

 

EDIT: Oh I think I figured that bit out too...

 

Thank you very much!

 

-Anssi


I suppose when you say [math]e^{iS_rt}[/math] you mean [math]e^{iS_2t}[/math]

 

So, it's like this:

 

I understand that:

[math]\frac{\partial}{\partial t} e^{iS_2t} = iS_2e^{iS_2t}[/math]

 

And then simply substituting [math]e^{iS_2t}[/math] with [math]\vec{\Psi}_2[/math] in that result we get the [math]iS_2\vec{\Psi}_2[/math]

 

Yes [math]e^{iS_2t}[/math] is what I meant. I think that you have this figured out, all that I think is really important at this point is that you understand how performing the substitution lets us perform the integration.

 

Strangely enough, I was able to figure out that bit too. The only little thing that I don't see clearly is whether [math]-iKS_2[/math] can be moved from the left side of the equation to the right side just like that, since it's first placed inside so many parentheses. If you and DD say it is valid, I am willing to take that on faith right now though. Unless you want to write down all the algebra to convince me. (I'm sorry I'm a math dummie But I must say it's getting easier little by little)

 

We have pretty much already done the math to move it out of the parentheses. The first one just says that it is in the summation; since it makes no difference whether we do the integration or the addition first, we can easily move it out of the first parenthesis. The second one is simply telling us what we are integrating, so by completing the integral of the function we move it out of the second parenthesis. At this point it can be easily moved to the other side of the equation.

 

Here I must say I am getting a bit lost. I have not scratched my head on it for too long yet, but perhaps you can explain that part to me little bit. I'll let you know if I figure something out...

 

EDIT: Oh I think I figured that bit out too...

 

Ok, just in case you are still having some problems I’ll explain what is going on here; it is actually quite straightforward. All that you need to do is perform the substitution that DD has defined, that is, [math]\vec{\Psi}_0 = e^{-iK(S_2+S_r)t}\vec{\Phi}[/math]. Performing this substitution on the right side of the equation turns it into the equation

 

[math]K\frac{\partial}{\partial t} \vec{\Psi}_0 = K\frac{\partial}{\partial t} (e^{-iK(S_2+S_r)t}\vec{\Phi}) [/math]

 

If you perform the differentiation on the right side, you will find that the right sides of the two equations are identical.

 

The left side is of course just a way of rewriting the equation.
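And here is a numerical check of that differentiation (my own sketch: K, the constant standing in for S_2 + S_r, and the test function [imath]\vec{\Phi}(t)[/imath] are all arbitrary choices). The product rule splits the derivative into the exponential's contribution plus the derivative of [imath]\vec{\Phi}[/imath]:

```python
import cmath

K = 1.2                  # arbitrary constant
S_tot = 2.7              # arbitrary constant playing the role of S_2 + S_r
Phi = lambda t: (1 + t) * cmath.exp(0.5j * t)                # arbitrary test function
dPhi = lambda t: (1 + 0.5j * (1 + t)) * cmath.exp(0.5j * t)  # its exact derivative

Psi0 = lambda t: cmath.exp(-1j * K * S_tot * t) * Phi(t)

def numeric_derivative(f, t, h=1e-6):
    # Central-difference approximation of df/dt.
    return (f(t + h) - f(t - h)) / (2 * h)

t0 = 0.9
lhs = K * numeric_derivative(Psi0, t0)
# Product rule: the exponential factor contributes -iK*S_tot; Phi contributes dPhi.
rhs = (-1j * K ** 2 * S_tot) * Psi0(t0) \
    + K * cmath.exp(-1j * K * S_tot * t0) * dPhi(t0)
print(abs(lhs - rhs) < 1e-5)  # True: the two sides agree
```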


Well, I am back. It is certainly nice to be home; after almost two full months away, one really appreciates home sweet home. And I now have “new” eyes: I have had cataract surgery and the improvement is unbelievable. I am quite sorry to see Qfwfq dropping off the forum; I think he was a valuable asset. On the other hand, Bombadil has impressed me with his knowledge of mathematics. I was under the impression that he was only beginning to study the subject.

 

I have read a few of the many posts which have been made while I was gone and I have a great many comments to make; however, I will have to put some thought in how to clearly express the issues behind your confusion; essentially, the rather unorthodox perspective I have on those same issues.

 

Meanwhile, Anssi, with regard to your latest post, let me make these comments.

I suppose when you say [imath]e^{iS_rt}[/imath] you mean [imath]e^{iS_2t}[/imath]
The “r” generally refers to “remaining”, which you seem to presume is being referred to as set #2 (the source of the “2” subscript). Refer back to my opening post on this thread.
What we would like to do is to reduce the number of arguments to something which can be handled: i.e., we want to know the nature of the equations which must be obeyed by a subset of those variables. In an interest towards accomplishing that result, my first step is to divide the problem into two sets of variables: set number one will be the set referring to our “valid” ontological elements (together with the associated tau indices) and set number two will refer to all the remaining arguments. I will refer to these sets as #1 and #2 respectively. (You should comprehend that #1 must be finite and that #2 can possibly be infinite.)
This first separation is done for two reasons: first, to show the logical arguments standing behind the separation and, second, to take explicit advantage of the fact that “valid” ontological elements cannot have identical labels (identical labels would eliminate the possibility that they refer to different ontological elements; this was exactly the reason the additional axis tau was introduced). Further down you should note that I then divide set #1 into two sets. That final separation is into [math]x_1[/math] and all remaining arguments (referred to as set “r” for “remaining”). Time derivatives exist on the right-hand side of both those expressions, and those time derivatives in both cases lack any alpha or beta operator. So, in a sense, the two subscripts are essentially equivalent; they both refer to a very similar circumstance. In the final analysis, the net result of the two terms is replacement of the respective derivatives with the term [math]i(S_2+S_r)[/math]. I apologize for the sloppiness of my presentation and we probably need to discuss the issue further.
The function f must be a linear weighted sum of alpha and beta operators plus one single term which does not contain such an operator.
I am presuming here that you are aware of the fact that “f” is nothing more than a symbolic stand-in for the result of all those terms arising from the integrals over all the arguments except [math]x_1[/math] and t (thus yielding a final differential equation in [math]x_1[/math] and t only).
If the actual function [imath]\vec{\Psi}_2[/imath] were known (i.e., a way of obtaining our expectations for set #2 is known), the above integrals could be explicitly done and we would obtain an equation of the form:

[math] \left\{\sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1. [/math]

The point being that every one of those individual integrals is multiplied by an alpha or a beta operator. Since every individual integral must, in the final analysis, be a function of the arguments which have not been integrated over, the net result (except for that term which arises via the time derivative) is a linear weighted sum of alpha and beta operators: linear because no products of those alpha and beta operators appear, weighted by the actual results of the integrations, and a sum because the result is indeed a sum of such things. Sorry if I seem to be beating to death an issue you already understand, but I think a clear understanding of what is being said is important. It seems to me that “seems to make sense” is a little non-committal for an issue which I feel should be an obvious clear fact.

 

Meanwhile, let us go back to your first post after I left. You were clearly disturbed by exactly what these “positions” in the hypothetical space actually represent.

Hmm, I'm not really sure how to interpret this business of moving indices. I mean, I'm getting confused about how the indices are really identified now (are they positions or elements?).

 

I mean - dropping "tau" for a moment from the conversation - let's say our mapping of a specific "present" includes some elements on the x-axis on positions "1" and "2"; and accordingly we call these elements "1" and "2".

 

Then we have a function which tells us the probability of finding the elements "1" and "2" from a future "present"... ...OR rather, since we have interpreted the data as dust motes that move around, it tells us the probability that SOME dust motes exist in the positions "1" and "2", while the original dust motes from those positions might have - in our interpretation - moved elsewhere?

 

So, basically I'm getting confused with whether an input argument to [math]\vec{\Psi}[/math] is identified as a position on an X-axis, or is it referring to an element that can have a different position on the X-axis...

One of the problems evident here is that people almost universally presume that the meaning of words is embedded in the words themselves (in this case, that is equivalent to presuming that the meanings of the numerical labels are embedded in the numbers themselves). Languages are not static entities; the meanings of words change in subtle ways every generation. Their true meanings (the concept “true” being the listener's interpretation of that meaning) are a direct consequence of the epistemological construct conceived of by that listener. (We presume someone else's understanding is the same as our own when they express conclusions sufficiently similar to our own that we can accept them as consequences of our personal epistemological construct: our personal explanation of the circumstance.)

 

But beyond even that issue, our explanations quite often presume very different labels are referring to what the explanation assumes are identical entities separated by time. For example, a present day pile of assorted trash (commonly referred to archaeologically as a “tell”) can be identified (via the archaeological explanation) to once have been an ancient city. What I am getting at is the fact that it is the relationships and temporal behavior of these labels which yield their definitions.

 

Philosophers most often presume the names they give to “ontological elements” are static and unchanging. This is invariably a static view of reality (and, essentially, a view not requiring the concept of “time”). Our languages certainly do not support such a view; even over only a few short generations, human languages change so much as to preclude common comprehension of the earlier versions. How do you think ancient languages came to be “dead” languages? I love words and I got the biggest kick out of a reference I read some twenty years ago: Piss; an onomatopoeic euphemism used by women for the vulgar term used by men which was not recorded. There is an interesting article in “Science News” concerning how languages change.

 

These indices [math]\vec{x}_i[/math] are numerical labels for the undefined ontological elements standing behind your epistemological construct which is your explanation of reality. It is the epistemological construct itself which defines what these ontological elements are. When you purport to understand the reality you know, you identify these labels (or collections of labels) with those ontological elements defined by your explanation. It is the dynamic behavior of these labels (and that behavior can indeed be static, but “static” is a very limited realm) which your explanation explains. The “time-space” diagram used to display these labels in my dynamic representation is no more than a mechanism for display. What is important here is that there exists no set of ontological elements which cannot be so displayed and likewise there exists no behavior of these ontological elements which violates my mathematical construct.

 

Another issue which seems to be bothering you is my “dust mote” mental model. All I am doing there is expressing exactly what any competent modern physicist would write down as the controlling equation for the quantum mechanical behavior of a collection of massless dust motes of infinitesimal size acting only via contact interactions. Each element would be controlled by momentum conservation (represented by space derivative terms) and energy conservation (represented by the time derivative) plus that contact interaction (the Dirac delta function interaction). His equation turns out to be essentially identical to mine; the only difference being the fact that he might not come up with exactly the form of my interaction term. This is no more than a mental model which includes all possible interpretations of my equation. Any specific epistemological construct merely specifies the ontological identity of the various “dust motes”.

 

Have fun -- Dick


Well, I am back. It is certainly nice to be home; after almost two full months away, one really appreciates home sweet home. And I now have “new” eyes: I have had cataract surgery and the improvement is unbelievable. I am quite sorry to see Qfwfq dropping off the forum; I think he was a valuable asset. On the other hand, Bombadil has impressed me with his knowledge of mathematics. I was under the impression that he was only beginning to study the subject.

 

Welcome back! I hope you had a good vacation; it sounds like you did.

 

I have actually been studying the subject for some time and have gotten to what I think is at least a reasonable level, although, without knowing the order of things, I have perhaps overlooked some things that might be considered obvious.

 

After rereading some of the past posts and some of the discussion in this topic I am starting to wonder about a couple of things that I don’t think have really been brought up before.

 

Firstly, I am wondering if there is any difference in how each element must behave. Meaning, I know that you have shown that the Schrödinger equation can be derived from the fundamental equation if we consider the three constraints used to do so to be close approximations, but just how good an approximation is this? In the fundamental equation, will all elements behave in the same way, or will an element behave differently depending on which element it is (or, perhaps a better way of putting it, depending on the rest of the universe in comparison to the element of interest)? This also leads me to the question: is it possible for two elements to become indistinguishable from one another?

 

Also, I am wondering why exactly you chose the alpha and beta operators the way that you have. I know that you have said before that they are used in different parts of physics, but it seems to me that there should be other ways to define the operators to derive an equation that is equivalent to the fundamental equation; so why is it that you chose the alpha and beta operators the way that you did?

 

Also, I don’t know if you have looked at my last post in “some subtle aspect of relativity”, but after rereading some of that topic again (I have done this a couple of times) I think that the last post is still how I am understanding the issue right now.


Yes, [math]e^{iS_2t}[/math] is what I meant. I think that you have this figured out; all that I think is really important at this point is that you understand how performing the substitution lets us perform the integration.

 

Well, here's how I supposed it goes;

 

Once you get to:

 

[math]K\left\{\int \vec{\Psi}_2^\dagger iS_2\vec{\Psi}_2dV_2\right\}\vec{\Psi}_1[/math]

 

It is valid to just move [imath]iS_2[/imath] out from the integral since its effect is the same be it inside or outside the integral. And when it is out, the value of the integral is by definition 1.
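Just to convince myself that this is legitimate, I tried a quick symbolic sanity check with sympy (entirely my own toy illustration; the real Gaussian stands in for [imath]\vec{\Psi}_2[/imath] and the constant for [imath]iS_2[/imath], which are just placeholder names here):

```python
import sympy as sp

x = sp.symbols('x', real=True)
S2 = sp.symbols('S_2', real=True, positive=True)

# Toy normalized "wave function": a real Gaussian, so psi-dagger equals psi.
psi = sp.pi**sp.Rational(-1, 4) * sp.exp(-x**2 / 2)

# Check normalization: the integral of psi†·psi over all space is 1.
norm = sp.integrate(psi * psi, (x, -sp.oo, sp.oo))

# The constant i*S2 factors straight out of the integral, leaving
# i*S2 times the normalization integral, i.e. i*S2 itself.
lhs = sp.integrate(psi * (sp.I * S2) * psi, (x, -sp.oo, sp.oo))

print(norm)                           # 1
print(sp.simplify(lhs - sp.I * S2))   # 0
```

So the constant factors out, the remaining integral is the normalization (1 by definition), and the bracketed term collapses to the constant itself.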

 

And I suppose the reason the time derivative can't just be moved outside the integral is that it operates on [imath]\vec{\Psi}_2[/imath], so moving them away from each other would invalidate the equation.

 

I'm probably using funny language because I am just trying to learn these concepts on my own, but anyhow, I'm fairly confident I've understood this bit correctly.

 

We have pretty much already done the math to move it out of the parentheses. The first one just says that it is in the summation; since it makes no difference whether we do the integration or the addition first, we can easily move it out of the first set of parentheses. The second one is simply telling us what we are integrating, so by completing the integral of the function we move it out of the second set of parentheses. At this point it can be easily moved to the other side of the equation.

 

I'm sorry but I got really confused about this now... I got confused about what "it is in the summation" means, and what the "order of doing the integration or addition" has to do with the first (inner?) set of parentheses, and I don't know at what point we "complete the integral of the function"... Overall I'm just not understanding the common terminology well enough I guess.

 

Hmm, but this is probably easy to solve, so let's see, I'm just trying to figure out the explicit steps of how to get from here:

 

[math]\vec{\alpha}_1\cdot \vec{\nabla}\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t) -iKS_2 \right] \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0\ + iKS_r\vec{\Psi}_0[/math]

 

To here:

 

[math]\vec{\alpha}_1\cdot \vec{\nabla}\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0\ + iK\left(S_2+S_r\right)\vec{\Psi}_0[/math]

 

And I would like to see clearly why it is valid to simply pick up the [imath]-iKS_2[/imath] and move it to the right side of the equation, which clearly has occurred.

 

Just to be clear, I also have no idea why such a thing would be invalid. I simply don't know what algebraic rules are at play here...

 

Ok, just in case you are still having some problems, I’ll explain what is going on here; it is actually quite straightforward. All that you need to do is perform the substitution that DD has defined, that is, [math]\vec{\Psi}_0 = e^{-iK(S_2+S_r)t}\vec{\Phi}[/math]. Performing this substitution on the right side of the equation turns it into the equation

 

[math]K\frac{\partial}{\partial t} \vec{\Psi}_0 = K\frac{\partial}{\partial t} (e^{-iK(S_2+S_r)t}\vec{\Phi}) [/math]

 

Well, there are a few things I'm not entirely sure of, so let's see if I can walk through it properly. Concentrating on the right side of the equation:

 

[math]K\frac{\partial}{\partial t}\vec{\Psi}_0\ + iK\left(S_2+S_r\right)\vec{\Psi}_0[/math]

 

Substituting [imath]\vec{\Psi}_0[/imath] with [imath]e^{-iK(S_2+S_r)t}\vec{\Phi}[/imath] we get:

 

[math]K\frac{\partial}{\partial t}(e^{-iK(S_2+S_r)t}\vec{\Phi}) + iK\left(S_2+S_r\right)e^{-iK(S_2+S_r)t}\vec{\Phi}[/math]

 

Then concentrating on the differentiation:

[math]\frac{\partial}{\partial t}(e^{-iK(S_2+S_r)t}\vec{\Phi})[/math]

 

I'm not sure, but I suppose that can be written:

[math]\frac{\partial}{\partial t}e^{-iK(S_2+S_r)t}\frac{\partial}{\partial t}\vec{\Phi}[/math]

 

If so, then I can concentrate on the first half:

[math]\frac{\partial}{\partial t}e^{-iK(S_2+S_r)t} = \frac{\partial}{\partial -iK(S_2+S_r)t} e^{-iK(S_2+S_r)t} \frac{\partial}{\partial t} -iK(S_2+S_r)t = -iK(S_2+S_r)e^{-iK(S_2+S_r)t}[/math]
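Just to double-check that differentiation step, here's a quick sympy computation (my own sketch; I'm treating [imath]K[/imath], [imath]S_2[/imath] and [imath]S_r[/imath] as real constants, which is just my assumption for the check):

```python
import sympy as sp

t = sp.symbols('t', real=True)
K, S2, Sr = sp.symbols('K S_2 S_r', real=True, positive=True)

# The exponential factor appearing in the substitution.
expo = sp.exp(-sp.I * K * (S2 + Sr) * t)

# Chain rule: d/dt e^{-iK(S2+Sr)t} = -iK(S2+Sr) e^{-iK(S2+Sr)t}
deriv = sp.diff(expo, t)
expected = -sp.I * K * (S2 + Sr) * expo

print(sp.simplify(deriv - expected))  # 0
```

So at least that part of the computation is right.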

 

Putting that result back into:

[math]K\frac{\partial}{\partial t}(e^{-iK(S_2+S_r)t}\vec{\Phi}) + iK\left(S_2+S_r\right)e^{-iK(S_2+S_r)t}\vec{\Phi}[/math]

 

we get:

 

[math]K \left\{-iK(S_2+S_r)e^{-iK(S_2+S_r)t}\right\} + \left\{iK\left(S_2+S_r\right)e^{-iK(S_2+S_r)t}\right\}\vec{\Phi}\frac{\partial}{\partial t}\vec{\Phi}[/math]

 

 

[math]=[/math]

 

 

[math]K \vec{\Phi} \frac{\partial}{\partial t}\vec{\Phi}[/math]

 

Okay, I've messed up something, I seem to have one extra [math]\vec{\Phi}[/math] left in there and I don't know why... :I

 

Looking at the OP, I was expecting to end up with [imath] K \frac{\partial}{\partial t}\vec{\Phi}[/imath]

 

Little help? :eek_big:

 

-Anssi


The “r” generally refers to “remaining” which you seem to presume is being referred to as set #2 (source of the “2” subscript).

 

Hmmm, no I was not presuming that, I was just pointing out something that looked like a typo in Bombadil's post:

 

...if we consider that the only t dependence of [imath]\vec{\Psi}_2[/imath] is in fact [imath]e^{iS_rt}[/imath]

 

Note back to my opening post on this thread.

This first separation is done for two reasons: first, to show the logical arguments standing behind the separation and second, to take explicit advantage of the fact that “valid” ontological elements cannot have identical labels (identical labels would eliminate the possibility that they refer to different ontological elements; this was exactly the reason the additional axis tau was introduced). Further down you should note that I then divide set #1 into two sets. That final separation is into [math]x_1[/math] and all remaining arguments (referred to as set “r” for “remaining”). Time derivatives exist on the right hand side of both those expressions and those time derivatives in both cases lack any alpha or beta operator. So, in a sense, the two subscripts are essentially equivalent; they both refer to a very similar circumstance. In the final analysis, the net result of the two terms is replacement of the respective derivatives with the term [math]i(S_2+S_r)[/math]. I apologize for the sloppiness of my presentation and we probably need to discuss the issue further.

 

Conveniently, as you can see from my previous post, I am currently at that exact point of the presentation :) Certainly I have had to double check which set is which a few times along the way, but overall I don't think it is too sloppy. But then I don't see everything very clearly and I have to really concentrate to keep these things in order in my head...

 

I'll have to reply to the rest of your post later...

 

-Anssi


Hi Anssi, the simplest thing to do here is to specify exactly all the steps required. Starting with the first expression:

[math]\vec{\alpha}_1\cdot \vec{\nabla}\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t) -iKS_2 \right] \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0\ + iKS_r\vec{\Psi}_0[/math]

 

The first step is to recognize that the sum above [imath]\left(\sum_{i=1}^n\right)[/imath] is over the index “i”, together with the fact that there is no “i” in the term [imath]iKS_2[/imath] (something easily missed as there is indeed an “i” in that term). This is a direct consequence of the fact that “i” has different meanings here: the first instance is “i” as an index and the second instance is [imath]i=\sqrt{-1}[/imath]. Now anyone really familiar with mathematics would never even notice this mixed usage, as the second term arose from differentiating the expression [imath]e^{iS_2t}[/imath], which only makes sense if [imath]i=\sqrt{-1}[/imath]; but this is, nonetheless, a dangerous way to use mathematical conventions.

 

But, once you realize that [imath]iKS_2[/imath] is not part of the collection of terms being summed, you should realize that our equation can just as easily be written

[math]\vec{\alpha}_1\cdot \vec{\nabla}\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right]\vec{\Psi}_rdV_r+\int\vec{\Psi}_r^\dagger\cdot \left( -iKS_2\right) \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0\ + iKS_r\vec{\Psi}_0[/math]

 

Of course, you probably noticed that [imath]f_0[/imath] also has no reference to the index “i” and is thus not part of the sum either, but we aren't really concerned with moving that term. The next step is to recognize that [imath]iKS_2[/imath] is just a number and can be factored out of the integral where it now stands. Thus our equation can be written

[math]\vec{\alpha}_1\cdot \vec{\nabla}\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right]\vec{\Psi}_rdV_r+\left( -iKS_2\right)\int\vec{\Psi}_r^\dagger\cdot \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0\ + iKS_r\vec{\Psi}_0[/math]

 

We now have the integral [imath]\int\vec{\Psi}_r^\dagger\cdot \vec{\Psi}_r dV_r[/imath] standing as a united element all by itself. This we know to be identical to unity. Substituting “1” for this term we now have [imath]iKS_2[/imath] standing by itself.

[math]\vec{\alpha}_1\cdot \vec{\nabla}\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right]\vec{\Psi}_rdV_r+\left( -iKS_2\right)\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0\ + iKS_r\vec{\Psi}_0[/math]

 

It should be clear to you that [imath]iKS_2[/imath] is now nothing more than one of several terms bracketed by the curly brackets “{...}”. The only reason it is inside those brackets is because, like all the other terms inside those brackets, it is multiplied by [imath]\vec{\Psi}_0[/imath]: i.e., we could just as easily have written

[math]\vec{\alpha}_1\cdot \vec{\nabla}\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right]\vec{\Psi}_rdV_r\right\}\vec{\Psi}_0+\left( -iKS_2\right)\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0\ + iKS_r\vec{\Psi}_0[/math]

 

At this point [imath]( -iKS_2)\vec{\Psi}_0[/imath] stands all alone as a specific term added to the left hand side of the equation. Adding [imath]iKS_2\vec{\Psi}_0[/imath] to both sides of the equation results in exactly what we were looking for.

[math]\vec{\alpha}_1\cdot \vec{\nabla}\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0\ + iK\left(S_2+S_r\right)\vec{\Psi}_0[/math]

 

I think you should be able to follow that.

 

Good luck -- Dick


Hi Bombadil, I notice a perspective central to your posts which is quite common. Though you know quite a bit of mathematics, you don't seem to believe in the validity of the subject. What I mean by that is the fact that you seem to require an “understandable example” of the mathematical relationship before you will accept the mathematical relationship as meaningful. Any deduced mathematical expression is as valid as the underlying postulates behind the deduction (barring an error in the deduction itself). If one understands mathematics, reduction to an example can be counterproductive as it often constrains one's view of the possibilities. A mathematical expression means what the mathematical expression says and the fact that one often cannot mentally comprehend the range and application on an intuitive level is quite beside the point.

 

It might be worthwhile for you to read the thread where I introduce the dichotomy I call “logical” and “squirrel” thought.

Firstly I am wondering if there is any difference in how each element must behave.
How each element behaves is an issue answered by the epistemological construct (the explanation) which is being represented by [math]\vec{\Psi}[/math], and my fundamental equation was specifically constructed to place absolutely no constraints upon that question.
Meaning, I know that you have shown that the Schrödinger equation can be derived from the fundamental equation if we consider the three constraints used to do so to be close approximations but just how good an approximation is this.
Here I get the distinct impression that you ceased to read the original post to this thread as soon as I concluded Schrödinger's equation was an approximate solution to my equation. If you read on, you will discover that I discuss the issue of the constraints I placed on the circumstance. In the final analysis, there is no approximation except for the final one, and that final approximation is an absolutely well known approximation without which Schrödinger's equation is known to be invalid. To reiterate,
This is a truly astounding conclusion. The fact that the probability of seeing a particular number in a stream of totally undefined numbers can be deduced to be found via Schroedinger's equation, no matter what the rule behind those numbers might be, is totally counter intuitive. It is extremely important that we check the meaning of the three constraints I placed on the problem in terms of the conclusion reached.

 

The first two are quite obvious. Recapping, they consisted of demanding that the data point under consideration had negligible impact on the rest of the universe and that the pattern representing the rest of the universe was approximately constant in time. These are both common approximations made when one goes to apply Schroedinger's equation: that is, we should not be surprised that these approximations made life convenient. What is important is that Schroedinger's equation is still applicable to physical situations where these constraints are considerably relaxed. In other words, the constraints are not required by Schroedinger's equation itself.

 

The serious question then is, what happens to my derivation when those constraints are relaxed. If one examines that derivation carefully, one will discover that the only result of these constraints was to remove the time dependent term from the linear weighted sum expressed by g(x). If this term is left in, g(x) will be complicated in three ways: first, the general representation must allow for time dependence; second, the representation must allow for terms proportional to [imath]\frac{\partial}{\partial x}[/imath] and, finally, the resultant V(x) will be a linear sum of the alpha and beta operators.

 

The time dependence creates no real problems: V(x) merely becomes V(x,t). The terms proportional to [imath]\frac{\partial}{\partial x}[/imath] correspond to velocity dependent terms in V and, finally, retention of the alpha and beta operators essentially forces our deductive result to be a set of equations, each with its own V(x,t). All of these results are entirely consistent with Schroedinger's equation; they simply require interactions not commonly seen at the introductory level. Inclusion of these complications would only have served to obscure the fact that what was deduced was, in fact, Schroedinger's equation.

The final constraint is no more than the fact that the total energy of the element being represented by Schroedinger's equation must be approximately given by [imath]E=mc^2[/imath] where m is explicitly the rest mass of that element: i.e., a constant which may be subtracted from the total energy against which the kinetic energy is a trivial quantity. All this means is that we are dealing with a non-relativistic problem. As I said, Schroedinger's equation is known to be invalid if the element being represented is relativistic.

 

The net result of all this is that there are no approximations made in the deduction of Schroedinger's equation.
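To put a rough number on that non-relativistic constraint, here is a small back-of-the-envelope calculation (my own illustration; the electron at one percent of light speed is just a convenient stand-in for any non-relativistic element):

```python
# Rough comparison of kinetic energy to rest energy for a slow electron.
# Illustrative only: the point is that KE is trivial against mc^2.

c = 2.998e8          # speed of light, m/s
m = 9.109e-31        # electron rest mass, kg

v = 0.01 * c         # a "fast" but still thoroughly non-relativistic speed

rest_energy = m * c**2               # E = mc^2
kinetic_energy = 0.5 * m * v**2      # classical KE

ratio = kinetic_energy / rest_energy
print(f"KE / mc^2 = {ratio:.2e}")    # 5.00e-05
```

Even at a percent of light speed, the kinetic energy is only a few hundredths of a percent of the rest energy, which is exactly the sense in which it is a "trivial quantity" to be measured against the constant [imath]mc^2[/imath].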

In the fundamental equation, will all elements behave in the same way, or will an element behave differently depending on which element it is (or, perhaps a better way of putting it, depending on the rest of the universe in comparison to the element of interest)? This also leads me to the question: is it possible for two elements to become indistinguishable from one another?
Again, you are asking questions about that epistemological construct (the explanation) represented by [math]\vec{\Psi}[/math], and the deduction of my fundamental equation was explicitly specified to place utterly no constraints on that epistemological construct. In addition to that, Schroedinger's equation has to do with a single element; the entire consequences of the rest of the universe (or at least your explanation's presumptions on that subject) are embedded in that function “V”: i.e., there is but one element being discussed here.
Also, I am wondering why exactly you chose the alpha and beta operators the way that you have. I know that you have said before that they are used in different parts of physics, but it seems to me that there should be other ways to define the operators to derive an equation that is equivalent to the fundamental equation; so why is it that you chose the alpha and beta operators the way that you did?
The only property embodied by my alpha and beta operators is that they anti-commute. Other than that, they are nothing but constants. If you were aware of all the intricacies of modern physics, you would know that anti-commutation plays a role in mathematically describing the fine details of almost all fundamental particles. The phenomenon of anti-commutation stands behind every application of the uncertainty principle.

 

I am sure you are aware of the “spin” characteristics of fundamental particles; that characteristic arises from “spinors” which are represented by anti-commuting operators. The idea first arose in quantum mechanics when it was seen that the classical concept of angular momentum turned out to be an anti-commutative operation when represented in quantum mechanics. That is why they speak of spin angular momentum, but the kind of relationships embodied in spin turn out to permeate throughout modern particle physics: “strangeness”, “charm”, “color” (names physicists have given to similar behavior not associated with angular momentum) all display behavior analogous to spin. I am sure you have heard of up and down quarks; that kind of behavior is quite easily represented mathematically by anti-commuting phenomena. All this is to say that modern physics without anti-commutation just doesn't seem to be a possibility: i.e., avoiding such things makes no sense at all.
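As a concrete instance of such anti-commuting operators, the familiar Pauli spin matrices anti-commute pairwise while each squares to the identity; a quick numpy check (my own illustration; the alpha and beta operators above are defined more abstractly, but the algebra is the same kind of thing):

```python
import numpy as np

# Pauli matrices: the standard anti-commuting spin-1/2 operators.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def anticommutator(a, b):
    """Return {a, b} = ab + ba."""
    return a @ b + b @ a

# Distinct Pauli matrices anti-commute: {sigma_i, sigma_j} = 0 for i != j ...
print(np.allclose(anticommutator(sigma_x, sigma_y), 0))  # True
print(np.allclose(anticommutator(sigma_y, sigma_z), 0))  # True

# ... while each squares to the identity: they behave as constant matrices,
# not as functions of any of the arguments.
print(np.allclose(sigma_x @ sigma_x, np.eye(2)))         # True
```
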

 

You say that “it seems to [you] that there should be other ways to define the operators to derive an equation that is equivalent to the fundamental equation”; if that is the case, lay it out for me.

Also, I don’t know if you have looked at my last post in “some subtle aspect of relativity”, but after rereading some of that topic again (I have done this a couple of times) I think that the last post is still how I am understanding the issue right now.
Yes I have looked at your last post in that thread and my impression is that you just don't understand the issue as it has been handed to you. I have posted a more explicit explanation of my reaction to that thread.

 

Have fun -- Dick

