
Deriving Schrödinger's Equation From My Fundamental Equation



#35 Doctordick


Posted 09 October 2008 - 01:33 PM

Okay, hmmm... I suppose I should have more physics knowledge to understand exactly what you mean (I don't understand why angular momentum of an entity is given in such and such manner in QM, or what it means exactly), but I also suppose it is not important at this stage...(?)

Yeah, it's physics knowledge but it's not really all that difficult. Conventionally, momentum means a tendency to keep going: i.e., it takes a force to slow it down or speed it up. Angular momentum has to do with the same phenomenon except that one is talking about rotation: it takes torque to speed up or slow down rotation. In Newtonian physics, momentum (often represented by the letter “p”) is given by [imath]\vec{p}=m\vec{v}[/imath] and angular momentum is given by [imath]\vec{L}=\vec{r}\times \vec{p}[/imath]. Note that, in a rotational system, velocity is equal to [imath]r \omega[/imath] where omega is the angular velocity. Think of a skater in a spin; as they draw their arms in, they reduce r, so constant angular momentum requires their angular velocity (their spin) to increase.
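
The skater example can be put into numbers (a toy calculation with made-up values, just to see the [imath]L = mr^2\omega[/imath] bookkeeping):

```python
# Toy numbers (made up for illustration): a skater's hand as a point mass.
m = 5.0              # kg, effective mass of the arm/hand
r1, w1 = 0.9, 6.0    # arms extended: radius (m) and angular velocity (rad/s)

# L = m r^2 omega, since |p| = m v and v = r omega for circular motion.
L = m * r1**2 * w1

r2 = 0.3                  # arms pulled in: radius shrinks by a factor of 3
w2 = L / (m * r2**2)      # constant L fixes the new angular velocity

print(round(w2, 6))  # 54.0 -- a 3x smaller radius means a 9x faster spin
```

Since L is held fixed while r shrinks by 3, omega must grow by 3 squared.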

At any rate, angular momentum is a vector quantity (normally thought of as pointing along the axis of rotation). Thus the angular momentum of the skater's right hand (given he is at the origin of the coordinate system) as he looks in the x direction is given by y times the momentum of his hand in the x direction. The angular momentum of his left hand is -y times the momentum of his hand in the x direction; since that momentum is exactly the negative of the right hand's, the two angular momenta add together. Converting these representations into quantum mechanics (where momentum in the x direction is represented, up to constant factors, by [imath]\frac{\partial}{\partial x}[/imath]), what we are talking about is [imath]y\frac{\partial}{\partial x}[/imath].

When you mess with this mathematically, you will discover that the x, y and z components of angular momentum anticommute with one another. This is exactly what lies behind the idea of “spin” of elementary particles. As I said, we will get to that issue when I derive Dirac's equation. These alpha and beta operators give rise to both “spin” phenomena and electromagnetic phenomena. (Just put this in to keep you interested :bouquet: )
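
That anticommutation can be seen concretely in the standard Pauli spin matrices (the usual spin-1/2 representation, not anything specific to this derivation):

```python
import numpy as np

# The standard Pauli spin matrices (the usual spin-1/2 representation).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Check that distinct components anticommute: {a, b} = ab + ba = 0.
for a, b in [(sx, sy), (sy, sz), (sz, sx)]:
    print(np.allclose(a @ b + b @ a, 0))  # True for each pair
```

(Each pair anticommutes; the same matrices also satisfy the commutation relations that define angular momentum.)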

The chosen "particular specific operator" was denoted as [imath]\alpha_{qx}[/imath], I suppose if it was explicitly stated it could be, say, [imath]\alpha_{3x}[/imath], i.e. it just refers to one specific x in the input arguments.

The alpha operator is a vector operator: i.e., [imath]\vec{\alpha}_i = \alpha_{ix}\hat{x}+\alpha_{iy}\hat{y}+\alpha_{iz}\hat{z}+\alpha_{i\tau}\hat{\tau}[/imath]. Each one of those alpha operators is a unique alpha operator; there are four operators associated with each vector alpha, and a vector alpha associated with each particular [imath]\vec{x}_i[/imath]. When we set the constraint that [imath]\sum_i\vec{\alpha}_i\vec{\Psi}=0[/imath], it means that each component of that vector sums to zero.

But if it's a single specific index, then I don't understand what does it mean to "sum over q", i.e. how does one do a "sum over one specific x"? I'd expect to see just one term in that sum.

That x refers to the fact that we are summing over all of the alpha operators associated with the x axis; we are not summing over x. The letter q simply stands for the index of a particular alpha operator. I use a different letter because I cannot use i: the expression is already being summed over all i, and I am simply multiplying by a single arbitrary alpha operator (of course, in the sum over i, that same operator will occur once, but I don't know where because I haven't told you the value of q).

... "q" refers to a specific index, but then on the other hand it also refers to a summation index? Where am I getting it wrong?

It begins life as a simple index; it doesn't serve as a summation index until I decide to sum over that index.

(I suspect it's missing by accident since your rearrangement implies it was supposed to be there... if I know my math at all... which I may not :D )

You are absolutely correct :embarassed: . I have edited the thing to put it in and given you credit for spotting it.

I think I understand that too now, except for the little strangeness regarding k & l referring to specific elements first, but then them being used to perform a sum where they appear as summation indices suddenly... If I didn't know to look at them as a summation indices suddenly (from all our conversations), I'd see a very short explicit sum in my mind :/

This is exactly the same as the earlier sum over q. Initially k & l are just indices referring to some specific beta operator. They do not become summation indices until I decide to sum over them.

Yeah, I think I have a pretty decent idea of this step, I just hope you can still clear up my uneasiness with those "sums over specific indices" or how should I put it... :I

Hopefully I have cleared it up. If not, let me know and I will make another attempt.

Sorry I have been so slow, I have been spending a lot of time following the economic crises. Our retirement account has shrunk by about fifteen percent since last year; a bit better than the DOW but still a troubling issue. I keep saying that I am not really concerned with the “value” of my holdings but rather with the return. What I am really afraid of is a collapse of earnings or massive inflation, effectively the same thing. I suspect Christmas will be the breaking point. If the US gets through Christmas without a complete collapse I will be a happy man.

How are things in Finland? :confused:

Have fun -- Dick

#36 AnssiH


Posted 12 October 2008 - 01:51 PM

Sorry I have been so slow, I have been spending a lot of time following the economic crises.


Don't worry about that. I've been somewhat busy the whole weekend too, thought I'd give a quick reply nevertheless;

Yeah, it's physics knowledge but it's not really all that difficult.

.
.
.
When you mess with this mathematically, you will discover that the x, y and z components of angular momentum anticommute with one another. This is exactly what lies behind the idea of “spin” of elementary particles. As I said, we will get to that issue when I derive Dirac's equation. These alpha and beta operators give rise to both “spin” phenomena and electromagnetic phenomena. (Just put this in to keep you interested :bouquet: )


Okay, I would have had questions about that (I don't know why things are expressed as partials in QM), but I suppose they can wait till we get there.

But yes, it's interesting if you've found some epistemological reasons behind the concepts of quantum mechanical "spin" & electromagnetism.

The alpha operator is a vector operator: i.e., [imath]\vec{\alpha}_i = \alpha_{ix}\hat{x}+\alpha_{iy}\hat{y}+\alpha_{iz}\hat{z}+\alpha_{i\tau}\hat{\tau}[/imath]. Each one of those alpha operators is a unique alpha operator; there are four operators associated with each vector alpha, and a vector alpha associated with each particular [imath]\vec{x}_i[/imath]. When we set the constraint that [imath]\sum_i\vec{\alpha}_i\vec{\Psi}=0[/imath], it means that each component of that vector sums to zero.


Right, okay... (btw, I think in this thread [imath]\vec{\alpha}_i[/imath] has been so far defined simply as [imath]\alpha_{ix}\hat{x}+\alpha_{i\tau}\hat{\tau}[/imath]; though you did mention somewhere that this treatment can be validly expanded into more dimensions. Just thought I'd say that out loud in case someone finds the sudden addition of y & z components confusing)

It begins life as a simple index; it doesn't serve as a summation index until I decide to sum over that index.
.
.
.

This is exactly the same as the earlier sum over q. Initially k & l are just indices referring to some specific beta operator. They do not become summation indices until I decide to sum over them.

Hopefully I have cleared it up. If not, let me know and I will make another attempt.


Actually I was already writing further questions to you about this, but while doing it I realized what it means, and what was throwing me off. I did understand before how the definition [imath]\sum_i \vec{\alpha}_i \vec{\Psi} = 0[/imath] meant that also [imath]\sum_q\alpha_{qx}\vec{\Psi} = 0[/imath], but since I looked at that [imath]\alpha_{qx}[/imath] as a single x-component from a single specific alpha, it was very strange to me that q could also be used as a summation index suddenly. But so essentially you are just using the "q" as a summation index since it is conveniently already used in the equation at all the right places.

Yeah I think I get it now; I feel like I got a rock out of my shoe.

Our retirement account has shrunk by about fifteen percent since last year; a bit better than the DOW but still a troubling issue. I keep saying that I am not really concerned with the “value” of my holdings but rather with the return. What I am really afraid of is a collapse of earning or massive inflation, effectively the same thing. I suspect Christmas will be the breaking point. If the US gets through Christmas without a complete collapse I will be a happy man.

How are things in Finland? :confused:


Well certainly there are some small repercussions, but I guess at least right now people are not expecting Finnish banks to get in trouble. They are preparing for it though, as is the whole European Union. If some bank somewhere finds itself in trouble, I suppose the plan is that the government of the given country backs the bank up and ends up owning a corresponding share.

In the EU they are trying to take into account the lessons learned from the banking crisis we had in Finland in the early 90's. It was pretty severe; it cost us 8% of GDP to keep the banks afloat (and caused many personal bankruptcies). Since then the Finnish banks have been kept in check a little better, I guess.

Helsingin Sanomat - International Edition - Business & Finance

or just google "finnish banking crisis"

-Anssi

#37 Doctordick


Posted 13 October 2008 - 08:55 PM

Okay, I would have had questions about that (I don't know why things are expressed as partials in QM), but I suppose they can wait till we get there.

My answer to that question should be obvious. The partials arise through shift symmetry on the arguments from which you get your expectations; physicists have a different take on the issue: they presume that if it works it must be right, and it does indeed work. Actually there is a pretty long history behind the issue. When Newton came up with his ideas, they included the idea of an “inertial” coordinate system, the coordinate system in which his equations are valid. If you are in an inertial coordinate system, F=ma is a valid relationship (so long as relativistic effects can be ignored).

Of interest is the fact that rotating coordinate systems are not inertial systems: in a non-inertial system, F=ma is invalid. If you are in a rotating coordinate system (think of sitting on a playground merry-go-round) and you drop a ball, it will not appear to go straight down; instead, it will appear to accelerate towards the outside of the merry-go-round along a curved path (it appears to accelerate without any real force being applied). One can still use Newton's equations by presuming the existence of unseen forces which cause these accelerations. Physicists used to call such forces “pseudo forces” but I have been advised that they no longer use the term. Centrifugal and Coriolis forces (the latter control the winds around lows and highs in weather systems) are typical cases of these pseudo forces.

The interesting thing about “pseudo forces” is that they are always proportional to the mass of the accelerating object (that is, the actual path followed by the object has nothing to do with its mass thus heavier things must see stronger pseudo forces). This is true for the very simple reason that the objects are actually going in a straight line; it is the coordinate system which is accelerating and the mass of the object apparently being disturbed has nothing to do with why or how the coordinate system is accelerating. Think about setting a cup of coffee on the dashboard of your car while driving on a bumpy or curved road. Using the car itself as your reference frame, “pseudo forces” are very apt to knock the coffee cup over. Actually the coffee cup is just not accelerating, it's the car that's bouncing around.
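
A minimal numeric sketch of that point (illustrative numbers only, nothing from the thread): watch an unaccelerated object from a rotating frame, and note that the apparent acceleration comes out independent of the object's mass, which is why the pseudo force m*a must be proportional to m.

```python
import numpy as np

# Illustrative numbers only: an object coasting in a straight line in the
# inertial frame, described from a frame rotating at angular velocity w.
w = 1.0                                   # rad/s
t = np.linspace(0.0, 2.0, 2001)
x = 1.0 + 0.5 * t                         # straight, unaccelerated motion
y = np.zeros_like(t)

# The same path expressed in the rotating frame (rotate by -w*t):
xr = x * np.cos(w * t) + y * np.sin(w * t)
yr = -x * np.sin(w * t) + y * np.cos(w * t)

# Apparent acceleration in the rotating frame, by numerical differentiation:
dt = t[1] - t[0]
ax = np.gradient(np.gradient(xr, dt), dt)

# Nothing above referenced a mass: the apparent acceleration is the same for
# any object on this path, so the "force" m*a explaining it must scale with m.
print(abs(ax[1000]) > 1.0)  # True: the coasting object looks accelerated here
```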

At any rate, it was the fact that “pseudo forces” are always proportional to the mass that led theorists to hypothesize that gravity, being proportional to mass, was a pseudo force. They began to look for a specialized coordinate system which would make gravity a pseudo force. A lot of mathematical work went into trying to solve that problem. There were all kinds of advances made in mathematical physics related to changing to strange coordinate systems. This led to general theorems related to general transformations of coordinate systems. Many physics problems which included complex extraneous forces (constrained motion) were solved by finding coordinate systems where the solution of the motion was F=ma (coordinate systems which made those extraneous forces into pseudo forces). In fact a lot of work revolved around finding transformations from one coordinate system to another.

They never did find a coordinate system which would make gravity a pseudo force, but some of the mathematics which came out of the effort became fundamental to the modern interpretation of classical mechanics. I am referring here to Lagrangian mechanics and Hamiltonian mechanics. If you look at those two Wikipedia sites you will find them talking about “generalized coordinates”. This concept of generalized coordinates arose in that search for specialized coordinate systems which expressed the constraints on systems in different forms. You might scan the pages but I wouldn't worry about understanding them. I only bring them up because they go a long way towards answering your question as to how modern physics came upon those differentials you asked about.

Of course my classes in classical mechanics were over forty years ago and I have forgotten a lot but I vaguely remember something about “solution spaces” which looked a lot like quantum mechanics. But mathematics is a very powerful way to express required logical relationships, things which otherwise would not be obvious at all. Essentially, I think modern physics is held to be correct for the simple reason that it works! If you get down to actual facts, they have no other defense of their beliefs. I am not saying their beliefs are wrong, I am simply providing them with a logical defense of their beliefs. That is the exact reason why I think what I have done is important: the hard scientists actually put forward nothing beyond “it works, at least as far as they know!”

But yes, it's interesting if you've found some epistemological reasons behind the concepts of quantum mechanical "spin" & electromagnetism.

In essence, I am working with relationships already known by physicists; the only real difference is that my reasons for introducing these anti-commuting operators are fundamentally different, from a logical perspective. I do not defend my work with experiment as it is in no way theoretical; it is instead a logical deduction based upon how one should rationally approach the idea of building an epistemological construct on an unknown ontology; an issue in which no one, to my knowledge, has even a passing interest.

... you did mention somewhere that this treatment can be validly expanded into more dimensions. Just thought I'd say that out loud in case someone finds the sudden addition of y & z components confusing)

That issue is actually quite simple. When you analyze totally undefined ontological elements (via the numerical labels I have proposed) the constraints created to yield the correct past (those invalid ontological elements) are the source of all interactions (except for Pauli exclusion degeneracy pressure, which is another interesting issue). In essence, one can choose to look at these undefined ontological elements as actually labeled by a collection of numbers: i.e., the ontological basis can be viewed as independent collections of labels. All that happens is that each of these independent collections must obey its own fundamental constraint. The dimensionality of the final representation is a totally open issue. I say we, as relatively primitive minds, stop at a three-dimensional picture because that is the simplest picture to yield truly useful results. There is another subtle definition-of-dimensionality issue embedded in that idea which we will get to eventually (if I live long enough ;) ).

... it was very strange to me that q could also be used as a summation index suddenly.

An index is an index; it is something which identifies what you are referring to. The fact that it is called a summation index when one does a sum over the aspect being defined by that index in no way changes the character of the “index”.

But so essentially you are just using the "q" as a summation index since it is conveniently already used in the equation at all the right places.

No, it has become a summation index by virtue of the simple fact that I have summed the equation over all possibilities for that index.
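
The index bookkeeping can be mimicked in a toy computation (made-up values, purely illustrative; nothing here depends on the actual alpha operators):

```python
# A toy illustration of how an index becomes a summation index. Suppose an
# equality "E_q = 0" holds for every value of the index q:
E = {q: 0.0 for q in range(1, 6)}

# q begins life as a plain index -- we can refer to any one specific equation:
print(E[3])  # 0.0

# It becomes a summation index only once we decide to sum over it, and the
# summed equation inherits the truth of each individual one:
total = sum(E[q] for q in E)
print(total)  # 0.0
```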

I read the Business & Finance link you gave and it appears Finland has been careful. Saturday I went to the groundbreaking for a new library being built here and, while we were there, I talked to the contractor (and a couple of others) about the US stock market crash. When they asked me what I thought about the expectations for Monday, I said I had a feeling that the thing had bottomed out Friday and that it would start coming back next week, but that it could very well collapse again if Christmas sales don't come up to snuff. I was expecting a pretty flat result for today, so the upturn which actually occurred was a nice surprise. I told my wife that they will probably think I am some kind of guru now ;) .

Have fun --Dick

#38 AnssiH

AnssiH

    Understanding

  • Members
  • 790 posts

Posted 26 October 2008 - 12:47 PM

Hi, sorry about the delays. Time to get back to dissecting the OP

If the actual function [imath]\vec{\Psi}_2[/imath] were known (i.e., a way of obtaining our expectations for set #2 is known), the above integrals could be explicitly done and we would obtain an equation of the form:


[math] \left\{\sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1. [/math]

.
.
.
The above is an example of the kind of function the indices on our valid ontological elements must obey; however, it is still in the form of a many body equation and is of little use to us if we cannot solve it. In the interest of learning the kinds of constraints the equation implies, let us take the above procedure one step farther and search for the form of equation a single index must obey (remember the fact that we added invalid ontological elements until the index on any given element could be recovered if we had all n-1 other indices). We may immediately write [imath]P_1[/imath](set #1) = [imath]P_0(\vec{x}_1,t)P_r[/imath](remainder of set #1 given [imath]\vec{x}_1[/imath],t). Note that [imath]\vec{x}_1[/imath] can refer to any index of interest as order is of no significance. Once again, we can deduce that there exist algorithms capable of producing [imath]P_0[/imath] and [imath]P_r[/imath]; I will call these functions [imath]\vec{\Psi}_0[/imath] and [imath]\vec{\Psi}_r[/imath] respectively. It follows that [imath]\vec{\Psi}_1[/imath] may be written as follows:


[math]\vec{\Psi}_1(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n, t)= \vec{\Psi}_0(\vec{x}_1,t)\vec{\Psi}_r(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n, t).[/math]


If I make this substitution in the earlier equation for [imath]\vec{\Psi}_1[/imath], I will obtain the following relationship:


[math]\left\{\sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right\}\vec{\Psi}_0\vec{\Psi}_r = K\frac{\partial}{\partial t}(\vec{\Psi}_0\vec{\Psi}_r). [/math]


Once again I point out that [imath]\vec{\Psi}_r[/imath] constitutes the context for [imath]\vec{\Psi}_0(\vec{x}_1,t)[/imath]. Once again, I will take the position that, if we know the flaw-free explanation represented by [imath]\vec{\Psi}[/imath], we know our expectations for the set of indices two through n, set “r”: i.e., we know [imath]\vec{\Psi}_r[/imath] (the context). As before, if we now left multiply the above equation by [imath]\vec{\Psi}_r^\dagger[/imath] (forming the inner or dot product with the algebraically modified [imath]\vec{\Psi}_r[/imath]) and integrate over the entire set of arguments referred to as set “r” (the remainder after [imath]\vec{x}_1[/imath] has been specified), we will obtain the following result:


[math]\vec{\alpha}_1\cdot \vec{\nabla}_1\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0\ + K\left\{\int \vec{\Psi}_r^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_r dV_r \right\}\vec{\Psi}_0. [/math]


With some pen and paper, and applying the information from post #7, I think I was able to walk through the algebra that leads to that last equation in the quote. Although it did raise one question that I did not know to ask earlier. Hmm, I think I should just put down all that algebra, so you can tell me if I have the right idea;

Left multiply by [imath]\vec{\Psi}_r^\dagger[/imath] and integrate over the "remainder" set "r":


[math]\int\vec{\Psi}_r^\dagger \cdot \left\{\sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right\}\vec{\Psi}_0\vec{\Psi}_r dV_r = \int\vec{\Psi}_r^\dagger \cdot K\frac{\partial}{\partial t}(\vec{\Psi}_0\vec{\Psi}_r) dV_r[/math]


Separate that single index from the sum:


[math]\int\vec{\Psi}_r^\dagger \cdot \left\{\sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i \right\}\vec{\Psi}_0\vec{\Psi}_r dV_r = \int\vec{\Psi}_r^\dagger \cdot ( \vec{\alpha}_1 \cdot \vec{\nabla}_1 \vec{\Psi}_0 \vec{\Psi}_r ) dV_r + \int\vec{\Psi}_r^\dagger \cdot \left\{\sum_{i=2}^n \vec{\alpha}_i \cdot \vec{\nabla}_i \right\}\vec{\Psi}_0\vec{\Psi}_r dV_r
[/math]


To further dissect that term with the single index:


[math]\int\vec{\Psi}_r^\dagger \cdot ( \vec{\alpha}_1 \cdot \vec{\nabla}_1 \vec{\Psi}_0 \vec{\Psi}_r ) dV_r =
\int\vec{\Psi}_r^\dagger \cdot \left\{ \vec{\Psi}_r ( \vec{\alpha}_1 \cdot \vec{\nabla}_1 \vec{\Psi}_0 ) +
\vec{\Psi}_0 ( \vec{\alpha}_1 \cdot \vec{\nabla}_1 \vec{\Psi}_r ) \right\} dV_r
[/math]


And we get to factor something out from the integral. Since [imath]\vec{\Psi}_0[/imath] is not a function of the set "r", the above can be written:


[math]
\int\vec{\Psi}_r^\dagger \cdot \vec{\Psi}_r dV_r ( \vec{\alpha}_1 \cdot \vec{\nabla}_1 \vec{\Psi}_0 ) +
\vec{\Psi}_0 \int\vec{\Psi}_r^\dagger \cdot ( \vec{\alpha}_1 \cdot \vec{\nabla}_1 \vec{\Psi}_r ) dV_r
[/math]


And since [math] \int\vec{\Psi}_r^\dagger \cdot \vec{\Psi}_r dV_r [/math] equals 1 by definition:


[math]
\vec{\alpha}_1 \cdot \vec{\nabla}_1 \vec{\Psi}_0 +
\vec{\Psi}_0 \int\vec{\Psi}_r^\dagger \cdot ( \vec{\alpha}_1 \cdot \vec{\nabla}_1 \vec{\Psi}_r ) dV_r
[/math]


Dub the latter term as term A for later reference (my question will have something to do with this term)

Going back to just having separated the single index from the sum, and concentrating on that remaining set:


[math]
\int\vec{\Psi}_r^\dagger \cdot \left\{\sum_{i=2}^n \vec{\alpha}_i \cdot \vec{\nabla}_i \right\}\vec{\Psi}_0\vec{\Psi}_r dV_r
[/math]


Simply factoring [imath]\vec{\Psi}_0[/imath] out from the integral:


[math]
\int\vec{\Psi}_r^\dagger \cdot \left\{\sum_{i=2}^n \vec{\alpha}_i \cdot \vec{\nabla}_i \right\}\vec{\Psi}_r dV_r \vec{\Psi}_0
[/math]


Then onto adding the above together with Term A:


[math]
\vec{\Psi}_0 \int\vec{\Psi}_r^\dagger \cdot ( \vec{\alpha}_1 \cdot \vec{\nabla}_1 \vec{\Psi}_r ) dV_r +
\int\vec{\Psi}_r^\dagger \cdot \left\{\sum_{i=2}^n \vec{\alpha}_i \cdot \vec{\nabla}_i \right\}\vec{\Psi}_r dV_r \vec{\Psi}_0
[/math]


Following the example at #7, and also your final result in the OP, this would become:


[math]
\left\{ \int\vec{\Psi}_r^\dagger \cdot \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_r dV_r \right\} \vec{\Psi}_0
[/math]


And here's my question. There were two [math]\vec{\Psi}_0[/math]'s to be added together, so what happened to the other one?

The rest of the algebra seemed pretty straightforward.
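
For what it's worth, the two mechanical steps used above, the product rule splitting [imath]\vec{\nabla}_1(\vec{\Psi}_0\vec{\Psi}_r)[/imath] and the normalization [imath]\int\vec{\Psi}_r^\dagger \cdot \vec{\Psi}_r dV_r = 1[/imath], can be checked symbolically with simple scalar stand-ins (a toy one-dimensional example of my own, not the actual abstract functions):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

# Toy scalar stand-ins for the abstract functions: psi0 depends on x1 only,
# psir depends on x1 (through a phase) and on the "rest", here just x2.
psi0 = sp.exp(-x1**2)
psir = sp.pi**sp.Rational(-1, 4) * sp.exp(sp.I * x1) * sp.exp(-x2**2 / 2)

# Step 1: the product rule that splits the grad_1 term into two pieces.
lhs = sp.diff(psi0 * psir, x1)
rhs = psir * sp.diff(psi0, x1) + psi0 * sp.diff(psir, x1)
print(sp.simplify(lhs - rhs))  # 0

# Step 2: the normalization that collapses the first integral to 1.
norm = sp.integrate(sp.conjugate(psir) * psir, (x2, -sp.oo, sp.oo))
print(sp.simplify(norm))  # 1
```

The phase factor makes psir genuinely depend on x1, so the second product-rule term is nonzero, just as in the derivation.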

Phew, I hope I didn't make any mistakes with the latex (or the algebra :D)

-Anssi

#39 Doctordick


Posted 27 October 2008 - 01:53 PM

Hi Anssi, I could not be prouder of you and your work. It is not exactly the way I would have performed the algebra (which is good because I know you thought it out for yourself) but it is certainly absolutely correct. The answer to your question is so trivial you will kick yourself when you see it.

Then onto adding the above together with Term A:


[math]
\vec{\Psi}_0 \int\vec{\Psi}_r^\dagger \cdot ( \vec{\alpha}_1 \cdot \vec{\nabla}_1 \vec{\Psi}_r ) dV_r +
\int\vec{\Psi}_r^\dagger \cdot \left\{\sum_{i=2}^n \vec{\alpha}_i \cdot \vec{\nabla}_i \right\}\vec{\Psi}_r dV_r \vec{\Psi}_0
[/math]


Following the example at #7, and also your final result in the OP, this would become:


[math]
\left\{ \int\vec{\Psi}_r^\dagger \cdot \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_r dV_r \right\} \vec{\Psi}_0
[/math]


And here's my question. There were two [math]\vec{\Psi}_0[/math]'s to be added together, so what happened to the other one?

Note the integrals above. You have two of them:


[math]First=\int\vec{\Psi}_r^\dagger \cdot ( \vec{\alpha}_1 \cdot \vec{\nabla}_1 \vec{\Psi}_r ) dV_r[/math]

and

[math]Second=\int\vec{\Psi}_r^\dagger \cdot \left\{\sum_{i=2}^n \vec{\alpha}_i \cdot \vec{\nabla}_i\right\} \vec{\Psi}_r dV_r [/math]


If you call the first “A” and the second “B” then the resulting equation (which you have correctly written down) becomes:


[math]\vec{\Psi}_0 A +B\vec{\Psi}_0[/math]


and that is, of course, identical to [imath](A+B)\vec{\Psi}_0[/imath]: i.e., the [imath]\vec{\Psi}_0[/imath] simply factors out ([imath]\vec{\nabla}_i[/imath] only differentiates [imath]\vec{\Psi}_0[/imath] when i=1). Finally A is no more than the missing term in the sum explicitly referred to in B (which I think you knew).

Please don't feel bad; it is no more than a sign that you are new at this. You did great! I think we are making great progress. :mademyday::applause:

Have fun -- Dick

#40 AnssiH


Posted 27 October 2008 - 02:40 PM

Hi Anssi, I could not be prouder of you and your work. It is not exactly the way I would have performed the algebra (which is good because I know you thought it out for yourself) but it is certainly absolutely correct.


Excellent :)

The answer to your question is so trivial you will kick yourself when you see it.
.
.
.
and that is, of course, identical to [imath](A+B)\vec{\Psi}_0[/imath]: i.e., the [imath]\vec{\Psi}_0[/imath] simply factors out ([imath]\vec{\nabla}_i[/imath] only differentiates [imath]\vec{\Psi}_0[/imath] when i=1).


Oh of course! [kick self]

Finally A is no more than the missing term in the sum explicitly referred to in B (which I think you knew).


Yup

Please don't feel bad; it is no more than a sign that you are new at this.


I'm only feeling amazed that I actually managed to do all that algebra :)
Well, hopefully will have time to concentrate on the next step soon!

-Anssi "dreaming of a good LaTeX preview/editing tool"

#41 AnssiH


Posted 04 November 2008 - 11:37 AM

Hi, I've been meaning to write this since Sunday but haven't had any time until now :I

Now, this resultant may be a linear differential equation in one variable but it is not exactly in a form one would call “transparent”. In the interest of seeing the actual form of possible solutions, allow me to discuss an approximate solution discovered by setting three very specific constraints to be approximately valid. The first of these three is that the data point of interest, [imath]\vec{x}_1[/imath], is insignificant to the rest of the universe: i.e., [imath]P_r[/imath] is, for practical purposes, not much affected by any change in the actual form of [imath]\vec{\Psi}_0[/imath]: i.e., feedback from the rest of the universe due to changes in [imath]\vec{\Psi}_0[/imath] can be neglected.


I'm not sure I can interpret this correctly. When you say "...change in the actual form of [imath]\vec{\Psi}_0[/imath]", does that mean "changes in the probability distribution yielded by [imath]\vec{\Psi}_0[/imath]"?

Another question, I suppose [imath]\vec{\Psi}_0[/imath] yields the probability of finding the input argument from a specific future "present", but when I think about probability distribution, I tend to think about the probability of finding a given element from a given position. But position has not been defined (it was merely possible to interpret the result in terms of dust motes, right?), so I'm not sure what probability distribution means in this case when referring to one index?

The second constraint will be that the probability distribution describing the rest of the universe is stationary in time: that would be that [imath]P_r[/imath] is, for practical purposes, not a function of t.


I'm not able to interpret this unambiguously either. When you say "probability distribution is stationary in time", I first interpreted it as the same as "probability distribution is not being continuously re-evaluated", but when you say "not a function of t", it sounds to me that you'd have to get the same results when referring to any "t" in the input arguments... ...but I'm not sure what that would mean exactly...

If that is the case, the only form of the time dependence of [imath]\vec{\Psi}_r[/imath] which satisfies temporal shift symmetry is [imath]e^{iS_rt}[/imath].


I don't understand what that means either. Perhaps if I did, I could interpret the above correctly as well...
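
For what it's worth, here is a toy symbolic reading of that claim (my own check, assuming [imath]S_r[/imath] is a real constant): a pure phase [imath]e^{iS_rt}[/imath] multiplying a t-independent function leaves the probability density stationary, since the phase cancels against its conjugate.

```python
import sympy as sp

t, S, x = sp.symbols('t S x', real=True)
f = sp.Function('f')              # an arbitrary t-independent factor

# The claimed time dependence: a pure phase multiplying the rest.
psi = sp.exp(sp.I * S * t) * f(x)

# The probability density: the phase cancels against its conjugate,
# so P is stationary (not a function of t).
P = sp.conjugate(psi) * psi
print(sp.simplify(sp.diff(P, t)))  # 0
```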

-Anssi

#42 Doctordick


Posted 05 November 2008 - 07:24 PM

Hi, I've been meaning to write this since Sunday but haven't had any time until now :I

Don't worry about it; as you know, two weeks from now I will essentially be out of contact (except for occasional access to the internet) until January 2009, so you will have plenty of time to think about things. If you have any specific questions, Qfwfq might be able to help.

Meanwhile, I feel I should begin with that “non transparent linear differential equation in one variable” because most everything you are asking about is concerned with simplifying that equation. First, I hope it is clear to you that the equation is indeed an equation in one variable. Only the one variable [imath]\vec{x}_1[/imath] remains because all other variables have been integrated out: i.e., except for the [imath]\vec{x}_1[/imath] and t, the consequences of all the arguments of [imath]\vec{\Psi}_r[/imath] have been summed over all possibilities (integration is the limit of a sum as the number of terms goes to infinity).

Essentially, as I have said to you elsewhere, this makes the presumption that our expectations for the rest of the universe are known (in essence, we know what [imath]\vec{\Psi}_r[/imath] is, or at least think we do: i.e., we know what our expectations are). I would comment that this is quite analogous to the common scientific analysis of reality: when we design an experiment, we presume the rest of the universe does not interfere with whatever it is we have set up to perform that experiment (we presume we know what is expected from the surrounding equipment). So the equation we are starting with is as follows:


[math]\vec{\alpha}_1\cdot \vec{\nabla}_1\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0\ + K\left\{\int \vec{\Psi}_r^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_r dV_r \right\}\vec{\Psi}_0. [/math]


What you need to understand is that the two integrals


First = [math]\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r[/math]

and

Second = [math]\int \vec{\Psi}_r^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_r dV_r[/math]


can be complex beyond belief as [imath]\vec{\Psi}_r[/imath], to be correct, must actually yield the flaw-free structure of the entire rest of the universe. In ordinary physics experiments we essentially presume the actual detailed structure of the rest of the universe has little bearing upon whatever experiment we are doing. That is a pretty cavalier assertion, and I would rather approach the question a little more carefully and with a little more accuracy.

I'm not sure I can interpret this correctly. When you say "...change in the actual form of [imath]\vec{\Psi}_0[/imath]", does that mean "changes in the probability distribution yielded by [imath]\vec{\Psi}_0[/imath]"?

Yes, but not exactly the way you are expressing it. I am developing an equation which can be solved for [imath]\vec{\Psi}_0[/imath] and I want to rid myself of some of that complexity I just mentioned. What I am referring to here is the fact that [imath]\vec{\Psi}_r[/imath] is actually a function of [imath]\vec{x}_1[/imath] and [imath]\vec{\Psi}_0[/imath] (the solution of that one body problem we are attempting to represent) yields the expectations for [imath]\vec{x}_1[/imath]. What I am saying is that I want the form of that particular solution to have little impact upon the rest of the universe (there could be uncountable feedback mechanisms inside the given correct representation). I am just saying that I am specifically going to make the approximation that those feedback mechanisms can be ignored.

Another question, I suppose [imath]\vec{\Psi}_0[/imath] yields the probability of finding the input argument from a specific future "present", but when I think about probability distribution, I tend to think about the probability of finding a given element from a given position. But position has not been defined (it was merely possible to interpret the result in terms of dust motes, right?), so I'm not sure what probability distribution means in this case when referring to one index?

We have taken all these indices and mapped them to positions on an x (and tau) axis. At that point, [imath]\vec{x}_1[/imath] is indeed being conceptualized as a specific position (the position of a specific “dust mote”) in that hypothetical coordinate system. What you need to remember is that [imath]\vec{\Psi}_r[/imath] (which is presumed to be known) gives you exactly the specific expected positions for all the other indices in that original equation: i.e., that function specifies the structure of the entire rest of the universe in exactly that same conceptual coordinate system. That would include, in fact, the positions of every element in the laboratory which constitutes the surroundings of your experiment. This is exactly the structure which provides the information on distances, directions, and all other circumstances which you think of as the environment of the experiment.

Essentially, all I have said, up to this point, is that you know what to expect with regard to the environment of your experiment as represented in that conceptual coordinate system which is used to represent this single index: how the position (or rather, the probability of the position) of this element behaves, given that you know (or think you know) everything of significance about the background (the impact of the rest of the universe).

I'm not able to interpret this unambiguously either. When you say "probability distribution is stationary in time", I first interpreted it as the same as "probability distribution is not being continuously re-evaluated", but when you say "not a function of t", it sounds to me that you'd have to get the same results when referring to any "t" in the input arguments... ...but I'm not sure what that would mean exactly...

Once again, the purpose here is to simplify the equation I want to solve. I want to make the approximation that [imath]P_r[/imath] is not a function of time. I am making the assumption that “the structure which provides the information on distances, directions, and all other circumstances which you think of as the environment of the experiment” is not changing in time. The approximation is that the only thing changing is [imath]\vec{x}_1[/imath], the variable whose probability is to be provided by [imath]\vec{\Psi}_0[/imath]. Again, this is exactly the same approximation made on a simple experiment in any introductory lab session: i.e., only changes in the variable under examination are of interest.

If you go to the end of my opening post on this thread, you will find that, after deriving Schroedinger's equation and coming up with definitions for “mass”, “momentum” and “energy”, I examine the consequences of relaxing these self same approximations and note that “all of these results are entirely consistent with Schroedinger's equation, they simply require interactions not commonly seen on the introductory level.” The only real reason for making these approximations was the fact that “Inclusion of these complications would only have served to obscure the fact that what was deduced was, in fact, exactly Schroedinger's equation”.

I don't understand what that means either. Perhaps if I did, I could interpret the above correctly as well...

Not really; that line brings up the solution to the differential equation derived from the requirement of “global” shift symmetry in the argument “t”. That is,


[math] \frac{\partial}{\partial t}P_r(t) = 0,[/math]


or, since [imath]P_r(t)[/imath] is defined to be given by [imath]\vec{\Psi}_r^\dagger(t)\cdot\vec{\Psi}_r(t)[/imath],


[math] \left\{\frac{\partial}{\partial t}\vec{\Psi}_r^\dagger(t)\right\}\cdot\vec{\Psi}_r(t)+\vec{\Psi}_r^\dagger(t) \cdot\left\{\frac{\partial}{\partial t}\vec{\Psi}_r(t)\right\} = 0.[/math]


In deference to Qfwfq I should have said: the simplest form of time dependence which solves that equation, other than [imath]\vec{\Psi}=0[/imath], is of the form [imath]e^{iS_rt}[/imath]. That form would yield [imath]\vec{\Psi}^\dagger= e^{-iS_rt}[/imath] via the definition of the “complex conjugate” (the meaning of that [imath]\dagger[/imath] symbol). The differentiation of the product representation of P with respect to t yields two terms which differ only in their sign; thus their sum is zero.


[math]\frac{d}{dx}e^{ax}=ae^{ax}[/math]


Something I could have easily proved fifty years ago, but the proof currently seems to have slipped my mind. If you don't believe the factual nature of the above derivative, see Derivative of the Exponential Function.
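If the linked proof is hard to track down, the identity is at least easy to check numerically. Here is a finite-difference sketch of my own (the particular values of a and x are arbitrary):

```python
import math

# Sketch: check d/dx e^(a*x) = a * e^(a*x) by central differences,
# for a few arbitrary values of a and x.
def central_diff(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

for a in (0.5, -1.3):
    for x in (0.0, 1.7):
        approx = central_diff(lambda u: math.exp(a * u), x)
        exact = a * math.exp(a * x)
        assert abs(approx - exact) < 1e-5, (a, x, approx, exact)

print("d/dx e^(ax) = a e^(ax) holds numerically")
```

This is of course no substitute for the proof, only a sanity check of the claimed identity.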

I did a short cursory search for the proof but didn't find it. Perhaps Qfwfq can produce it.

Speaking of Qfwfq, he has complained about that step in my work, insisting that it eliminates some possible solutions where S is not a constant. I again comment that the vector nature of [imath]\vec{\Psi}[/imath] takes care of any additional complexities one might wish to put into possible solutions. What I have put forth is the simplest form which satisfies the differential equation and is not explicitly zero (and zero is included by setting “E”, the amplitude of the differential, to zero). No possible solutions satisfying shift symmetry have been eliminated.
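Incidentally, the cancellation described above (the two product-rule terms differing only in sign) can also be verified numerically; in this sketch of mine, S is an arbitrary made-up constant standing in for [imath]S_r[/imath]:

```python
import cmath

# Sketch: for psi(t) = e^(i*S*t), P(t) = psi_dagger(t) * psi(t) is
# constant, so dP/dt = 0; the two product-rule terms cancel.
# S is an arbitrary illustrative constant, not derived from anything.
S = 2.7

def P(t):
    psi = cmath.exp(1j * S * t)
    return (psi.conjugate() * psi).real

# P(t) = 1 for every t (shift symmetry in t).
for t in (0.0, 0.5, 1.3, 10.0):
    assert abs(P(t) - 1.0) < 1e-12

# Central-difference estimate of dP/dt at an arbitrary point: ~0.
h, t0 = 1e-6, 0.37
assert abs((P(t0 + h) - P(t0 - h)) / (2 * h)) < 1e-6

print("P(t) is constant and dP/dt = 0, as claimed")
```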

I hope that clears up some of your problems.

Have fun -- Dick

#43 AnssiH

AnssiH

    Understanding

  • Members
  • 790 posts

Posted 16 November 2008 - 08:47 AM

Hi, sorry I've been slow, had some issues... Including having no internet for a while.

If you have any specific questions, Qfwfq might be able to help.


I hope so. Qfwfq, I'd appreciate your help. Don't worry about being slow to reply due to lack of time, that's my main problem as well.

Meanwhile, I feel I should begin with that “non transparent linear differential equation in one variable” because most everything you are asking about is concerned with simplifying that equation. First, I hope it is clear to you that the equation is indeed an equation in one variable. Only the one variable [imath]\vec{x}_1[/imath] remains because all other variables have been integrated out: i.e., except for the [imath]\vec{x}_1[/imath] and t, the consequences of all the arguments of [imath]\vec{\Psi}_r[/imath] have been summed over all possibilities (integration is the limit of a sum as the number of terms goes to infinity).


Hmmm, so, are you referring to a similar issue as what we had with [imath]\vec{\Psi}_2[/imath]; in essence "If the actual function [imath]\vec{\Psi}_r[/imath] were known (way of obtaining our expectations from set "r" is known), the integrals could be explicitly done"?
If so, then I think I understand what you mean by it being a differential equation in one variable.

Essentially, as I have said to you elsewhere, this makes the presumption that our expectations for the rest of the universe are known (in essence, we know what [imath]\vec{\Psi}_r[/imath] is, or at least think we do: i.e., we know what our expectations are). I would comment that this is quite analogous to the common scientific analysis of reality: when we design an experiment, we presume the rest of the universe does not interfere with whatever it is we have set up to perform that experiment (we presume we know what is expected from the surrounding equipment). So the equation we are starting with is as follows:


[math]\vec{\alpha}_1\cdot \vec{\nabla}_1\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0\ + K\left\{\int \vec{\Psi}_r^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_r dV_r \right\}\vec{\Psi}_0. [/math]


What you need to understand is that the two integrals


First = [math]\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r[/math]

and

Second = [math]\int \vec{\Psi}_r^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_r dV_r[/math]


can be complex beyond belief as [imath]\vec{\Psi}_r[/imath], to be correct, must actually yield the flaw-free structure of the entire rest of the universe. In ordinary physics experiments we essentially presume the actual detailed structure of the rest of the universe has little bearing upon whatever experiment we are doing. That is a pretty cavalier assertion, and I would rather approach the question a little more carefully and with a little more accuracy.


Yup.

Yes, but not exactly the way you are expressing it. I am developing an equation which can be solved for [imath]\vec{\Psi}_0[/imath] and I want to rid myself of some of that complexity I just mentioned. What I am referring to here is the fact that [imath]\vec{\Psi}_r[/imath] is actually a function of [imath]\vec{x}_1[/imath] and [imath]\vec{\Psi}_0[/imath] (the solution of that one body problem we are attempting to represent) yields the expectations for [imath]\vec{x}_1[/imath]. What I am saying is that I want the form of that particular solution to have little impact upon the rest of the universe (there could be uncountable feedback mechanisms inside the given correct representation). I am just saying that I am specifically going to make the approximation that those feedback mechanisms can be ignored.


Ah, right, since the actual [imath]\vec{\Psi}_r[/imath] is unknown to us, I think I get it.

We have taken all these indices and mapped them to positions on an x (and tau) axis. At that point, [imath]\vec{x}_1[/imath] is indeed being conceptualized as a specific position (the position of a specific “dust mote”) in that hypothetical coordinate system.


Hmm, I'm not really sure how to interpret this business of moving indices. I mean, I'm getting confused about how the indices are really identified now (are they positions or elements?).

I mean - dropping "tau" for a moment from the conversation - let's say our mapping of a specific "present" includes some elements on the x-axis on positions "1" and "2"; and accordingly we call these elements "1" and "2".

Then we have a function which tells us the probability of finding the elements "1" and "2" from a future "present"... ...OR rather, since we have interpreted the data as dust motes that move around, it tells us the probability that SOME dust motes exist in the positions "1" and "2", while the original dust motes from those positions might have - in our interpretation - moved elsewhere?

So, basically I'm getting confused with whether an input argument to [imath]\vec{\Psi}[/imath] is identified as a position on an X-axis or is it referring to an element that can have different position on X-axis...

And that causes complications when I'm trying to understand what it means exactly that we have a probability distribution for one index. Understanding that index as just one specific position on the x,tau plane, it seems like [imath]\vec{\Psi}_0[/imath] could only yield the probability that something exists in that position... Alternatively, if the index can have a position that is not the same as the index itself, then I'm not sure how it works in terms of inputting that position to [imath]\vec{\Psi}_0[/imath]... Since we just input one index, seems like we are not asking whether "the element from position "1" at t1 is found from position "2" at t2"

You seem to be talking a lot in terms of indices that move, so I'm thinking I must have missed something about how exactly the indices are identified as "themselves" in this picture... Or maybe there is something more subtle going on? :I

-Anssi

#44 Michaelangelica

Michaelangelica

    Creating

  • Members
  • 7797 posts

Posted 18 November 2008 - 03:03 AM

I am not really here.
This is just an almost random thread attack that I hope you enjoy.

xkcd - A Webcomic - Schrodinger

#45 Bombadil

Bombadil

    Questioning

  • Members
  • 180 posts

Posted 20 November 2008 - 08:23 PM

Seeing as I seem to be doing the same thing as you, that is, waiting for Doctordick to get back, I thought that I'd see if I could give you some help while he is away. I don't think that anyone will object.

Hmmm, so, are you referring to a similar issue as what we had with [imath]\vec{\Psi}_2[/imath]; in essence "If the actual function [imath]\vec{\Psi}_r[/imath] were known (way of obtaining our expectations from set "r" is known), the integrals could be explicitly done"?
If so, then I think I understand what you mean by it being a differential equation in one variable.


While we might be able to perform the integrals if the form of [imath]\vec{\Psi}_r[/imath] were known, what I think he is trying to bring attention to is that after they are performed, only derivatives of [imath]\vec{\Psi}_0[/imath] would remain in the equation, and so it is somewhat simpler than the original n-body equation.

Hmm, I'm not really sure how to interpret this business of moving indices. I mean, I'm getting confused about how the indices are really identified now (are they positions or elements?).

I mean - dropping "tau" for a moment from the conversation - let's say our mapping of a specific "present" includes some elements on the x-axis on positions "1" and "2"; and accordingly we call these elements "1" and "2".

Then we have a function which tells us the probability of finding the elements "1" and "2" from a future "present"... ...OR rather, since we have interpreted the data as dust motes that move around, it tells us the probability that SOME dust motes exist in the positions "1" and "2", while the original dust motes from those positions might have - in our interpretation - moved elsewhere?

So, basically I'm getting confused with whether an input argument to [imath]\vec{\Psi}[/imath] is identified as a position on an X-axis or is it referring to an element that can have different position on X-axis...

And that causes complications when I'm trying to understand what it means exactly that we have a probability distribution for one index. Understanding that index as just one specific position on the x,tau plane, it seems like [imath]\vec{\Psi}_0[/imath] could only yield the probability that something exists in that position... Alternatively, if the index can have a position that is not the same as the index itself, then I'm not sure how it works in terms of inputting that position to [imath]\vec{\Psi}_0[/imath]... Since we just input one index, seems like we are not asking whether "the element from position "1" at t1 is found from position "2" at t2"

You seem to be talking a lot in terms of indices that move, so I'm thinking I must have missed something about how exactly the indices are identified as "themselves" in this picture... Or maybe there is something more subtle going on? :I


I’m not sure of just what you mean so someone else might have to answer this but I will see if I can help you out.

I think that the first thing to remember is that the term [imath]\sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j)\vec{\Psi}[/imath] is still in there, and that will ensure that no two elements are in the same place, although I don't think that it has any effect on where the elements are. With this in mind, the function [imath]\vec{\Psi}[/imath] still does not give the positions of the elements but is a probability distribution that will evolve along the t axis. But I think that the actual movement of the elements is just an interpretation of the changing probability wave.

As to whether or not the elements are in specific locations, I think the point here is that the equation can only give us expectations for any location, and that these will in fact be a function of where those elements were in the first place. The fact that two elements can't be in the same spot only has an effect in changing the shape of the probability distribution, which is still going to be continuous.

As for whether the elements ever have to be in any particular location, I don't know. This all sounds more related to the behavior of the solutions than to the fundamental equation, so I think that we will have to wait for Doctordick to answer this.

#46 AnssiH

AnssiH

    Understanding

  • Members
  • 790 posts

Posted 23 November 2008 - 10:18 AM

Seeing as I seem to be doing the same thing as you, that is, waiting for Doctordick to get back, I thought that I'd see if I could give you some help while he is away. I don't think that anyone will object.


Thank you, I really appreciate this.
Anyone else, if you think you can help with any bits of the math, feel free to.

While we might be able to perform the integrals if the form of [imath]\vec{\Psi}_r[/imath] were known, what I think he is trying to bring attention to is that after they are performed, only derivatives of [imath]\vec{\Psi}_0[/imath] would remain in the equation, and so it is somewhat simpler than the original n-body equation.


Okay, then I think I understand what it means that it's a "differential equation in one variable".

I’m not sure of just what you mean so someone else might have to answer this but I will see if I can help you out.


Well, I'm probably missing something fairly simple, but I'm just getting confused because an element (1,2) was originally exactly the element in the position (1,2) in the x,tau plane, i.e. the label = position.

So with that definition it would not make sense to say that that element could move to position (2,2), because then it would be considered a different element; the element called (2,2), and not (1,2) anymore.

And for example, when I'm thinking of [imath]\vec{\Psi}_0(\vec{x}_1, t)[/imath], and let's say [imath]\vec{x}_1[/imath] is element (1,2), then the way I'd see it is that [imath]\vec{\Psi}_0((1,2), t)[/imath] would yield the (square root of) probability of something existing in x,tau position (1,2) at time t. I do not know how to interpret it as it yielding a probability distribution for 1,2 (I'm assuming this would mean "probability distribution over other locations in the x,tau plane") as its only input argument was that one single location/index.

Also I'm slightly confused about:

...the equation can only give us expectations for any location and that it will in fact be a function of where those elements were in the first place...


As I look at those input arguments as a query for some specific end result, I don't know where the "initial locations" of those elements are communicated; is that information embedded in [imath]\vec{\Psi}[/imath] itself?

I'll have to try and really think about what I'm missing (not feeling very good today), I feel like I may be moving away from the real issue a bit, but this is still bit of a rock in my shoe as I don't see this clearly...

Btw, have you tried to follow the OP of this thread yourself? All looks valid to you?

-Anssi

#47 Bombadil

Bombadil

    Questioning

  • Members
  • 180 posts

Posted 28 November 2008 - 10:58 AM

AnssiH, you might keep in mind reading this that I might be way off, and my answers may in fact be completely different from what is actually the case. But I suspect that how I understand this is a little bit different than how you do, so maybe this will help. On the other hand, if you see a problem with what I am suggesting, please point it out.

Well, I'm probably missing something fairly simple, but I'm just getting confused because an element (1,2) was originally exactly the element in the position (1,2) in the x,tau plane, i.e. the label = position.

So with that definition it would not make sense to say that that element could move to position (2,2), because then it would be considered different element; the element called (2,2), and not (1,2) anymore.


One thing you might think about is that, as of yet, movement has not been defined. Or at least I haven't found anywhere that it has been. The way that I'm looking at this right now, though, is that movement is just an interpretation of a changing probability wave; post 36 of "some subtle aspects of relativity" seems to support that movement is an interpretation. The first half of that post, where he puts forward an interpretation of the fundamental equation, might be worth reading if you haven't already.

At this point I really think that where those elements are is all just an interpretation of the probability wave, so to have an interpretation where they move wouldn't seem to be a problem to me, as they might not be there in the first place.

The only other thing that I can think of is that it is only the probability that an element is there, and only one element for each probability wave is allowed, but that it might not be the same element that is there. Movement then could be defined in the same way as on a screen, but I'm not convinced that this is anything like what movement is going to be defined as, so it might not be worth trying too hard to understand what I'm saying until we know if movement is defined this way.

As I look at those input arguments as a query for some specific end result, I don't know where the "initial locations" of those elements is communicated; is that information embedded in [imath]\vec{\Psi}[/imath] itself?


I'm going to say that the initial locations are communicated only in a particular [imath]\vec{\Psi}[/imath], and that the locations are an interpretation of the probability wave of [imath]\vec{\Psi}[/imath]. I could go into what makes me think this, but there is probably not a lot of reason to do so right now.

I'll have to try and really think about what I'm missing (not feeling very good today), I feel like I may be moving away from the real issue a bit, but this is still bit of a rock in my shoe as I don't see this clearly...


Well, as of yet I don't think that the issue has been fully explained anywhere, so it's probably not worth worrying too much about yet. Maybe what I have above will help you out for now.

Btw, have you tried to follow the OP of this thread yourself? All looks valid to you?


I've been following this thread, although most of what I have been doing is in the thread "some subtle issues of relativity", and seeing as Doctordick has been doing a better job than I could do without a better understanding of the subject than I have at present, I haven't seen much reason to comment on anything.
But yes, it all looks good to me. I haven't seen anything that doesn't look valid with what is being done.

#48 AnssiH

AnssiH

    Understanding

  • Members
  • 790 posts

Posted 30 November 2008 - 09:50 AM

One thing you might think about is that, as of yet, movement has not been defined. Or at least I haven't found anywhere that it has been. The way that I'm looking at this right now, though, is that movement is just an interpretation of a changing probability wave; post 36 of "some subtle aspects of relativity" seems to support that movement is an interpretation. The first half of that post, where he puts forward an interpretation of the fundamental equation, might be worth reading if you haven't already.


That is correct, movement is an interpretation at this point.

I had read that post before, but now that I read it again, it makes me wonder if it is relevant here that the velocity of the elements is constant. I have not yet figured out why the velocity of all the elements is constant, I don't think that issue has been mentioned in this thread :I

The only other thing that I can think of is that it is only the probability that an element is there, and only one element for each probability wave is allowed, but that it might not be the same element that is there. Movement then could be defined in the same way as on a screen, but I'm not convinced that this is anything like what movement is going to be defined as, so it might not be worth trying too hard to understand what I'm saying until we know if movement is defined this way.


Well yeah, but I'm having some difficulties even understanding exactly how to get to the probability wave. I mean, how [imath]\vec{\Psi}[/imath] is used to get to a "probability wave" instead of just a single probability. I have not had much time to think about this, but I'm starting to think that when you have a specific [imath]\vec{\Psi}[/imath] whose magnitude is the (square root of) probability of its input arguments, then you get to a "probability wave" by evaluating all the possible input arguments (or just some range of possibilities). I suppose that is pretty much how it is with the ordinary Schrödinger equation as well?

So in that case [imath]\vec{\Psi}_0(\vec{x}_1, t)[/imath] can be said to yield a "probability wave" when it is evaluated over some range of possibilities for [imath]\vec{x}_1[/imath], only in that case [imath]\vec{x}_1[/imath] cannot mean a specific (x, tau)-index as it's equal to just one position (essentially just one "possibility").
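Something like this toy sketch is what I have in mind; the Gaussian form of the function here is completely made up, just to show how evaluating one function over a range of candidate positions yields a distribution rather than a single probability:

```python
import cmath

# Toy sketch: a made-up candidate psi_0 (a Gaussian wave packet),
# evaluated over a range of candidate positions x1. Squaring the
# magnitudes and normalizing gives one probability per position,
# a "probability wave", instead of a single number.
def psi0(x):
    return cmath.exp(-(x - 2.0) ** 2 / 2.0) * cmath.exp(1j * 3.0 * x)

xs = [i * 0.1 for i in range(-20, 61)]      # candidate positions
weights = [abs(psi0(x)) ** 2 for x in xs]   # one probability weight each
total = sum(weights)
prob = [w / total for w in weights]         # normalized distribution

assert abs(sum(prob) - 1.0) < 1e-9          # sums to 1 over the range
peak = xs[prob.index(max(prob))]
print("distribution peaks near x =", peak)  # near the packet center, 2.0
```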

As to the identity of specific elements; of course any specific epistemological construct (i.e. "worldview") would include that information. I guess there must be some assumption about the identity of that single element before there can exist any explanation to it, i.e. before any expectations about its future could exist. Hmm... So in that sense any specific [imath]\vec{\Psi}[/imath] must include the information about the identity of the elements. On the other hand, which element is which is never communicated in its input arguments. I guess that's not necessary?

Another possibility which passed through my mind was that the order of the input arguments yielded the identity information, but on the other hand it has been voiced many times that each "present" is an unordered set, so... I'm a bit confused still.

But yes, it all looks good to me. I haven’t seen anything that doesn’t look valid with what is being done.


That's nice to know.

-Anssi

#49 Bombadil

Bombadil

    Questioning

  • Members
  • 180 posts

Posted 04 December 2008 - 08:22 PM

I had read that post before, but now that I read it again, it makes me wonder if it is relevant here that the velocity of the elements is constant. I have not yet figured out why the velocity of all the elements is constant, I don't think that issue has been mentioned in this thread :I


Relevant to what? The only thing that I can assume is that you mean: is it relevant to the derivation of the Schrödinger equation? If so, I can't be sure, as I don't have much understanding of the Schrödinger equation, but if you have an understanding of that derivation, I might point out that it seems that such an effect (that is, all elements moving with the same velocity) is likely removed to arrive at the Schrödinger equation.

Well yeah, but I'm having some difficulties even understanding exactly how to get to the probability wave. I mean, how [imath]\vec{\Psi}[/imath] is used to get to a "probability wave" instead of just a single probability. I have not had much time to think about this, but I'm starting to think that when you have a specific [imath]\vec{\Psi}[/imath] whose magnitude is the (square root of) probability of its input arguments, then you get to a "probability wave" by evaluating all the possible input arguments (or just some range of possibilities). I suppose that is pretty much how it is with the ordinary Schrödinger equation as well?


I don't know about the Schrödinger equation, but it seems to me that the only thing of importance is that whatever scalar product we use to arrive at the probability, it must be a scalar that is arrived at. This leads me to an interesting observation: there is only one probability wave for all of the elements. That leads me to some rather strange questions, such as: at all of the points that an element can exist at, does one actually exist? And why do we interpret any particular element or set of elements as moving?

Now if you are wondering what is meant by a probability wave, try to think of it like this: if you take the probability at all of the possible locations and graph them, what you get will be in the shape of a wave. It may be a highly complex wave, but it will be a wave. The case in which the probability at any one point is arbitrarily small can, I imagine, be handled in a similar manner, only you must then decide over what interval you wish to graph it. Also, the wave nature may not be as obvious in this case.

So in that case [imath]\vec{\Psi}_0(\vec{x}_1, t)[/imath] can be said to yield a "probability wave" when it is evaluated over some range of possibilities for [imath]\vec{x}_1[/imath], only in that case [imath]\vec{x}_1[/imath] cannot mean a specific (x, tau)-index as it's equal to just one position (essentially just one "possibility").


I'm not quite sure what you mean by [imath]\vec{x}_1[/imath]. I assume that you mean the location of the first element, in which case I have to take the stance that it is built into a particular [imath]\vec{\Psi}[/imath], in that the identity of any such element is only an interpretation in that particular explanation. As for the remainder of what you are saying, I can't follow what you mean.

As to the identity of specific elements; of course any specific epistemological construct (i.e. "worldview") would include that information. I guess there must be some assumption about the identity of that single element before there can exist any explanation to it, i.e. before any expectations about its future could exist. Hmm... So in that sense any specific [imath]\vec{\Psi}[/imath] must include the information about the identity of the elements. On the other hand, which element is which is never communicated in its input arguments. I guess that's not necessary?

Another possibility which passed through my mind was that the order of the input arguments yielded the identity information, but on the other hand it has been voiced many times that each "present" is an unordered set, so... I'm a bit confused still.


The second of your possibilities seems obviously incorrect. The fact that we have based the analysis on mathematics would seem to me to indicate that if we get a different result when we do it differently, then we must have made a mistake in how we did it.

Now for your first possibility: are you suggesting that different elements behave differently? While that stance is defensible from everyday experience, if you examine the derivation of the Schrödinger equation you will see that, taken on their own, all of the elements must behave in approximately the same way. This suggests to me that, taken on their own, any two elements are indistinguishable.

You seem to be trying to tell the elements apart. If so, one thing that comes to mind is that, as I recall, the context is how you tell one element from any other. So what I am suggesting is that the only difference between any two elements is that the contexts they appear in are different. I see no other way to tell any two elements apart, especially since it has been shown that every element, taken on its own, must obey the Schrödinger equation.

#50 AnssiH

AnssiH

    Understanding

  • Members
  • 790 posts

Posted 07 December 2008 - 11:04 AM

Relevant to what?


Relevant to being able to interpret the equation in terms of dust motes.

I don’t know about the Schrödinger equation, but it seems to me that the only thing of importance is that whatever scalar product we use to arrive at the probability, the result must be a scalar. This leads me to an interesting observation: there is only one probability wave for all of the elements. That raises some rather strange questions, such as: does an element exist at all of the points at which it can exist, or at just one of them? And why do we interpret any particular element, or set of elements, as moving?


Hmm, I don't understand what you're saying there... Can you rephrase?

Now, if you are wondering what is meant by a probability wave, try to think of it like this: if you take the probability at every possible location and graph them, what you get will be in the shape of a wave. It may be a highly complex wave, but it will be a wave. The case in which the probability at any one point is arbitrarily small can, I imagine, be handled in a similar manner, only you must then decide over what interval you wish to graph it; the wave nature may not be as obvious in that case.


Yup, then I think I understand that part correctly. In essence, some area of the "possibility space" is evaluated, and since different possibilities get different probabilities, the end result can be seen as a probability wave. Sounds reasonable.

I’m not quite sure what you mean by [imath]\vec{x}_1[/imath]. I assume you mean the location of the first element, in which case I have to take the stance that it is built into a particular [imath]\vec{\Psi}[/imath], in that the identity of any such element is only an interpretation within that particular explanation.


That sounds reasonable too.

By [imath]\vec{x}_1[/imath] I meant the single element whose probability wave [imath]\vec{\Psi}_0[/imath] provides. ([imath]\vec{\Psi}_0[/imath] was defined as the function that yields the probability of a single element)

So this reaffirms my belief that it is the specific [imath]\vec{\Psi}[/imath] itself that contains the information about the identity of the (supposed) elements of that particular explanation. None of that is explicitly communicated here, since we are looking at general properties of all the possible [imath]\vec{\Psi}[/imath].

Now for your first possibility: are you suggesting that different elements behave differently? While that stance is defensible from everyday experience, if you examine the derivation of the Schrödinger equation you will see that, taken on their own, all of the elements must behave in approximately the same way. This suggests to me that, taken on their own, any two elements are indistinguishable.

You seem to be trying to tell the elements apart.


Not really. The only issue was the apparent conflict between labeling elements according to their position (label = position) and then talking about this in terms of an interpretation of those same elements being capable of moving from one position to another (label [imath]\ne[/imath] position?). So it seems that as soon as you start talking about this in terms of the "moving dust mote" interpretation, the input arguments to [imath]\vec{\Psi}[/imath] should be seen as "positions", instead of "labels"? I mean for clearer communication...

-Anssi

#51 Bombadil

Bombadil

    Questioning

  • Members
  • 180 posts

Posted 11 December 2008 - 10:46 PM

Relevant to being able to interpret the equation in terms of dust motes.


I don’t know if I fully understand what you are asking; in what way do you think it is not relevant? As I understand it, the whole point of the interpretation is that it obeys the fundamental equation and, in so doing, satisfies all of the resulting constraints; it would fail to be relevant only if it had no interesting consequences.

Hmm, I don't understand what you're saying there... Can you rephrase?


What I’m trying to say is that the function [imath]\vec{\Psi}[/imath] is only half of the probability wave; the probability wave is in fact defined by [imath]\vec{\Psi}\cdot\vec{\Psi}^\dagger[/imath], where the two operate on each other through a scalar product. By the definition of a scalar product this is a non-negative real number, and it defines the value of the probability wave at any point of interest.

Now, if we take this to be the case (and as far as I can tell it is), then there is only one wave defined by the fundamental equation, and it seems to me that all of the elements in the fundamental equation share the same probability wave. That should lead to some interesting consequences, and at the very least to some interesting questions.
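The "non-negative real" claim is easy to check on a toy vector of my own invention (the components below are arbitrary and not drawn from any actual solution): for any vector [imath]\vec{\Psi}[/imath] with complex components, the scalar product of [imath]\vec{\Psi}[/imath] with its conjugate, the sum of each component times its own conjugate, always comes out real and non-negative, which is what lets it serve as a probability.

```python
# Minimal sketch (arbitrary toy numbers): for a vector Psi with complex
# components, the scalar product Psi . Psi^dagger, i.e. the sum of
# psi_i * conj(psi_i), is always a non-negative real number.

def prob_density(psi):
    # Each term c * conj(c) equals |c|^2, so the sum is real and >= 0.
    value = sum(c * c.conjugate() for c in psi)
    return value.real  # the imaginary part is exactly zero

psi = [1 + 2j, 0.5 - 1j, -3j]   # arbitrary complex "vector" components
print(prob_density(psi))         # |1+2j|^2 + |0.5-1j|^2 + |-3j|^2 = 15.25
```

Any choice of complex components gives the same qualitative result, which is why the sign of the probability never needs to be checked separately.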

By [imath]\vec{x}_1[/imath] I meant the single element whose probability wave [imath]\vec{\Psi}_0[/imath] provides. ([imath]\vec{\Psi}_0[/imath] was defined as the function that yields the probability of a single element)

So this reaffirms my belief that it is the specific [imath]\vec{\Psi}[/imath] itself that contains the information about the identity of the (supposed) elements of that particular explanation. None of that is explicitly communicated here, since we are looking at general properties of all the possible [imath]\vec{\Psi}[/imath].


What I think is being missed here is that [imath]\vec{\Psi}[/imath] does not contain the actual information about the elements that are in it. All the function is, is a probability wave; as I understand it, it would be derived from a general solution to the fundamental equation by fixing some arbitrary constants, making it into a particular wave, and I suspect that the information only constrains those arbitrary constants and does not necessarily define them.

Not really. The only issue was the apparent conflict between labeling elements according to their position (label = position) and then talking about this in terms of an interpretation of those same elements being capable of moving from one position to another (label [imath]\ne[/imath] position?). So it seems that as soon as you start talking about this in terms of the "moving dust mote" interpretation, the input arguments to [imath]\vec{\Psi}[/imath] should be seen as "positions", instead of "labels"? I mean for clearer communication...


I don’t think the location of those elements is even defined anywhere other than in the interpretation of the fundamental equation, or in a list of the locations that the elements have been in.

There are a couple of different equations being discussed here, and I’m wondering if you understand the differences between them. They are the fundamental equation under the assumption that the effect of the rest of the universe is known (this is the ordinary differential equation derived above), and the Schrödinger equation. We also have the general solutions to these, and particular solutions derived from those general solutions (these have never really been talked about, but I am quite sure that the particular solutions are what we are referring to when we talk about the locations of the elements having been defined).

The way I understand this, the particular solutions are derived from the general ones by using a list of the locations that the elements have been in, so that only the particular solutions carry any information about the locations of the elements. The way I think such a thing would be done, if it were possible, is that the list of locations, in combination with the general solution to the equation of interest, sets up a series of equations that, when solved, give the particular solution.

The thing here is that the particular solutions are of little interest, seeing as we can’t even solve for the general solutions; that the general solutions can’t be solved for has been said several times. Therefore, the only thing left is to learn more about the possible solutions in some other way, so what we are looking for are the constraints that all flaw-free explanations must obey.
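The general-versus-particular distinction can be sketched with a far simpler equation (my own illustration; nothing here is taken from the fundamental equation): the general solution of y'' = -y is y(t) = A sin t + B cos t with arbitrary constants A and B, and a short "list of locations", known values of y at two times, sets up a pair of linear equations that pin the constants down to one particular solution.

```python
import math

# General solution of y'' = -y:  y(t) = A*sin(t) + B*cos(t), A, B arbitrary.
# A "list of locations" (observed values of y at two times) gives two linear
# equations in A and B; solving them selects one particular solution.

def particular_solution(t1, y1, t2, y2):
    # Solve  A*sin(t1) + B*cos(t1) = y1
    #        A*sin(t2) + B*cos(t2) = y2   by Cramer's rule.
    det = math.sin(t1) * math.cos(t2) - math.sin(t2) * math.cos(t1)
    A = (y1 * math.cos(t2) - y2 * math.cos(t1)) / det
    B = (math.sin(t1) * y2 - math.sin(t2) * y1) / det
    return A, B

# Pretend the element was observed at y=1 when t=0 and at y=2 when t=pi/2.
A, B = particular_solution(0.0, 1.0, math.pi / 2, 2.0)
print(A, B)   # A = 2, B = 1, i.e. y(t) = 2*sin(t) + cos(t)
```

In the thread's terms, the analogue of this last step cannot be carried out, since the general solutions themselves are unavailable, which is why the discussion falls back on constraints instead.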

If you can understand this, it might at least give you a different view of what you are asking. If you can’t, I’d rather not go into much more detail, as it would likely take us considerably off topic, which I think we are already doing.