Deriving Schrödinger's Equation From My Fundamental Equation


144 replies to this topic

#1 Doctordick


    Explaining

  • Members
  • 1071 posts

Posted 13 July 2008 - 01:50 AM

This post is essentially a quote of post #194 on the thread “What can we know of reality?”

I want to make it clear to anyone who reads this that the issue is not really a solution of my fundamental equation but rather an examination of possible solutions. By definition, [math]\vec{\Psi}[/math] is a mathematical representation of our expectations. Those expectations are the result of a flaw-free explanation of reality. The explanation itself is an epistemological construct which provides a consistent and flaw-free explanation of the past. As such, I have no real interest in the actual solution or how it was achieved; my only interest is in the fact that such a solution exists: i.e., you do in fact have expectations.

There are two facts extant here: first, a function (a method of obtaining one's expectations from a given set of known elements), [math]\vec{\Psi}[/math], exists, and that function must be a solution to my fundamental equation; second, if I understand that flaw-free explanation, the method of obtaining the appropriate expectations is known to me. It is very important here to remember that [math]\vec{\Psi}[/math] is a mathematical representation of our expectations and is not necessarily a correct representation of the future. What I am trying to point out is that our expectations are never necessarily correct (see Kriminal99's post on induction); what is being enforced is that the known past is consistent with those expectations, not the future. The future is a totally unknown issue. Our only defense of our expectations is that the volume of information which goes to make up the past is far, far in excess of the next “present” (from our perspective): i.e., it would be rather ridiculous to conclude that anything in the next “present” would be sufficiently significant to be a major alteration to the net past (that being “all the information we are trying to make sense of”).

With that in mind, the equation of interest is


[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = K\frac{\partial}{\partial t}\vec{\Psi}.[/math]



This expression is quite analogous to a differential equation describing the evolution of a many body system which, as anyone competent in physics knows, is not an easy thing to solve. What we would like to do is to reduce the number of arguments to something which can be handled: i.e., we want to know the nature of the equations which must be obeyed by a subset of those variables. In the interest of accomplishing that result, my first step is to divide the problem into two sets of variables: set number one will be the set referring to our “valid” ontological elements (together with the associated tau indices) and set number two will refer to all the remaining arguments. I will refer to these sets as #1 and #2 respectively. (You should comprehend that #1 must be finite and that #2 can possibly be infinite.) Now, when we started this whole thing, I defined the probability of specific expectations to be given by the squared magnitude of [math]\vec{\Psi}[/math] under the argument that such a notation (that abstract vector) can represent absolutely any method of getting from one set of numbers to another: i.e., there exists no operation capable of yielding one's expectations which cannot be represented by such a structure.

Having divided the arguments into two sets, a competent understanding of probability should lead to acceptance of the following relationship: the probability of #1 and #2 (i.e., the expectation that these two specific sets occur together) is given by the product of two specific probabilities: [math]P_1(set\;1)[/math], the probability of set number one, times [math]P_2(set\;2\; given\;set\;1)[/math], the probability of set number two given set number one exists. The existence of set #1 in the second probability is necessary as the probability of set #2 can very much depend upon that existence. At this point, exactly the same argument used to defend [math]\vec{\Psi}[/math] as embodying a method of obtaining expectations (the probability distribution) for the entire collection of arguments can be used to assert that there must exist abstract vector functions [math]\vec{\Psi}_1[/math] and [math]\vec{\Psi}_2[/math] which will yield, respectively [math]P_1[/math] and [math]P_2[/math].
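
The factorization claimed here can be checked with a minimal numerical sketch (Python; the joint distribution below is an arbitrary hypothetical example, not anything from the derivation):

```python
# Toy joint distribution over a "set 1" outcome and a "set 2" outcome,
# illustrating P(set 1 and set 2) = P1(set 1) * P2(set 2 given set 1).
joint = {
    ("a", "x"): 0.10, ("a", "y"): 0.30,
    ("b", "x"): 0.15, ("b", "y"): 0.45,
}

# Marginal probability of the set-1 outcome: sum over all set-2 possibilities.
p1 = {}
for (s1, s2), p in joint.items():
    p1[s1] = p1.get(s1, 0.0) + p

# Conditional probability of the set-2 outcome given the set-1 outcome.
p2_given = {(s1, s2): p / p1[s1] for (s1, s2), p in joint.items()}

# The product of the two factors reconstructs the joint distribution exactly.
for (s1, s2), p in joint.items():
    assert abs(p1[s1] * p2_given[(s1, s2)] - p) < 1e-12
print("factorization holds")
```

The same decomposition works for any joint distribution, which is why the existence of [math]\vec{\Psi}_1[/math] and [math]\vec{\Psi}_2[/math] follows with no constraint at all on the underlying rule.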

It should be clear that, under these definitions (representing the argument [math](x,\tau)_i[/math] as [math]\vec{x}_i[/math]),


[math]\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots, t)=\vec{\Psi}_1(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n, t)\vec{\Psi}_2(\vec{x}_1,\vec{x}_2,\cdots, t).[/math]


Substituting this result into our fundamental equation, what we obtain can be written


[math]\left\{\sum_{set\;1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (set\;1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2 + 2\left\{ \sum_{i\in set\;1,\;j\in set\;2}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2+[/math]
[math] \left\{\sum_{set\;2} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (set\;2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2 = K\frac{\partial}{\partial t}(\vec{\Psi}_1\vec{\Psi}_2).[/math]


At this point, it is important to realize that set #2 consists of invalid ontological elements created for the purpose of constraining set #1 to what they actually were. I often used to ask the question, “how does one tell the difference between an electron and a Volkswagen?” No one except Anssi seemed to ever grasp the essence of that question. The answer is of course: “context”. In my original proof, arbitrary invalid ontological elements were added until one achieved the state where knowing the specific indices of any n-1 elements associated with a given t index would guarantee that the index of the missing element could be determined. Under this picture, set #2 is certainly context: since its members are invalid ontological elements, they can be anything so long as they are consistent with the explanation: i.e., the only requirement here is that they need to obey the fundamental equation. Thus it is that I will take the position that, if we know a flaw-free explanation, we know the method of obtaining our expectations for set #2: i.e., we know [math]\vec{\Psi}_2[/math]. If we left multiply the above equation by [math]\vec{\Psi}_2^\dagger[/math] (forming the inner or dot product with the algebraically modified [math]\vec{\Psi}_2[/math]) and integrate over the entire set of arguments referred to as set #2, we will obtain the following result:


[math]\left\{\sum_{set\;1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (set\;1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\Psi}_1 + \left\{2 \sum_{i\in set\;1,\;j\in set\;2}\int \vec{\Psi}_2^\dagger \cdot \beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 \right. +[/math]
[math] \left.\int \vec{\Psi}_2^\dagger \cdot \left[\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (set\;2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right]\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1+K \left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1[/math]


Notice that [math]\int \vec{\Psi}_2^\dagger \cdot\vec{\Psi}_2dV_2 [/math] equals unity by definition of normalization. Furthermore, the tau axis was introduced for the sole purpose of assuring that two identical indices associated with valid ontological elements existing in the same B(t) (now being represented by [math](x,\tau)[/math] points in the [math]x,\tau[/math] plane) would not be represented by the same point. We came to the conclusion that this could only be guaranteed in the continuous limit by requiring [math]\vec{\Psi}_1[/math] to be antisymmetric with regard to exchange of arguments. If that is indeed the case (as it must be) then the second term in the above equation will vanish identically, as [math]\vec{x}_i[/math] can never equal [math]\vec{x}_j[/math] for any i and j both chosen from set #1.
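
The claim that an exchange-antisymmetric function must vanish at coincident arguments can be illustrated with a minimal sketch (Python; the two factor functions are arbitrary choices of mine, made up only for the demonstration):

```python
# An exchange-antisymmetric two-argument combination necessarily vanishes at
# x1 == x2, so a delta-function term delta(x1 - x2) contributes nothing there.
import math

def g(x):
    return math.exp(-x * x)      # arbitrary single-argument factor

def h(x):
    return math.sin(x) + 2.0     # another arbitrary factor

def f(x1, x2):
    # Antisymmetrized product: f(x1, x2) = -f(x2, x1) by construction.
    return g(x1) * h(x2) - g(x2) * h(x1)

assert abs(f(0.3, 1.1) + f(1.1, 0.3)) < 1e-12   # antisymmetry under exchange
assert f(0.7, 0.7) == 0.0                        # vanishes at coincident arguments
print("antisymmetric pair vanishes at x1 == x2")
```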

If the actual function [math]\vec{\Psi}_2[/math] were known (i.e., a way of obtaining our expectations for set #2 is known), the above integrals could be explicitly done and we would obtain an equation of the form:


[math] \left\{\sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1. [/math]


The function f must be a linear weighted sum of alpha and beta operators plus one single term which does not contain such an operator. That single term arises from the final integral of the time derivative of [math]\vec{\Psi}_2[/math] on the right side of the original representation of the result of integration:


[math]\int \vec{\Psi}_2^\dagger\cdot\frac{\partial}{\partial t}\vec{\Psi}_2dV_2.[/math]


The above is an example of the kind of function the indices on our valid ontological elements must obey; however, it is still in the form of a many body equation and is of little use to us if we cannot solve it. In the interest of learning the kinds of constraints the equation implies, let us take the above procedure one step farther and search for the form of equation a single index must obey (remember the fact that we added invalid ontological elements until the index on any given element could be recovered if we had all n-1 other indices). We may immediately write [math]P_1[/math](set #1) = [math]P_0(\vec{x}_1,t)P_r[/math](remainder of set #1 given [math]\vec{x}_1[/math],t). Note that [math]\vec{x}_1[/math] can refer to any index of interest as order is of no significance. Once again, we can deduce that there exist algorithms capable of producing [math]P_0[/math] and [math]P_r[/math]; I will call these functions [math]\vec{\Psi}_0[/math] and [math]\vec{\Psi}_r[/math] respectively. It follows that [math]\vec{\Psi}_1[/math] may be written as follows:


[math]\vec{\Psi}_1(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n, t)= \vec{\Psi}_0(\vec{x}_1,t)\vec{\Psi}_r(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n, t).[/math]


If I make this substitution in the earlier equation for [math]\vec{\Psi}_1[/math], I will obtain the following relationship:


[math]\left\{\sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right\}\vec{\Psi}_0\vec{\Psi}_r = K\frac{\partial}{\partial t}(\vec{\Psi}_0\vec{\Psi}_r). [/math]


Once again I point out that [math]\vec{\Psi}_r[/math] constitutes the context for [math]\vec{\Psi}_0(\vec{x}_1,t)[/math]. Once again, I will take the position that, if we know the flaw-free explanation represented by [math]\vec{\Psi}_r[/math], we know our expectations for the set of indices two through n, set “r”: i.e., we know [math]\vec{\Psi}_r[/math] (the context). As before, if we now left multiply the above equation by [math]\vec{\Psi}_r^\dagger[/math] (forming the inner or dot product with the algebraically modified [math]\vec{\Psi}_r[/math]) and integrate over the entire set of arguments referred to as set “r” (the remainder after [math]\vec{x}_1[/math] has been specified), we will obtain the following result:


[math]\vec{\alpha}_1\cdot \vec{\nabla}_1\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0 + K\left\{\int \vec{\Psi}_r^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_r dV_r \right\}\vec{\Psi}_0. [/math]


Notice once again that [math]\int \vec{\Psi}_r^\dagger \cdot\vec{\Psi}_rdV_r [/math] equals unity by definition of normalization. Notice also that the term [math]\vec{\alpha}_1\cdot \vec{\nabla}_1[/math] appears both standing alone and inside the integral over the indices represented by the set “r”; this occurs because [math]\vec{\Psi}_r[/math] is a function of [math]\vec{x}_1[/math] and the chain rule applies to differential operation on the product function [math]\vec{\Psi}_0\vec{\Psi}_r[/math].
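
The product-rule point can be checked numerically with a minimal sketch (Python; the two functions are arbitrary stand-ins for [math]\vec{\Psi}_0[/math] and the [math]\vec{x}_1[/math] dependence of [math]\vec{\Psi}_r[/math]):

```python
# Differentiating a product psi0(x1) * psir(x1) produces one term with the
# derivative acting on each factor, which is why the alpha_1 . nabla_1 operator
# appears both standing alone and inside the integral over set "r".
import math

def psi0(x):
    return math.cos(x)           # stand-in for Psi_0

def psir(x):
    return math.exp(0.5 * x)     # stand-in for the x1-dependence of Psi_r

def dnum(fn, x, eps=1e-6):
    # central finite-difference derivative
    return (fn(x + eps) - fn(x - eps)) / (2 * eps)

x = 0.8
lhs = dnum(lambda t: psi0(t) * psir(t), x)           # derivative of the product
rhs = dnum(psi0, x) * psir(x) + psi0(x) * dnum(psir, x)   # product rule
assert abs(lhs - rhs) < 1e-6
```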

Now, this result may be a linear differential equation in one variable but it is not exactly in a form one would call “transparent”. In the interest of seeing the actual form of possible solutions, allow me to discuss an approximate solution discovered by setting three very specific constraints to be approximately valid. The first of these three is that the data point of interest, [math]\vec{x}_1[/math], is insignificant to the rest of the universe: i.e., [math]P_r[/math] is, for practical purposes, not much affected by any change in the actual form of [math]\vec{\Psi}_0[/math]: i.e., feedback from the rest of the universe due to changes in [math]\vec{\Psi}_0[/math] can be neglected. The second constraint will be that the probability distribution describing the rest of the universe is stationary in time: that would be that [math]P_r[/math] is, for practical purposes, not a function of t. If that is the case, the only form of the time dependence of [math]\vec{\Psi}_r[/math] which satisfies temporal shift symmetry is [math]e^{iS_rt}[/math].

At this point, we must carefully analyze the development of the function f created when we integrated over set #2 in our earlier example. As mentioned at the time, f was a linear weighted sum of alpha and beta operators except for one strange term introduced by the time derivative of [math]\vec{\Psi}_2[/math]. Please note that, if [math]P_r[/math] is insensitive to [math]\vec{\Psi}_0[/math] and stationary in time then so is [math]P_2[/math]. This follows directly from the fact that [math]P_2[/math] is the probability distribution of the “invalid” ontological elements required to constrain the “valid” ontological elements to what is to be explained. There is certainly no required time dependence if the set to be explained has no time dependence, nor can there be any dependence upon [math]\vec{\Psi}_0[/math] if the set “r” can be seen as uninfluenced by [math]\vec{\Psi}_0[/math]. This leads to the conclusion that


[math]K\left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2dV_2\right\}\vec{\Psi}_1=iKS_2\vec{\Psi}_1[/math]


and that the function “f” may be written [math]f=f_0 -iKS_2[/math] where [math]f_0[/math] is entirely made up of a linear weighted sum of alpha and beta operators. So long as the above constraints are approximately valid, our differential equation for [math]\vec{\Psi}_0(\vec{x}_1,t)[/math] may be written in the following form.


[math]\vec{\alpha}_1\cdot \vec{\nabla}_1\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0 + iK\left(S_2+S_r\right)\vec{\Psi}_0. [/math]


For the simple convenience of solving this differential equation, this result clearly suggests that one redefine [math]\vec{\Psi}_0[/math] via the definition [math]\vec{\Psi}_0 = e^{-i(S_2+S_r)t}\vec{\Phi}[/math] (the time derivative of this phase factor exactly cancels the [math]iK\left(S_2+S_r\right)\vec{\Psi}_0[/math] term on the right). If one further defines the integral within the curly braces to be [math]g(\vec{x}_1)[/math], [math]\vec{x}_1[/math] being the only variable not integrated over, the equation we need to solve can be written in an extremely concise form:


[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}\vec{\Phi} = K\frac{\partial}{\partial t}\vec{\Phi}, [/math]


which implies the following operational identity:


[math]\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x}) = K\frac{\partial}{\partial t}. [/math]


That is, as long as these operators are operating on the appropriate [math]\vec{\Phi}[/math] they must yield identical results. If we now multiply the original equation by the respective sides of this identity, recognizing that the multiplication of the alpha and beta operators yields either one half (for all the direct terms) or zero (for all the cross terms) and defining the resultant of [math]g(\vec{x})g(\vec{x})[/math] to be [math]\frac{1}{2}G(\vec{x})[/math] (note that all alpha and beta operators have vanished), we can write the differential equation to be solved as


[math] \nabla^2\vec{\Phi}(\vec{x},t) + G(\vec{x})\vec{\Phi}(\vec{x},t)= 2K^2\frac{\partial^2}{\partial t^2}\vec{\Phi}(\vec{x},t).[/math]


At this point we must turn to an analysis of the impact of our tau axis, a pure creation of our own imagination and not a characteristic of the actual data defining the collection of referenced elements we need to explain. Since we are interested in the implied probability distribution of x, we must (in the final analysis) integrate over the probability distribution of tau. Since tau is a complete fabrication of our imagination, the final [math]P(x,\tau,t)[/math] certainly cannot depend upon tau. It follows directly from this observation that the dependence of [math]\vec{\Phi}[/math] on tau must (at worst) be of the form [math]e^{iq\tau}[/math], so that the differential equation can be written:


[math] \left\{\frac{\partial^2}{\partial x^2} - q^2 + G(x)\right\}\vec{\Phi}(x,t)= 2K^2\frac{\partial^2}{\partial t^2}\vec{\Phi}(x,t).[/math]
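
The reduction of the tau part of the Laplacian to [math]-q^2[/math] can be verified symbolically with a short sympy sketch (the x,t dependence is left as an unspecified function):

```python
# With the tau dependence written as exp(i*q*tau), the second tau derivative
# just multiplies Phi by -q**2, collapsing the Laplacian to d^2/dx^2 - q^2.
import sympy as sp

x, tau, q, t = sp.symbols("x tau q t", real=True)
f = sp.Function("f")                       # unspecified x,t dependence
Phi = f(x, t) * sp.exp(sp.I * q * tau)

laplacian = sp.diff(Phi, x, 2) + sp.diff(Phi, tau, 2)
reduced = sp.diff(Phi, x, 2) - q**2 * Phi
assert sp.simplify(laplacian - reduced) == 0
```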


Notice that, if the term [math]q^2[/math] is moved to the right side of the equal sign, we may factor that side and obtain,


[math] \left\{\frac{\partial^2}{\partial x^2} + G(x)\right\}\vec{\Phi}(x,t)=\left\{K\sqrt{2}\frac{\partial}{\partial t}- iq\right\}\left\{K\sqrt{2}\frac{\partial}{\partial t}+iq\right\}\vec{\Phi}(x,t).[/math]
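
That the right side really does factor this way is easy to confirm with a sympy sketch, applying the two first-order factors in succession to an arbitrary function of t:

```python
# Verify the operator factoring: applying (K*sqrt(2)*d/dt - i*q) after
# (K*sqrt(2)*d/dt + i*q) reproduces 2*K**2 * d^2/dt^2 + q**2 on any Phi(t).
import sympy as sp

t, q, K = sp.symbols("t q K", real=True)
Phi = sp.Function("Phi")(t)
a = K * sp.sqrt(2)

inner = a * sp.diff(Phi, t) + sp.I * q * Phi       # (K*sqrt(2) d/dt + iq) Phi
factored = a * sp.diff(inner, t) - sp.I * q * inner  # then (K*sqrt(2) d/dt - iq)
direct = 2 * K**2 * sp.diff(Phi, t, 2) + q**2 * Phi
assert sp.simplify(factored - direct) == 0
```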


At this point, I will invoke a third approximation. I will concern myself only with cases where [math]K\sqrt{2}\frac{\partial}{\partial t}\vec{\Phi} \approx -iq\vec{\Phi}[/math] to a high degree of accuracy. In this case, the first factor on the right may be replaced by [math]-2iq[/math] and, after division by 2q, we have


[math]\left\{\frac{1}{2q}\frac{\partial^2}{\partial x^2}+\frac{1}{2q}G(x)\right\}\vec{\Phi}(x,t)= -i\left\{\sqrt{2}K \frac{\partial}{\partial t} + iq \right\}\vec{\Phi}(x,t).[/math]


Once again, the form of the equation suggests we redefine [math]\vec{\Phi}[/math] via an exponential adjustment [math]\vec{\Phi}(x,t)=\vec{\phi}(x,t)e^{\frac{-iqt}{K\sqrt{2}}}[/math], thus simplifying the differential equation by removing the final iq term. To anyone familiar with modern physics, the equation should be beginning to look very familiar. In fact, if we multiply through by [math]-\hbar c[/math] (which clearly has utterly no impact on the solution as it multiplies every term) and make the following definitions directly related to constants already defined,


[math]m=\frac{q\hbar}{c}\;,\quad c=\frac{1}{K\sqrt{2}}\quad\text{and}\quad V(x)= -\frac{\hbar c}{2q}G(x)[/math]


it turns out that the equation of interest (arrived at without the introduction of a single free parameter: every constant above is defined directly in terms of quantities already present in the derivation) is exactly one of the most fundamental equations of modern physics.


[math]\left\{-\left(\frac{\hbar^2}{2m}\right)\frac{\partial^2}{\partial x^2}+ V(x)\right\}\vec{\phi}(x,t)=i\hbar\frac{\partial}{\partial t}\vec{\phi}(x,t)[/math]


This is, in fact, exactly Schrödinger's equation in one dimension.
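
As an independent sanity check of the final form (not part of the deduction itself), a sympy sketch confirms that for V(x) = 0 a plane wave solves this equation exactly when it obeys the usual free-particle dispersion relation:

```python
# For V(x) = 0, phi = exp(i*(k*x - w*t)) satisfies
# -(hbar^2/2m) phi_xx = i*hbar*phi_t precisely when w = hbar*k**2/(2*m).
import sympy as sp

x, t, k, m, hbar = sp.symbols("x t k m hbar", positive=True)
w = hbar * k**2 / (2 * m)                  # free-particle dispersion relation
phi = sp.exp(sp.I * (k * x - w * t))

lhs = -(hbar**2 / (2 * m)) * sp.diff(phi, x, 2)   # kinetic term, V(x) = 0 here
rhs = sp.I * hbar * sp.diff(phi, t)               # energy operator side
assert sp.simplify(lhs - rhs) == 0
```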

This is a truly astounding conclusion. The fact that the probability of seeing a particular number in a stream of totally undefined numbers can be deduced to be found via Schrödinger's equation, no matter what the rule behind those numbers might be, is totally counterintuitive. It is extremely important that we check the meaning of the three constraints I placed on the problem in terms of the conclusion reached.

The first two are quite obvious. Recapping, they consisted of demanding that the data point under consideration had negligible impact on the rest of the universe and that the pattern representing the rest of the universe was approximately constant in time. These are both common approximations made when one goes to apply Schrödinger's equation: that is, we should not be surprised that these approximations made life convenient. What is important is that Schrödinger's equation is still applicable to physical situations where these constraints are considerably relaxed. In other words, the constraints are not required by Schrödinger's equation itself.

The serious question then is: what happens to my derivation when those constraints are relaxed? If one examines that derivation carefully, one will discover that the only result of these constraints was to remove the time dependent term from the linear weighted sum expressed by g(x). If this term is left in, G(x) will be complicated in three ways: first, the general representation must allow for time dependence; second, the representation must allow for terms proportional to [math]\frac{\partial}{\partial x}[/math] and, finally, the resultant V(x) will be a linear weighted sum of the alpha and beta operators.

The time dependence creates no real problems: V(x) merely becomes V(x,t). The terms proportional to [math]\frac{\partial}{\partial x}[/math] correspond to velocity dependent terms in V and, finally, retention of the alpha and beta operators essentially forces our deductive result to be a set of equations, each with its own V(x,t). All of these results are entirely consistent with Schrödinger's equation; they simply require interactions not commonly seen on the introductory level. Inclusion of these complications would only have served to obscure the fact that what was deduced was, in fact, Schrödinger's equation.

That brings us down to the final constraint, [math]K\sqrt{2}\frac{\partial}{\partial t}\vec{\Phi}\approx -iq\vec{\Phi}[/math]. If we multiply this relationship through by [math] i\hbar[/math] and divide by [math]K\sqrt{2}[/math] the definitions given for m and c above imply the constraint can be written


[math]i\hbar\frac{\partial}{\partial t}\vec{\Phi}\approx q\hbar c \vec{\Phi}= \left( \frac{q\hbar}{c}\right) c^2\vec{\Phi} = mc^2\vec{\Phi}.[/math]


The term [math]mc^2[/math] should be familiar to everyone and the left hand side, [math]i\hbar\frac{\partial}{\partial t}[/math], should be recognized as the energy operator from the standard Schrödinger representation of quantum mechanics. Putting these two facts together, it is clear that the redefinition of [math]\vec{\Phi}[/math] to [math]\vec{\phi}[/math] in the above deduction was completely analogous to adjusting the zero energy point to non-relativistic energies. This step is certainly necessary as Schrödinger's equation is well known to be a non-relativistic approximation: i.e., Schrödinger's equation is known to be false if this approximation is not valid. The central issue of the approximation was that the “non-relativistic” energies must be negligible compared to [math]mc^2[/math]. Since classical mechanics uses an "energy" reference of zero for a free entity at rest, this is exactly equivalent to "non-relativistic" phenomena.
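
The zero-point-shift reading of the phase redefinition can be made concrete with a sympy sketch (a pure phase evolution is assumed here purely for illustration):

```python
# If Phi carries total energy m*c**2 + E, the redefined
# phi = exp(i*m*c**2*t/hbar) * Phi carries only the "non-relativistic" energy E:
# the phase factor shifts the energy zero point down by m*c**2.
import sympy as sp

t, m, c, hbar, E = sp.symbols("t m c hbar E", positive=True)
Phi = sp.exp(-sp.I * (m * c**2 + E) * t / hbar)    # i*hbar dPhi/dt = (m c^2 + E) Phi
phi = sp.exp(sp.I * m * c**2 * t / hbar) * Phi     # remove the rest-energy phase

assert sp.simplify(sp.I * hbar * sp.diff(Phi, t) - (m * c**2 + E) * Phi) == 0
assert sp.simplify(sp.I * hbar * sp.diff(phi, t) - E * phi) == 0
```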

A very strange thing has happened: that the above approximation is necessary is not surprising; that it arose the way it did is rather astonishing as we have arrived at the expression [math]E=mc^2[/math] without even mentioning the concept of relativity. This certainly implies that at least some aspects of relativity seem to be embedded in the paradigm I am presenting. That will turn out to be exactly correct and will become overtly evident a few posts from here.

Meanwhile, the fact that the Schrödinger equation is an approximate solution to my equation leads me to put forth a few more definitions. Note to Buffy: there is no presumption of reality in these definitions; they are no more than definitions of abstract relationships embedded in the mathematical constraint of interest to us. That is, these definitions are entirely in terms of the mathematical representation and are thus defined for any collection of indices which constitute references to the elements the function [math]\vec{\Psi}[/math] was defined to explain.

First, I will define “the Energy Operator” as [math]i\hbar\frac{\partial}{\partial t}[/math] (and thus, the conserved quantity required by the fact of shift symmetry in the t index becomes “energy”: i.e., energy is conserved by definition). A second definition totally consistent with what has already been presented is to define the expectation value of “energy” to be given by


[math]E=i\hbar\int\vec{\Psi}^\dagger\cdot\frac{\partial}{\partial t}\vec{\Psi}dV.[/math]
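
A minimal sympy sketch of this definition (the Gaussian state below is my own arbitrary choice of a normalized example): for a state whose only time dependence is the phase [math]e^{-iEt/\hbar}[/math], the defined expectation value evaluates to exactly E.

```python
# Expectation value of "energy", E = i*hbar * integral(conj(Psi) * dPsi/dt),
# evaluated for a normalized Gaussian with pure phase evolution exp(-i*E*t/hbar).
import sympy as sp

x, t = sp.symbols("x t", real=True)
E, hbar = sp.symbols("E hbar", positive=True)
Psi = sp.pi**sp.Rational(-1, 4) * sp.exp(-x**2 / 2) * sp.exp(-sp.I * E * t / hbar)

energy = sp.I * hbar * sp.integrate(
    sp.conjugate(Psi) * sp.diff(Psi, t), (x, -sp.oo, sp.oo))
assert sp.simplify(energy - E) == 0    # the defined expectation value is E itself
```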


I am putting this forward as a definition of the expectation value of energy for the sole reason that the concept is then applicable to the various functions I have proceeded through in deducing the Schrödinger equation above. What is important here is that the energy so defined is not conserved in the approximations used above (when the individual reference indices of ontological elements are examined) but rather that, when the entire collection of indices referring to these elements is represented by the appropriate function, total energy so defined will be conserved.

In addition, the comparison with Schrödinger's equation also suggests the definition of another mathematical operator which can, via exactly the same analogy, be called "the Momentum Operator": [math]-i\hbar\frac{\partial}{\partial x}[/math] (and thus, the conserved quantity required by the fact of shift symmetry in the “x” index becomes “momentum”: i.e., the total momentum of the entire collection of references to our ontological elements will be conserved via the constraint [math]\sum\frac{\partial}{\partial x_i}\vec{\Psi}=0[/math]). Once again, a second definition totally consistent with what has already been presented is to define the expectation value of “momentum” to be given by


[math]P=-i\hbar\int\vec{\Psi}^\dagger\cdot\frac{\partial}{\partial x}\vec{\Psi}dV.[/math]
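
A companion sympy sketch for this definition (again with an arbitrary normalized Gaussian of my own choosing): a packet carrying the phase factor [math]e^{ikx}[/math] has expectation value exactly [math]\hbar k[/math].

```python
# Expectation value of "momentum", P = -i*hbar * integral(conj(Psi) * dPsi/dx),
# evaluated for a normalized Gaussian packet with phase factor exp(i*k*x).
import sympy as sp

x = sp.symbols("x", real=True)
k, hbar = sp.symbols("k hbar", positive=True)
Psi = sp.pi**sp.Rational(-1, 4) * sp.exp(sp.I * k * x) * sp.exp(-x**2 / 2)

p = -sp.I * hbar * sp.integrate(
    sp.conjugate(Psi) * sp.diff(Psi, x), (x, -sp.oo, sp.oo))
assert sp.simplify(p - hbar * k) == 0    # the defined expectation value is hbar*k
```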


Once again, this says nothing about the conservation of an individual index's “momentum”. The momentum of an individual index is a function of the actual [math]\vec{\phi}[/math] describing the expectation of the element referenced by that index. Nevertheless, it does imply that the total momentum of all the reference indices will be conserved.

Finally, I would like to introduce a third operator defended by exactly the same analysis provided above. This third operator is completely fictional as it arises from shift symmetry in the fictional axis tau. I will call this operator "the Mass Operator" and define it as [math]-i\frac{\hbar}{c}\frac{\partial}{\partial \tau}[/math]. Likewise, this leads to a second definition: the expectation value of “mass” is given by


[math]m=-i\frac{\hbar}{c}\int\vec{\Psi}^\dagger\cdot\frac{\partial}{\partial \tau}\vec{\Psi}dV.[/math]


Once again, I have managed to define a term (a mathematical operator) applicable to each and every reference index to every element in the entire collection. The relationship between reference indices implied here is a little more involved than energy and momentum. The fact that tau is a totally fictional axis requires not only shift symmetry (which yields conservation of mass when summed over the entire collection) but also yields conservation of mass on the reference index level, as nothing can actually be a function of tau in the final analysis. That is, not only do we have shift symmetry (which yields total mass as a conserved quantity) but we also have the fact that no detail of the final result can be a function of tau. This leads to the conclusion that the “mass” of individual references to valid ontological elements cannot be a function of tau.

I'll see what kinds of objections that presentation leads to before I will go on. As a comment to Buffy, this is still a completely abstract paradigm and there is utterly no implied relationship to reality. All I have done is show that there always exists a paradigm designed to yield expectations from a set of numbers which can see those numbers as elements approximately obeying Schrödinger's equation: i.e., time, position, mass, momentum and energy are all terms which can be defined for any collection of numerical indices to be analyzed. Once upon a time (back in the mid eighties) an economics professor asked me what the work I was doing had to do with economics, and I composed a paper for him showing exactly how all the above concepts could be mapped directly into economic theory. Not only that, but most all the economists already knew most of it; they already use terms like “energy” and “momentum” in their own discussions of trends and what kinds of changes one should expect. What I have shown is that these concepts can be quite well defined universal concepts applicable to any numerical analysis whatsoever.

Have fun -- Dick

#2 LaurieAG


    Explaining

  • Members
  • 1317 posts

Posted 14 July 2008 - 06:23 PM

Since we are interested in the implied probability distribution of x, we must (in the final analysis) integrate over the probability distribution of tau. Since tau is a complete fabrication of our imagination, the final [math]P(x,\tau,t)[/math] certainly cannot depend upon tau.


Hi Doctordick,

Schrödinger equation - Wikipedia, the free encyclopedia

So that not only is the probability of finding the particle the same everywhere, but the probability flux is as expected from an object moving at the classical velocity p / m.

The reason that the Schrodinger equation admits a probability flux is because all the hopping is local and forward in time.



#3 Doctordick


    Explaining

  • Members
  • 1071 posts

Posted 15 July 2008 - 06:34 AM

Hi Doctordick,

Schrödinger equation - Wikipedia, the free encyclopedia

I do not understand the purpose of your post. What are you trying to say?

Dick

#4 LaurieAG


    Explaining

  • Members
  • 1317 posts

Posted 15 July 2008 - 05:43 PM

I do not understand the purpose of your post. What are you trying to say? Dick


Hi Doctordick,

If the probability distribution of the imaginary tau is constant, just like the Schrödinger probability (without the classic velocity flux), then the core of your proof derives from this relationship.

#5 Doctordick


    Explaining

  • Members
  • 1071 posts

Posted 17 July 2008 - 04:28 AM

... then the core of your proof derives from this relationship.

I would deny that. I don't think you understand the essence of my presentation.

Sorry -- Dick

#6 Rade


    Understanding

  • Members
  • 1224 posts

Posted 17 July 2008 - 08:54 PM

I would deny that. I don't think you understand the essence of my presentation. Sorry -- Dick

Not good enough answer. Please explain why you "deny" LaurieAG presentation.

#7 Doctordick


    Explaining

  • Members
  • 1071 posts

Posted 27 July 2008 - 06:11 AM

Not good enough answer. Please explain why you "deny" LaurieAG presentation.

Very simple: there is nothing in my derivation which says anything about “a probability flux” or any “hopping”. The standard defense of Schrödinger's equation bears utterly no resemblance to my analytical derivation. LaurieAG's comment is simply not true.

To Anssi on his difficulties. Let me take another tack on this issue as it seems I am failing to communicate some rather simple issues. We started with my deduced fundamental equation:


[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = K\frac{\partial}{\partial t}\vec{\Psi}.[/math]



I then proposed that I could divide the underlying data (the numerical references to the unknown ontological elements) into two sets (set #1 and set #2). I could then assert that the probability of any given distribution, P (set #1 and set #2) would be equal to the probability of set #1 times the probability of set #2 (under the constraint that set #1 was given):


[math]P(set \; \# 1 \; and \; set \; \# 2) = P(set \; \# 1)P(set \; \# 2, \; given \; set \; \# 1)[/math]
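This factorization is just the chain rule of probability. A minimal sanity check of that rule (a toy discrete joint distribution with invented numbers, standing in for the two sets of data):

```python
# Chain rule of probability: P(A and B) = P(A) * P(B | A).
# Hypothetical joint distribution over two binary "sets" of data
# (all probabilities invented purely for illustration).
joint = {
    (0, 0): 0.10, (0, 1): 0.30,
    (1, 0): 0.40, (1, 1): 0.20,
}

def p_a(a):
    """Marginal probability that set #1 takes value a."""
    return sum(p for (x, _), p in joint.items() if x == a)

def p_b_given_a(b, a):
    """Conditional probability of set #2 given set #1."""
    return joint[(a, b)] / p_a(a)

# The joint factorizes exactly as P(set #1) * P(set #2, given set #1).
for (a, b), p in joint.items():
    assert abs(p - p_a(a) * p_b_given_a(b, a)) < 1e-12
```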


Purely from the definition of probability, I know that these two probabilities can be individually represented by a scalar product of some vector function of the specific arguments. It follows directly from this fact that I can represent [imath]\vec{\Psi}[/imath] with the expression:


[math]\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots, t)=\vec{\Psi}_1(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n, t)\vec{\Psi}_2(\vec{x}_1,\vec{x}_2,\cdots, t),[/math]


where the arguments [imath](x,\tau)_i[/imath] are represented by the expression [imath]\vec{x}_i[/imath]. If we make this substitution in the original equation above, we have


[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi}_1 \vec{\Psi}_2 = K\frac{\partial}{\partial t}\vec{\Psi}_1 \vec{\Psi}_2.[/math]



Because we want to find the equation obeyed by the numerical references defined as set #1 for the case where set #2 is allowed to be any and all possibilities, we need this equation summed over all possibilities for set #2. We accomplish that by left-multiplying by [imath]\vec{\Psi}_2^\dagger\cdot[/imath] and integrating over all arguments from set #2. We multiply by [imath]\vec{\Psi}_2^\dagger\cdot[/imath] because we know the result of [imath]\vec{\Psi}_2^\dagger\cdot\vec{\Psi}_2[/imath] ([imath]\vec{\Psi}_2[/imath] is defined by the fact that this dot product is the probability of having set #2), and we integrate over all the arguments from set #2 because that act removes those arguments from the equation (the integral constitutes the sum, over all possibilities, of the probability of any specific set #2).

I need to make a few minor comments about those acts. First, we left-multiply for a very simple reason. If we were dealing with simple numbers only, direction would be of no consequence; however, we are, in this case, dealing with abstract mathematical operators. Most mathematical operators are defined by what they do to the expressions on their right, not by what they do to the expressions on their left (the definitions simply presume they do not operate to the left).

The central issue of algebra is that, if we have a valid equation and do exactly the same thing to both sides of it, the equation is still valid. This does not require that the equation contain only numbers (as high-school algebra is presented); it can contain any kind of properly defined operators. The point is that the equation is still valid after we left-multiply by [imath]\vec{\Psi}_2^\dagger\cdot[/imath]. This would not be true if we “right-multiplied” by this function: placing the function on the right would mean that the operators defined in the original equation would operate on it, and, since the operators on the opposite sides of the equal sign might not be the same, the result (which is what the equal sign is referring to) might not be the same. We would thus not be doing the same thing to both sides of the equation, and, as an algebraic act, the step would be invalid.

The next step is to remove the arguments of set #2 from the equation. To accomplish this, we integrate both sides of the equation over the arguments of set #2. The resulting equation is explicitly


[math]\int\vec{\Psi}_2^\dagger \cdot \left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi}_1 \vec{\Psi}_2 dV_2=\int \vec{\Psi}_2^\dagger \cdot K\frac{\partial}{\partial t}\vec{\Psi}_1 \vec{\Psi}_2 dV_2.[/math]



I am going to presume that, up to this point, everything is perfectly clear to you. The next step is to simplify that expression into a form which is inherently more meaningful to us, mostly by factoring out terms which need not be under the integral sign, so that we can understand what has to be done to accomplish the relevant integrals. The first thing to recognize is that the integral of a sum is exactly equal to the sum of the integrals of its terms. That suggests it would be very valuable to expand the sums over i and j into separate sums over indices taken from the two defined sets (set #1 and set #2). Notice that the first term of the above equation (the term containing the differential operator [imath]\vec{\nabla}[/imath]) can be written as a sum of two terms:


[math]\int\vec{\Psi}_2^\dagger \cdot \left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i \right\}\vec{\Psi}_1 \vec{\Psi}_2 dV_2=\int\vec{\Psi}_2^\dagger \cdot \left\{\sum_{i = \# 1} \vec{\alpha}_i \cdot \vec{\nabla}_i \right\}\vec{\Psi}_1 \vec{\Psi}_2 dV_2 +\int\vec{\Psi}_2^\dagger \cdot \left\{\sum_{i=\#2}\vec{\alpha}_i \cdot \vec{\nabla}_i \right\}\vec{\Psi}_1 \vec{\Psi}_2 dV_2.[/math]



The first of these two terms can again be divided into two terms, because the differential (which arises from [imath]\vec{\nabla}_i[/imath]) must operate on both functions (recall that [imath]\vec{\Psi}_2[/imath] is also a function of set #1) and we therefore need the differential of a product (essentially [imath]\frac{d}{dx}A(x)B(x)=B(x)\frac{d}{dx}A(x)+A(x)\frac{d}{dx}B(x)[/imath]). Writing out the first term from above explicitly, we have


[math]\int\vec{\Psi}_2^\dagger \cdot \left\{\vec{\Psi}_2 \sum_{i=\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_1 + \vec{\Psi}_1 \sum_{i=\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_2\right\}dV_2.[/math]



Notice that I have commuted [imath]\vec{\Psi}_2[/imath] to the left. I have done this to avoid the ambiguity in representation which would arise if I left the order the same. Now, let us look at the first term of the latest expression. The sum is taken over set #1, and [imath]\vec{\Psi}_1[/imath] is not a function of set #2. It follows that these terms have exactly the same value for all possible selections from set #2, which means they may be factored out of the integral. Thus, the first term of this latest expression (the latest “first term”) becomes:


[math]\int\vec{\Psi}_2^\dagger \cdot \vec{\Psi}_2 dV_2\left\{\sum_{i=\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i \right\}\vec{\Psi}_1.[/math]



The integral indicated there is quite obviously equal to unity: i.e., it is exactly the total probability of all possibilities for set #2. So, let us lay aside the first term of our finished expression as:


[math]\left\{\sum_{i=\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i \right\}\vec{\Psi}_1.[/math]



Mark this as term #1.
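Two facts were used to reach term #1: a factor that does not depend on the set #2 arguments can be pulled out of the integral, and the integral of [imath]\vec{\Psi}_2^\dagger\cdot\vec{\Psi}_2[/imath] over all of set #2 is unity. A discrete sketch of both facts (the distribution and the constant are invented for illustration):

```python
# Discrete stand-in for the integral of Psi_2-dagger . Psi_2 over dV_2:
# p2 is a normalized probability distribution over all "set #2" possibilities.
p2 = [0.1, 0.2, 0.3, 0.4]

# A factor that does not depend on the set-#2 index (analog of the
# expression built from Psi_1 and the set-#1 operators).
c = 7.5

# sum_k c * p2[k]  ==  c * sum_k p2[k]  ==  c * 1  ==  c
lhs = sum(c * p for p in p2)
assert abs(lhs - c) < 1e-12
```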

We now need to work backwards and examine the remaining terms in detail. In order to be sure we have not omitted a term, let us next look at the second term of that “latest expression”.


[math]\int\vec{\Psi}_2^\dagger \cdot \left\{ \vec{\Psi}_1 \sum_{i=\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_2\right\}dV_2.[/math]



The only thing we can do with this term is to point out that [imath]\vec{\Psi}_1[/imath] is not a function of set #2 (thus it can be factored from the integral) and point out that the integral of a sum is identical to the sum of the integrals. The resulting expression is:


[math]\vec{\Psi}_1\int\vec{\Psi}_2^\dagger \cdot \left\{\sum_{i=\#1}\vec{\alpha}_i \cdot \vec{\nabla}_i \right\} \vec{\Psi}_2dV_2.[/math]



We will lay this term aside to be picked up later. You can mark this as Term A.

Now, let us go back to the next previous term. That would be the remaining term with a differential operator (the second term of the original pair containing the [imath]\vec{\nabla}[/imath] operator):


[math]\int\vec{\Psi}_2^\dagger \cdot \left\{\sum_{i=\#2}\vec{\alpha}_i \cdot \vec{\nabla}_i \right\}\vec{\Psi}_1 \vec{\Psi}_2 dV_2.[/math]


Again, the only thing we can do here is to point out that [imath]\vec{\Psi}_1[/imath] is not a function of set #2. Thus the [imath]\vec{\nabla}[/imath] operator (which the sum has constrained to set #2) does not yield any differentials of [imath]\vec{\Psi}_1[/imath] (the product rule does not apply because there is no differentiable product). Once again [imath]\vec{\Psi}_1[/imath] can be factored from the integral and the sum. The result can then be written:


[math]\int\vec{\Psi}_2^\dagger \cdot \left\{\sum_{i=\#2}\vec{\alpha}_i \cdot \vec{\nabla}_i \right\} \vec{\Psi}_2 dV_2\vec{\Psi}_1.[/math]


The only differences between this term and term A expressed earlier are the range of the sum and the side of the expression on which [imath]\vec{\Psi}_1[/imath] appears. If one uses parentheses to indicate that whatever is inside them is evaluated first (thus, even when the sum over the [imath]\vec{\nabla}[/imath] operator includes set #1, it still does not operate on [imath]\vec{\Psi}_1[/imath], as that function is outside the parentheses), we can add these two terms together and obtain the expression:


[math]\left\{\int\vec{\Psi}_2^\dagger \cdot \sum_i\vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_2 dV_2\right\}\vec{\Psi}_1.[/math]


Mark this as term #2. As I have said several times, this representation is slightly ambiguous, but the intention is clear from context because of the integral (what is contained within the parentheses takes precedence over any other operations). The differential operator does not operate on [imath]\vec{\Psi}_1[/imath] because what is in the parentheses evaluates to an ordinary algebraic expression and does not end up being a differential operator.

That brings us down to the portion of our original equation which involves the Dirac delta functions. One nice feature is that commutation brings no difficulties here: the beta operators appear only once, and they commute with all the other functions, which are ordinary mathematical functions. The only issue of importance is the range of the sums on i and j. We have only three possibilities: i and j are both chosen from set #1, both from set #2, or from opposite sets. Let us examine the opposite-sets case first. The expression to be evaluated is,


[math]\int\vec{\Psi}_2^\dagger \cdot \left\{ \sum_{i=\# 1 j=\#2}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) +\sum_{i=\# 2 j=\#1}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi}_1 \vec{\Psi}_2 dV_2[/math]


Since the Dirac delta function yields a nonzero result only when its argument vanishes, the two sums shown above are exactly equal (i and j are merely interchanged) and the result is exactly twice the result of evaluating either sum by itself. Put this together with the fact that [imath]\vec{\Psi}_1[/imath] can be factored from the integral and it should be clear that this term can be written:


[math]\left\{2 \sum_{i=\#1 j=\#2}\int \vec{\Psi}_2^\dagger \cdot \beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1[/math]


which we can mark as term #3.
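The factor of 2 in term #3 rests on the symmetry of the delta function's argument: the cross-set sum with i from set #1 and j from set #2 equals the sum with the roles inverted. A discrete check, with a Kronecker delta standing in for the Dirac delta and invented integer "positions":

```python
# Kronecker delta as a discrete stand-in for the Dirac delta.
def delta(a, b):
    return 1.0 if a == b else 0.0

set1 = [1, 2, 3, 2]   # toy "positions" for set #1 (invented)
set2 = [2, 3, 3, 5]   # toy "positions" for set #2 (invented)

# i from set #1, j from set #2 ...
s12 = sum(delta(xi, xj) for xi in set1 for xj in set2)
# ... and with the roles of i and j inverted.
s21 = sum(delta(xj, xi) for xj in set2 for xi in set1)

assert s12 == s21            # the two cross-set sums are identical,
total = s12 + s21            # so their total is twice either one.
assert total == 2 * s12
```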

The case where i and j are both taken from set #1 evaluates to a rather simple expression, as everything, including the Dirac delta function, can be factored from the integral (the only expressions which are functions of set #2 are [imath]\vec{\Psi}_2^\dagger[/imath] and [imath]\vec{\Psi}_2[/imath]). The result can be written,


[math] \left\{ \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\int \vec{\Psi}_2^\dagger \cdot \vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1.[/math]


Since the integral obviously evaluates to unity (it is again the sum over the probability of all possibilities for set #2) that expression may be removed and one has,


[math] \left\{ \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1.[/math]


Add this expression to term #1 and one has:


[math]\left\{\sum_{i=\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1.[/math]



The final expression to evaluate is the case where i and j are both taken from set #2. In that case, all that can be said is that [imath]\vec{\Psi}_1[/imath] may be factored out of both the integral and the sum (it depends entirely upon set #1 and does not change for any such pair i and j or for any change in the arguments of set #2). Thus we obtain:


[math] \left\{\int \vec{\Psi}_2^\dagger \cdot \left[ \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right]\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1.[/math]


This term can be added to term #2. The result can be written as


[math]\left\{\int\vec{\Psi}_2^\dagger \cdot \left[\sum_i\vec{\alpha}_i \cdot \vec{\nabla}_i +\sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right]\vec{\Psi}_2 dV_2\right\}\vec{\Psi}_1.[/math]


This completes the evaluation of the left hand side of our algebraically altered fundamental equation:


[math]\left\{\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\Psi}_1 + \left\{2 \sum_{i=\#1 j=\#2}\int \vec{\Psi}_2^\dagger \cdot \beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 \right. +[/math]
[math] \left.\int \vec{\Psi}_2^\dagger \cdot \left[\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right]\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1 [/math]


All that remains is to evaluate the right hand side of the equal sign, which was, explicitly,


[math]\int \vec{\Psi}_2^\dagger \cdot K\frac{\partial}{\partial t}\vec{\Psi}_1 \vec{\Psi}_2 dV_2.[/math]



The only problem here is that we must use the product rule to evaluate the differential with respect to t.


[math]\frac{\partial}{\partial t}\vec{\Psi}_1 \vec{\Psi}_2 = \vec{\Psi}_2 \frac{\partial}{\partial t}\vec{\Psi}_1 +\vec{\Psi}_1\frac{\partial}{\partial t}\vec{\Psi}_2[/math]
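That product rule can be sanity-checked numerically with any two smooth functions (the functions below are arbitrary stand-ins, not anything from the derivation):

```python
import math

# Arbitrary smooth stand-ins for Psi_1(t) and Psi_2(t).
f = lambda t: math.sin(t)
g = lambda t: math.exp(-0.5 * t)

def ddt(h, t, eps=1e-6):
    """Central-difference numerical derivative of h at t."""
    return (h(t + eps) - h(t - eps)) / (2 * eps)

t = 0.8
lhs = ddt(lambda s: f(s) * g(s), t)          # d/dt [f g]
rhs = g(t) * ddt(f, t) + f(t) * ddt(g, t)    # g f' + f g'
assert abs(lhs - rhs) < 1e-6                  # product rule holds numerically
```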


Thus, noting again that [imath]\vec{\Psi}_1[/imath] (and its differential with respect to t) can be factored from the integral, the right hand side of the equal sign becomes,


[math]\left\{\int \vec{\Psi}_2^\dagger \cdot \vec{\Psi}_2 dV_2\right\} K \frac{\partial}{\partial t}\vec{\Psi}_1 +K\left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1[/math]


where again I have used the parentheses to indicate that everything inside them is to be evaluated first: i.e., the second term does not imply any differentiation of [imath]\vec{\Psi}_1[/imath]. Of course the integral in the left hand term once again evaluates to unity for the same reasons given earlier. The final result for the right side of the equal sign is,


[math]K \frac{\partial}{\partial t}\vec{\Psi}_1 +K\left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1[/math]


The final result is exactly what is shown in the first post of this thread.


[math]\left\{\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\Psi}_1 + \left\{2 \sum_{i=\#1 j=\#2}\int \vec{\Psi}_2^\dagger \cdot \beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 \right. +[/math]
[math] \left.\int \vec{\Psi}_2^\dagger \cdot \left[\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right]\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1+K \left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1[/math]

I hope you don't find that too tricky to figure out.

Glad we are not in a particular hurry :)

So am I. I also hope you appreciate the work it took to write and debug all that LaTeX code I just composed. :eek_big:

Have fun -- Dick

#8 AnssiH


    Understanding

  • Members
  • 790 posts

Posted 31 July 2008 - 06:41 AM

Doh, sorry it took me a while to reply. I was not subscribed to this thread and instead kept checking your post history to see when you post your reply... I expected to see it posted after your note to the old thread... ...and didn't notice at all you'd already posted it before :D

I also hope you appreciate the work it took to write and debug all that LaTeX code I just composed. :eek_big:


Certainly. I find it incredibly time consuming to type in any LaTeX code since there's no good way to preview it. It would probably be possible to implement the post editor so that it displays the resulting LaTeX render in real time as you type the code... ...but I guess that would make life too easy.

Anyhow, it was all very helpful, and I just spent the time to walk through it carefully in little baby steps, and was able to follow just about all of it. I would still call my understanding of all that algebra very superficial, as I would not be able to perform those steps myself, although they seem to make perfect sense following your presentation.

Just a couple of questions:


[math]\left\{\int\vec{\Psi}_2^\dagger \cdot \sum_i\vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_2 dV_2\right\}\vec{\Psi}_1.[/math]


Mark this as term #2. As I have said several times, this representation is slightly ambiguous, but the intention is clear from context because of the integral (what is contained within the parentheses takes precedence over any other operations). The differential operator does not operate on [imath]\vec{\Psi}_1[/imath] because what is in the parentheses evaluates to an ordinary algebraic expression and does not end up being a differential operator.


I am a bit uncertain what you mean by "what is in the parentheses evaluates to an ordinary algebraic expression and does not end up being a differential operator". Do you just mean that if one were to expand that sum into its explicit components, the differential operator would not be there to create any ambiguity? (Just that, when doing so, one would have to know explicitly, from the context, that the differential operator is not meant to operate on [imath]\vec{\Psi}_1[/imath]; otherwise the expanded expression would turn out wrong?)

Also I am a bit shaky about the term [imath]dV_2[/imath] appearing in places. I remember asking about its meaning before, and it had something to do with having to consider how large a region of possibilities we are considering... Still, I am somewhat puzzled about what it is doing there, wherever I see it appearing...

Anyway, thank you for the detailed explanation, I can continue from here soon...

-Anssi

#9 Doctordick


    Explaining

  • Members
  • 1071 posts

Posted 03 August 2008 - 10:19 AM

Don't worry about the slow response. I have had a lot of things to do lately and I haven't had much time for the forum either.

It would probably be possible to implement the post editor so that it displays the resulting LaTeX render in real time as you type the code... ...but I guess that would make life too easy.

From the way LaTeX interpretation is being done now, a real-time render would be impossible, as many LaTeX controls require the closing control to exist before they can be rendered (for example, the variable-size parentheses). One would have to establish a default result for any and all such cases in order to define that real-time render. I would not expect such a thing soon.

I am a bit uncertain what you mean by "what is in the parentheses evaluates to an ordinary algebraic expression and does not end up being a differential operator".

As I said, the expression can be taken as ambiguous as to exactly what the differential operator is to operate on; however, things within parentheses are always to be totally evaluated first, so I think it reasonable to insist that the [imath]\vec{\nabla}_i[/imath], being enclosed in the parentheses, does not operate on [imath]\vec{\Psi}_1[/imath], which is outside them. And, yes, if you let [imath]\vec{\nabla}_i[/imath] operate on [imath]\vec{\Psi}_1[/imath] the result would be wrong.

Also I am a bit shaky about the term [imath]dV_2[/imath] appearing in places. I remember asking about its meaning before, and it had something to do with having to consider how large a region of possibilities we are considering... Still, I am somewhat puzzled about what it is doing there, wherever I see it appearing...

As I explained to you somewhere earlier, that integral sign, [imath]\int[/imath], was originally a big “S” standing for a “sum”. The number of terms in the sum is allowed to go to infinity, and the terms being summed (which are defined by some function) must individually be brought to zero as their number goes to infinity (otherwise the result would be infinite). For that reason, the terms in the sum are weighted by the “differential” (the variable being integrated over, preceded by the letter “d”). The standard way of writing an integral is as follows:


[math] A=\int_a^bf(x)dx[/math]



An integral over many variables would normally be written.


[math] A=\int_{a_1}^{b_1}\int_{a_2}^{b_2} \cdots \int_{a_n}^{b_n} f(x_1,x_2, \cdots, x_n)dx_1dx_2 \cdots dx_n[/math]



but that just gets too complex when you are dealing with the number of arguments I am working with, so I move to somewhat of a shorthand. I omit the limits on the integration (as the limits in all the cases of interest here are over all possible values: i.e., [imath]a=-\infty[/imath] and [imath]b=+\infty[/imath]) and, instead of all those “d”s, I just use dV, where “V” stands for volume: I am integrating over the entire abstract volume expressed by the entire collection of arguments. In the case you are pointing to, [imath]dV_2[/imath] refers to the entire abstract volume expressed by the numerical references called “set #2”. Finally, in recognition of the differential abstract volume so represented, I put only one integral sign in the expression (essentially integrating over one variable, that one variable being the abstract volume referred to by [imath]dV_2[/imath]). It just gets rid of a lot of algebraic expressions which really don't need to be there, as they add a lot of repetition without actually adding to the clarity of the intended meaning. (That's my opinion, anyway.)
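The dV shorthand can be read as a single sum over cells of the abstract volume. A minimal two-variable sketch showing that the nested "dx1 dx2" form and the single "dV" form agree (finite limits and an invented integrand, purely for illustration; in the text the limits are infinite and there are many more arguments):

```python
# Integrate f(x, y) over a 2-D region two ways: as nested 1-D Riemann
# sums (the "dx1 dx2" form) and as one sum over volume cells (the "dV" form).
def f(x, y):
    return x * y + 1.0   # arbitrary integrand, invented for illustration

a, b, n = 0.0, 1.0, 200
h = (b - a) / n
xs = [a + (i + 0.5) * h for i in range(n)]   # midpoint grid

# Nested form: sum over x of (sum over y of f(x, y) dy) dx
nested = sum(sum(f(x, y) * h for y in xs) * h for x in xs)

# "dV" form: one sum over every cell, each weighted by dV = h * h
dV = h * h
single = sum(f(x, y) * dV for x in xs for y in xs)

assert abs(nested - single) < 1e-9   # the two forms are the same sum
```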

Actually, the standard definition of the integral operation has some strong similarities to parentheses. The “integral operator” (and it can be seen as a mathematical operator) consists of two symbols: the integral sign, [imath]\int[/imath] (with appropriate limits), and the differential “dx”, which indicates the argument to be integrated over. These two symbols are placed respectively to the left and to the right of the function to be integrated. The interpretation is that the implied operation is to be completely performed as a unit, uninfluenced by what is outside those symbols. For example, were I to write [imath]\int dx f(x)[/imath], the standard interpretation would be that the answer is xf(x): i.e., f(x) is not to be integrated over, as it is not “inside the integral”.

Oh, just one final note. The normal meaning of an integral sign with no limits is that one means to indicate what is called the “indefinite integral”. The “definite” integral has expressed limits and is defined to be the difference between the indefinite integral evaluated at the upper limit and at the lower limit. My integrals are actually definite integrals, as the limits are clear (I say they are integrated over all possibilities quite a bit), though, as written, they could be interpreted to be indefinite integrals. Again, I do this for notational simplicity: letting the reader know I mean these to be integrated over all possibilities is much more convenient than using LaTeX to write in the limits everywhere.
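The relation between the two kinds of integral can be made concrete: the definite integral is the indefinite integral (an antiderivative) evaluated at the upper limit minus its value at the lower limit. A small numeric check with the arbitrary choice f(x) = x²:

```python
# Definite integral of f over [a, b] as a midpoint Riemann sum, compared
# with F(b) - F(a), where F is an indefinite integral (antiderivative) of f.
def f(x):
    return x * x

def F(x):                 # an antiderivative of f
    return x ** 3 / 3.0

a, b, n = -1.0, 2.0, 100000
h = (b - a) / n
riemann = sum(f(a + (i + 0.5) * h) * h for i in range(n))

assert abs(riemann - (F(b) - F(a))) < 1e-6   # definite = F(b) - F(a)
```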

I hope I have cleared a few things up.

Have fun -- Dick

#10 LaurieAG


    Explaining

  • Members
  • 1317 posts

Posted 04 August 2008 - 12:39 AM

Hi Doctordick,

Very simple: there is nothing in my derivation which says anything about “a probability flux” or any “hopping”. The standard defense of Schroedinger's equation bears no resemblance whatsoever to my analytical derivation. LaurieAG's comment is simply not true.


I just look at common factors and divergences, forwards and backwards.

If your integral naturally mimics Schroedinger's 'probability flux' integral then wouldn't you expect equivalence?

#11 Doctordick


    Explaining

  • Members
  • 1071 posts

Posted 04 August 2008 - 02:12 PM

If your integral naturally mimics Schroedinger's 'probability flux' integral then wouldn't you expect equivalence?

I simply do not follow the purpose of your post. The issue here is that I am talking about derivation, not equivalence. I have provided a derivation of a required relationship from first principles, which is not at all what was behind the original proposition of Schroedinger's equation. Neither Schroedinger nor the physics community has ever suggested that there was no possibility his equation was wrong. My presentation is a pure logical deduction and, without an error in the logic, there cannot be an error in the conclusion.

The fact that Schroedinger's equation is an approximate solution to my equation should be taken as a rather astounding observation, as it means that Schroedinger's equation is correct so long as the approximations presumed are correct. As such, it is true by definition, and I doubt that you will find anyone in the physics community who will accede to that fact. They all consider it to be verified by experiment, not by definition.

Have fun -- Dick

#12 LaurieAG


    Explaining

  • Members
  • 1317 posts

Posted 04 August 2008 - 05:07 PM

The fact that Schroedinger's equation is an approximate solution to my equation should be taken as a rather astounding observation, as it means that Schroedinger's equation is correct so long as the approximations presumed are correct. As such, it is true by definition, and I doubt that you will find anyone in the physics community who will accede to that fact. They all consider it to be verified by experiment, not by definition.


Hi Doctordick,

I am not questioning Schroedinger's equation; I am questioning the hidden determinism behind your proof.

At this point, it is important to realize that set #2 consists of invalid ontological elements created for the purpose of constraining set #1 to what they actually were.
...
Under this picture, set #2 is certainly context since, as they are invalid ontological elements, they can be anything so long as they are consistent with the explanation: i.e., the only requirement here is that they need to obey the fundamental equation.


BTW, to put things another way, if your set #2 is equivalent to Heisenberg's 'uncertainty' distribution then the set #1 you discard must be the deterministic equivalent of a 'certainty' distribution.

#13 Doctordick


    Explaining

  • Members
  • 1071 posts

Posted 04 August 2008 - 05:20 PM

I am not questioning Schroedinger's equation; I am questioning the hidden determinism behind your proof.

There is no determinism behind my proof. The determinism is entirely in the epistemological construct.

BTW, to put things another way, if your set #2 is equivalent to Heisenberg's 'uncertainty' distribution then the set #1 you discard must be the deterministic equivalent of a 'certainty' distribution.

Neither set #1 nor set #2 is "discarded". I don't believe you have any idea as to what I am doing.

Have fun -- Dick

#14 LaurieAG


    Explaining

  • Members
  • 1317 posts

Posted 04 August 2008 - 06:12 PM

There is no determinism behind my proof. The determinism is entirely in the epistemological construct.


Under this picture, set #2 is certainly context since, as they are invalid ontological elements, they can be anything so long as they are consistent with the explanation: i.e., the only requirement here is that they need to obey the fundamental equation.


Deterministic
adjective
an inevitable consequence of antecedent sufficient causes

#15 Doctordick


    Explaining

  • Members
  • 1071 posts

Posted 05 August 2008 - 07:26 AM

Deterministic
adjective
an inevitable consequence of antecedent sufficient causes

As I said, you are apparently utterly blind to what I am doing. Do you understand this sentence, "the determinism is entirely in the epistemological construct"? Please don't bother me until you at least have some understanding of the constraints on the proof.

No one seems to comprehend that what I have done is to set up a logical structure designed to provide a representation of any and all epistemological constructs such that they are guaranteed to be flaw free with regard to what is known (what is known being undefined).



#16 LaurieAG


    Explaining

  • Members
  • 1317 posts

Posted 05 August 2008 - 04:58 PM

As I said, you are apparently utterly blind to what I am doing. Do you understand this sentence, "the determinism is entirely in the epistemological construct"?


Hi Doctordick,

When one half of your proof is based on a constrained deterministic epistemological construct, then the remainder of your proof is deterministic, because you can never get to your stated conclusions with any other set.

they can be anything so long as they are consistent with the explanation: i.e., the only requirement here is that they need to obey the fundamental equation.


If you could make your proof work without the above 'constraints' then you might have something; otherwise you are just proving that 1=1.

#17 Doctordick


    Explaining

  • Members
  • 1071 posts

Posted 05 August 2008 - 09:13 PM

... you are just proving that 1=1.

That is exactly what I have been saying from the word go! Maybe you have managed to figure it out; but somehow I doubt it.

Have a ball -- Dick