Science Forums

Is 'time' a measurable variable?



Hence the equation is not conserved, because of time reversal on the right-hand side due to the presence of the imaginary "i"; except if you put in an anti-Hermitian time operator, [math] \hat{T}^\dagger=-\hat{T} [/math], which restores the time sign on the right-hand side.
But there is no need for the equation to be "conserved" under conjugation of [math]\psi(x,t)[/math]; actually it is quite correct that the sign changes that way between the time evolution of the one and of the other.

He certainly has, like all natural philosophers before 1905, considering it "a perfectly natural thing to suppose" in the absence of any contrary reason.
I guess I just have a higher opinion of Newton's intellect and his willingness to question things than you do. ;)

 

I would have asked him physically how two ideal clocks were to be set to be identical in a specific inertial rest frame. After he explained that, I would ask him how two ideal clocks were to be set to be identical in an inertial frame moving with respect to the first frame. All one need do at that point is show that they would be set differently. Newton knew the speed of light was finite and I think he would have been bright enough to recognize the problem.

 

Have fun -- Dick


  • 2 weeks later...

from the first post:

general theory of relativity predicts, and atomic clocks have confirmed, that clocks at higher elevations run slightly faster than do those closer to the ground. Given the current accuracy of clocks, this gravitational effect requires that researchers know the altitude of timekeeping laboratories to within a few meters. Ultimately, altitudes would have to be measured to within a centimeter.
Were the clocks in a container that would not be affected by atmospheric pressure? Also, are other standards affected as well, such as length?
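As a rough check of the quoted altitude sensitivity (a back-of-the-envelope sketch only; the numbers are generic, not those of any particular laboratory): in the weak-field approximation the fractional rate difference between two clocks separated by a height Δh is about gΔh/c², roughly one part in 10^16 per metre, or one part in 10^18 per centimetre.

[code]
# Back-of-the-envelope gravitational time dilation per unit height.
# Assumes the weak-field formula d(rate)/rate ~ g*dh/c^2; values are illustrative.
g = 9.81         # m/s^2, surface gravity
c = 2.998e8      # m/s, speed of light
for dh in (1.0, 0.01):              # one metre and one centimetre
    print(f"dh = {dh:5.2f} m -> fractional rate shift ~ {g * dh / c**2:.2e}")
# ~1.1e-16 per metre and ~1.1e-18 per centimetre, so centimetre-level altitude
# knowledge matters once clock accuracy reaches ~1e-18.
[/code]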

As someone who can only claim a very basic understanding of the implications of both relativity and quantum mechanics, I just wanted to thank everyone who has posted in this thread. It has really caused me to think long and hard about some of the issues raised.

 

DoctorDick, I haven't read your paper yet as my head hurts just from making my way through this thread. :cup: But I will try to go through it sometime soon.

 

Some of the rebuttals of both Erasmus000 and Qfwfq have also been illuminating.

 

There are obviously some people here with much stronger heads than mine!:)


DoctorDick:

I guess I just have a higher opinion of Newton's intellect and his willingness to question things than you do.
And DD, if I had the power of a God, I'd bring people at Newton's level into this discussion so you'd have someone capable of challenging your abilities. If there are people here who are at that level ... and that's a big IF ... they are too busy holding on to their own worldview to consider someone else's. That, or they're playing to an audience that doesn't exist to gain .... what?

You know, one problem that you present here is that you require folks to lift their heads UP. Many are not used to doing that. :)


DoctorDick:And DD, if I had the power of a God, I'd bring people at Newton's level into this discussion so you'd have someone capable of challenging your abilities. If there are people here who are at that level ... and that's a big IF ... they are too busy holding on to their own worldview to consider someone else's. That, or they're playing to an audience that doesn't exist to gain .... what?

You know, one problem that you present here is that you require folks to lift their heads UP. Many are not used to doing that. :)

Hey, I thought you were going to be gone for three weeks. You must be doing the same thing I do: when out of town, I take advantage of any opportunity to see who has posted to these forums. So you read these things even when you aren't at home. I will give you something to think about while you are away. I am quite sure you won't be able to follow all this but get your son to explain it to you. You might develop a little interest (one hopes anyway).

 

While you were gone, I spent a little time thinking about the issues I am trying to confront here. It seems to me that the central point here is that my equation constitutes a constraint on any possible explanation of anything and, as such, any competent scientist should be interested in the range of possibilities eliminated or allowed by such a constraint. That knowledge can only flow from examination of the solutions. Now the finding of solutions is far from a trivial problem. The value of those solutions flows from the fact that they are absolutely valid: i.e., nothing being presented by me is "theoretical", it's all pure unadulterated logic.

 

In the interest of your persuading your son to seriously examine my deductions (and explain them to you) I will show you the first step in solving that fundamental equation. The equation we are trying to solve is:

[math] \left( \sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j} \beta_{ij} \delta( \vec{x}_i - \vec{x}_j ) \right) \vec{\Psi} = \kappa \frac{\partial}{\partial t} \vec{\Psi} = i\kappa m \vec{\Psi}[/math].

 

One thing anyone considering solving that equation must keep in mind is the fact that the arguments of [math]\vec{\Psi}[/math] (that would be the complete collection [math]\vec{x}_i[/math] and t) constitute the entirety of information available to us. Absolutely no information expressed beyond what is represented in those arguments has any bearing on the solution. (That is, except for the understanding of mathematics itself which I will treat as a well understood abstract language with which we can communicate.) The first thing you should recognize is that this is in total violation of all the ethics of science as commonly implemented. The first step in any common scientific attack on any problem is to limit that problem to issues which the scientist thinks can be well defined. The difficulty with the common attack is that the assumption that anything exists which can be well defined "a priori" is fundamentally invalid. We must begin with A as totally undefined!

 

As we approach the problem of solving that equation, it is important that we lay out exactly what has been defined and what has not been defined. First, what the set [math]\vec{x}_i[/math] stands for is totally undefined (the entire collection for all given t constitute everything we know and definition must come from our explanation). I have so far defined several mathematical processes for my own convenience which should be understood if you understood the derivation of my equation. I have defined the anti-commuting matrices [math]\vec{\alpha}_i[/math] and [math]\beta_{ij}[/math]; and I have also defined a specific reference frame which I call the "center of mass system" to be that system where [math]\kappa_x[/math] and [math]\kappa_\tau[/math] vanish.

 

Except for specifying the mathematical procedure to be used to generate probabilities,

[math]P(\vec{x},t) = \vec{\Psi}^\dagger (\vec{x},t) \vec{\Psi}(\vec{x},t) dv [/math],

 

and the introduction of the fictitious tau axis in order to keep the representation open to all possibilities, the only other definitions introduced revolve around C and D. C is, by definition, real (it consists of the actual information to be explained) and D is fictitious (D is part of the explanation and can be considered valid only in the same sense that the explanation itself is valid). Absolutely everything else is totally undefined.

 

It should be clear to anyone reading this that any solution which can be directly interpreted as applying to a realistic situation will involve such volumes of data in C (before one even adds in the collection D) as to be beyond conception. The absolute only way to attack the problem of solving the equation is to come up with some way of reducing the number of variables without constraining the generality of the representation. I will demonstrate a technique for accomplishing this result.

 

In order to show a number of interesting things, I will first show how to separate the variables into two sets consisting of those variables related to C and those related to D. I will call those variables directly representing C set number one and those related to D set number two.

 

It should be clear to anyone who understands probability that I can write down the following expression:

[math]P[/math](set #1 and set #2) = [math]P_1[/math](set #1) [math]\cdot P_2[/math](set #2 given set#1)
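As a toy numerical illustration of that identity (the joint distribution below is made up and deliberately dependent; nothing is assumed about the actual sets, only the factorization itself is exercised):

[code]
# Toy check of P(set1 and set2) = P(set1) * P(set2 given set1) on a made-up,
# dependent joint distribution; no independence is assumed anywhere.
joint = {("a1", "b1"): 0.10, ("a1", "b2"): 0.30,
         ("a2", "b1"): 0.24, ("a2", "b2"): 0.36}

for (a, b), p_joint in joint.items():
    p_a = sum(p for (x, _), p in joint.items() if x == a)   # marginal over set #1
    p_b_given_a = p_joint / p_a                              # conditional probability
    assert abs(p_joint - p_a * p_b_given_a) < 1e-12
print("factorization holds for every entry of the joint distribution")
[/code]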

 

Exactly the same arguments I gave for the general representation of P([math]\vec{x}[/math],t) suffice to yield the fact that [math]P_1[/math] and [math]P_2[/math] may be represented by the functions [math]\vec{\Psi}_1[/math] and [math]\vec{\Psi}_2[/math]. That is,

[math]\vec{\Psi}(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n, \cdots,t) = \vec{\Psi}_1(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t)\vec{\Psi}_2(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,\cdots,t)[/math]

 

If we substitute this representation of [math]\vec{\Psi}[/math] into the fundamental equation, left multiply by [math]\vec{\Psi}_2^\dagger[/math] and integrate over all [math]\vec{x}_i[/math] from set number two we will obtain the following expression (call this equation #1). [note: i and j will indicate set #1 while k and l will indicate set #2]

[math] \left(\sum_i^n \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}^n \beta_{ij}\delta(\vec{x}_i - \vec{x}_j) \right)\vec{\Psi}_1 + \left( \int_{set 2} \vec{\Psi}_2^\dagger \sum_i^n \vec{\alpha}_i \cdot\vec{\nabla}_i \vec{\Psi}_2dv_2 \right)\vec{\Psi}_1 [/math]

[math] +\left( 2\sum_{i k} \int_{set 2}\vec{\Psi}_2^\dagger \beta_{ik}\delta(\vec{x}_i - \vec{x}_k)\vec{\Psi}_2dv_2 + \int_{set2}\vec{\Psi}_2^\dagger \left[\sum_k \vec{\alpha_k}\cdot\vec{\nabla}_k + \sum_{k \neq l}\beta_{kl}\delta(\vec{x}_k - \vec{x}_l)\right] \vec{\Psi}_2dv_2 \right)\vec{\Psi}_1 [/math]

[math] = \kappa \frac{\partial}{\partial t}\vec{\Psi}_1 + \kappa \left(\int_{set2}\vec{\Psi}_2^\dagger \frac{\partial}{\partial t}\vec{\Psi}_2dv_2 \right)\vec{\Psi}_1[/math]

 

I made the separation into set one from C and set two from D because of a subtle difference between the two. The tau axis was introduced for the sole purpose of assuring that the existence of two identical elements would not invalidate the representation. You should note that this constraint need not exist on the hypothetical elements represented by D. Of importance here is the fact that the constraint is not implicitly included in the equation. If we allow solutions which violate that constraint on C then the purpose of the introduction of tau is totally defeated. The solution of the dilemma is quite simple: we can assure that no solutions which violate that constraint are considered by merely requiring [math]\vec{\Psi}_1[/math] be antisymmetric with respect to exchange of arguments. Anyone who does not understand that statement should investigate the "Pauli exclusion principle". If [math]\vec{\Psi}_1[/math] is antisymmetric with respect to exchange of arguments, then it will absolutely vanish any time two arguments are identical. It follows that the second term in the first parentheses of the equation above will vanish identically. But all we really need to remember is that all "real" references are represented by antisymmetric functions: only entities whose existence is defended by explanation (i.e., representations of D) can be represented by symmetric functions (actually it should be clear that the symmetry of elements arising from D is not constrained in any way).
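A minimal sketch of why antisymmetry does the job, using only two arguments and an arbitrary pair of stand-in functions (an illustration, not the general argument):

[code]
# An antisymmetric combination of two one-argument functions vanishes whenever
# its arguments coincide, so the delta(x_i - x_j) term contributes nothing.
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
g, h = sp.Function("g"), sp.Function("h")

psi = g(x1) * h(x2) - g(x2) * h(x1)                     # antisymmetric under x1 <-> x2
swapped = psi.subs([(x1, x2), (x2, x1)], simultaneous=True)
assert sp.simplify(psi + swapped) == 0                  # exchange flips the sign
assert sp.simplify(psi.subs(x2, x1)) == 0               # and equal arguments give zero
print("antisymmetric Psi vanishes whenever two arguments are identical")
[/code]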

 

But, let's get on with the problem of solving the equation. Clearly, if [math]\vec{\Psi}_2[/math] were known, the integrals indicated above could be explicitly done and the equation could be written in the form (call equation #2):

[math] \left( \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i + f(\vec{x}_1, \vec{x}_2, \cdots ,\vec{x}_n,t) \right) \vec{\Psi}_1 = \kappa \frac{\partial}{\partial t} \vec{\Psi}_1[/math]

 

Please note that the function "f" constitutes a linear weighted sum of alpha and beta matrices plus one term proportional to the identity matrix (which arises from the time derivative in the original equation). Note that each of these weights is a function of the arguments referred to as set one. One problem still exists. The number of elements [math]\vec{x}_i [/math] in the equation is still astronomical for any reasonable circumstance. That being the case, let us take the above procedure one step further: let us search for the form of the equation which must be obeyed by a single element of A under this representation. We may clearly write:

[math]P_1[/math](set #1) = [math]P_0(\vec{x}_1,t) P_r[/math](remainder of set #1 given [math]\vec{x}_1[/math], t).

 

Once again, we can deduce that there exist algorithms capable of producing [math]P_0[/math] and [math]P_r[/math] given above; I will call these algorithms [math]\vec{\Psi}_0[/math] and [math]\vec{\Psi}_r[/math] respectively. I may thus write:

[math]\vec{\Psi}_1(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t) = \vec{\Psi}_0 (\vec{x}_1,t)\vec{\Psi}_r(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n,t)[/math]

 

Making this substitution for [math]\vec{\Psi}_1[/math] in equation two, left multiplying by [math]\vec{\Psi}_r^\dagger[/math] and integrating over all [math]\vec{x}_i[/math] except [math]\vec{x}_1[/math], one obtains equation three.

[math]\vec{\alpha}_1 \cdot \vec{\nabla}_1 \vec{\Psi}_0 + \left(\int_{\vec{x} \neq \vec{x}_1}\vec{\Psi}_r^\dagger \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i + f \right] \vec{\Psi}_r dv_r \right) \vec{\Psi}_0 = \kappa \frac{\partial}{\partial t}\vec{\Psi}_0 + \left(\int_{\vec{x} \neq \vec{x}_1} \vec{\Psi}_r^\dagger \frac{\partial}{\partial t}\vec{\Psi}_r dv_r \right)\vec{\Psi}_0 [/math]

 

Equation three is an equation in one variable but it is not exactly in a form which I would call transparent. Nevertheless, there are some important aspects of that result which should be noticed. First, as I mentioned, it is a first-order linear differential equation in one variable and, were the indicated integrals known functions, finding an actual solution would be a straightforward procedure (numerically if not in closed form). Essentially, what I am doing is not at all different from what all scientists do: in solving a specific problem, they always make the assumption that they understand everything not specifically represented in the details of that problem (i.e., they understand the issues providing their presumed boundary conditions). The only difference between what I am doing and what they do is that I am specifically including those boundary conditions in an exact mathematical formalism.

 

Now I really don't expect anyone here to have both the interest and the training to follow what I have just laid out; however, if what I have said does provoke anyone to spend some time thinking, I will do my best to explain anything you find confusing. Actually, to follow it requires very little training beyond introductory calculus. I got a kick out of a description of the field of calculus I read not long ago: the reviewer said it consisted of two ideas and a few thousand examples. I have a great tendency to agree with him.

 

Well, I am still here looking for someone to talk to. If there is anyone out there who understands what I have just done, let me know. The next step is to obtain a solution to a very specific circumstance. A step which yields some very interesting insights.

 

Have fun -- Dick


  • 1 month later...

Well, I left you all with an exact equation for the behavior of one unknown, given that everything else in the universe was presumed to be known.

[math]\vec{\alpha}_1 \cdot \vec{\nabla}_1 \vec{\Psi}_0 + \left(\int_{\vec{x} \neq \vec{x}_1}\vec{\Psi}_r^\dagger \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i + f \right] \vec{\Psi}_r dv_r \right) \vec{\Psi}_0 = \kappa \frac{\partial}{\partial t}\vec{\Psi}_0 + \left(\int_{\vec{x} \neq \vec{x}_1} \vec{\Psi}_r^\dagger \frac{\partial}{\partial t}\vec{\Psi}_r dv_r \right)\vec{\Psi}_0 [/math]

 

That would be equation number three from my previous post. As I said in that post, the next step is to obtain a solution to a very specific circumstance, "a step which yields some very interesting insights". So I thought I might lay it out as a tidbit to raise some interest. For this greatly restricted case, I will assume three very specific constraints will be approximately valid.

 

The first constraint is that the data point of interest is insignificant to the rest of the universe: that is, [math]P_r[/math] is, for practical purposes, not a function of [math]x_1[/math]. No one should find that to be an unacceptable approximation as it is an approximation made by every scientist on a daily basis: i.e., the rest of the universe trucks on mostly indifferent to the experiment being performed.

 

The second constraint is that the probability distribution of our expectations for the rest of the universe is not a function of time: that is, [math]P_r[/math] is not a function of t (please note that time here is the t defined in my paper and not "time" as defined by the science community). Since that constraint is clearly false in the real world (with real time), it may seem a little more extreme than the first approximation but it really isn't that far from the ordinary approximations made by the scientific community: that is, there are a great many experiments performed every day where changes in the background behavior of the universe (its time dependence) are presumed to be something one can ignore. At any rate, if the probability distribution of our expectations, [math]P_r[/math], is stationary in time, so must be the time dependence of [math]P_2[/math] (the only difference is the fictional component due to our explanation of the universe: a stationary universe cannot require a dynamic set of constraining entities). It follows that the time dependence of [math]\vec{\Psi}_2[/math] must be of the form [math] e^{iS_2 t} [/math]. Anyone who knows any calculus should be aware of that fact.

 

Before going on to the third approximation, it is important to consider the impact of these two constraints on the development of the function f so important to equation number three. If we carefully examine the integrations which produce the function f, we will discover that only one term is multiplied by the identity matrix (as mentioned in the derivation). That term arises from the integrations over the time derivative of [math]\vec{\Psi}_2[/math], the final term of equation one. This approximation allows one to explicitly perform the integration indicated in the final term of equation number one.

[math] \kappa \left(\int_{set2}\vec{\Psi}_2^\dagger \frac{\partial}{\partial t}\vec{\Psi}_2dv_2 \right)\vec{\Psi}_1 = i\kappa S_2 \vec{\Psi}_1[/math]
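A quick symbolic check of that step (a sketch only: a single stand-in coordinate y plays the role of the set-two volume, and the normalized Gaussian used for the spatial part of [math]\vec{\Psi}_2[/math] is an arbitrary choice, not anything taken from the derivation):

[code]
# If Psi_2 = exp(i*S2*t)*phi(y) with integral |phi|^2 dy = 1, then
# integral Psi_2^dagger (d/dt) Psi_2 dy = i*S2, as claimed for the final term.
import sympy as sp

t, y = sp.symbols("t y", real=True)
S2 = sp.Symbol("S2", real=True)
phi = sp.pi ** sp.Rational(-1, 4) * sp.exp(-y**2 / 2)   # arbitrary normalized stand-in

psi2 = sp.exp(sp.I * S2 * t) * phi
integrand = sp.simplify(sp.conjugate(psi2) * sp.diff(psi2, t))
print(sp.integrate(integrand, (y, -sp.oo, sp.oo)))      # -> I*S2
[/code]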

 

It follows that f may be written [math]f = f_0 -i \kappa S_2[/math] where [math]f_0 [/math] is entirely made up of a linear weighted sum of alpha and beta matrices, the identity matrix being absent. The same argument applies to the final term of equation number two. From this, we know that, under this approximation, equation number three may be rewritten as follows (which I will call equation four):

[math]\vec{\alpha}_1 \cdot \vec{\nabla}_1 \vec{\Psi}_0 + \left(\int_{\vec{x} \neq \vec{x}_1}\vec{\Psi}_r^\dagger \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i + f_0 \right] \vec{\Psi}_r dv_r \right) \vec{\Psi}_0 = \kappa \frac{\partial}{\partial t}\vec{\Psi}_0 + i\kappa \left(S_2+S_r \right)\vec{\Psi}_0 [/math]

 

The form of the final term of that equation suggests a very popular simplification of the differential equation we are trying to solve. Any competent mathematician would immediately redefine [math]\vec{\Psi}_0 [/math] via the expression [math]e^{-i \left(S_2 + S_r\right)t}\vec{\Psi}[/math]. If one then defines the only remaining integral (the integral in parentheses in equation four) to be [math]g(\vec{x}_1 )[/math] (which must be a linear weighted sum of orthogonal alpha and beta matrices: i.e., no identity matrix), then the resultant equation may be written in an extremely concise form. Note that we can drop the subscripts as only one variable remains. I will refer to this equation as equation five.

[math] \left(\vec{\alpha} \cdot \vec{\nabla} + g(\vec{x}) \right)\vec{\Psi} = \kappa \frac{\partial}{\partial t}\vec{\Psi}[/math]

 

Which implies the operator identity

[math] \left(\vec{\alpha} \cdot \vec{\nabla} + g(\vec{x}) \right) = \kappa \frac{\partial}{\partial t}[/math]

 

If we left multiply equation five by that identity, noting that multiplication of the matrices yields either one half for direct terms or zero for cross terms, and define the product [math]g \cdot g[/math] to be one half [math]G(\vec{x} )[/math], the equation of interest becomes

[math] \nabla^2\vec{\Psi}(\vec{x},t) + G(\vec{x}) \vec{\Psi}(\vec{x},t) = 2\kappa^2 \frac{\partial^2}{\partial t^2}\vec{\Psi}(\vec{x},t)[/math]
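The matrix bookkeeping behind that squaring step can be illustrated with any pair of anticommuting matrices whose squares are one half the identity; the Pauli matrices divided by √2 are a convenient stand-in here (they are not claimed to be the actual alphas and betas, only to satisfy the same algebra):

[code]
# "Direct terms give one half, cross terms give zero": with A = sigma_x/sqrt(2)
# and B = sigma_y/sqrt(2), A@A = B@B = I/2 and A@B + B@A = 0, so
# (A d/dx + B g)^2 = (1/2) d^2/dx^2 + (1/2) g^2, as used above.
import numpy as np

A = np.array([[0, 1], [1, 0]], dtype=complex) / np.sqrt(2)      # sigma_x / sqrt(2)
B = np.array([[0, -1j], [1j, 0]], dtype=complex) / np.sqrt(2)   # sigma_y / sqrt(2)
I2 = np.eye(2)

assert np.allclose(A @ A, I2 / 2)
assert np.allclose(B @ B, I2 / 2)
assert np.allclose(A @ B + B @ A, np.zeros((2, 2)))
print("direct terms -> identity/2, cross terms -> 0")
[/code]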

 

At this point it is reasonable to analyze the impact of the tau axis, a total creation of our own imagination and not at all a characteristic of the actual data defining our problem. Since we are interested in the implied probability distribution of x, we must (in the final analysis) integrate over the probability distribution of tau. Since tau is a complete fabrication of our imagination, the final [math]P(x,\tau,t)[/math] cannot possibly depend upon tau. It follows that the tau dependence of [math]\Psi[/math] must be of the form [math] e^{iq \tau}[/math]. For exactly the same reasons, we also know that [math]G(x,\tau)[/math] cannot be a function of tau. Thus we can perform the integration over tau and the equation we are interested in solving becomes

[math] \left(\frac{\partial^2}{\partial x^2}-q^2 + G(x) \right)\vec{\Psi}(x,t) = 2\kappa^2 \frac{\partial^2}{\partial t^2}\vec{\Psi}(x,t)[/math].

 

Notice that if [math]q^2[/math] is moved to the right side of the equal sign, we may factor that side and obtain:

[math] \left(\frac{\partial^2}{\partial x^2} + G(x) \right)\vec{\Psi}(x,t) = \left( \sqrt{2}\kappa \frac{\partial}{\partial t}-iq \right)\left( \sqrt{2}\kappa \frac{\partial}{\partial t}+ iq \right)\vec{\Psi}(x,t)[/math].
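That factoring is easy to confirm symbolically by applying the two first-order factors in succession to an arbitrary test function of t (a sketch only; κ and q are treated as constants):

[code]
# Check that (sqrt(2)*kappa*d/dt - i*q)(sqrt(2)*kappa*d/dt + i*q) f
# equals 2*kappa^2*f'' + q^2*f, so moving q^2 across and factoring is legitimate.
import sympy as sp

t = sp.Symbol("t", real=True)
kappa, q = sp.symbols("kappa q", real=True)
f = sp.Function("f")(t)

inner = sp.sqrt(2) * kappa * sp.diff(f, t) + sp.I * q * f
outer = sp.sqrt(2) * kappa * sp.diff(inner, t) - sp.I * q * inner
target = 2 * kappa**2 * sp.diff(f, t, 2) + q**2 * f
print(sp.expand(outer - target))   # -> 0
[/code]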

 

At this point I make my third and final constraint. I will concern myself only with cases where [math] \kappa \sqrt{2}\frac{\partial}{\partial t}\vec{\Psi} \approx -iq\vec{\Psi}[/math] to a high degree of accuracy. In this case, the first term on the right may be replaced by -2iq and, after division by 2q, one has,

[math] \left(\frac{1}{2q}\frac{\partial^2}{\partial x^2} + \frac{1}{2q}G(x) \right)\vec{\Psi}(x,t) = -i\left(\kappa \sqrt{2}\frac{\partial}{\partial t}+ iq \right)\vec{\Psi}(x,t)[/math].

 

Once again, the form of the equation suggests we redefine [math]\vec{\Psi}[/math] via an exponential adjustment [math]\vec{\psi}(x,t) = \vec{\Psi}(x,t)e^{\frac{-iqt}{\kappa \sqrt{2}}}[/math], thus eliminating the final iq term. The equation should begin to look very familiar. In fact, if we multiply through by [math]\hbar c [/math] and define the following quantities,

[math]m=\frac{q \hbar }{c}\:,\: c=\frac{1}{\kappa \sqrt{2}}\: and \: V(x)=- \frac{\hbar c}{2q} G(x)[/math],

 

it turns out that the equation (without the introduction of any free parameters) is exactly one of the fundamental equations of modern physics.

[math] \left[-\left(\frac{\hbar^2}{2m}\right) \frac{\partial^2}{\partial x^2} + V(x) \right]\vec{\psi}(x,t) = -i\hbar \frac{\partial}{\partial t}\vec{\psi}(x,t)[/math]

 

This is exactly Schrodinger's equation in one dimension.
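The claim that no free parameters are introduced amounts to two small algebraic identities between the definitions just given; a short symbolic check (ħ, q and κ treated as positive symbols, exactly as defined above):

[code]
# With m = q*hbar/c and c = 1/(kappa*sqrt(2)), multiplying the previous equation
# by hbar*c produces exactly the Schrodinger coefficients:
#   hbar*c/(2q) = hbar^2/(2m)   and   hbar*c*sqrt(2)*kappa = hbar.
import sympy as sp

hbar, q, kappa = sp.symbols("hbar q kappa", positive=True)
c = 1 / (kappa * sp.sqrt(2))
m = q * hbar / c

print(sp.simplify(hbar * c / (2 * q) - hbar**2 / (2 * m)))   # -> 0
print(sp.simplify(hbar * c * sp.sqrt(2) * kappa - hbar))     # -> 0
[/code]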

 

We have reached a truly astounding conclusion. Remember, there is utterly no physics here; that the probability of seeing a particular number in a stream of totally undefined numbers is best estimated via Schrodinger's equation, no matter how those numbers might have been generated, is totally counterintuitive. It is extremely important that we check the meaning of the three constraints I placed on the problem in terms of the conclusion reached.

 

The first two are quite obvious. Recapping, they consisted of demanding that the data point under consideration had negligible impact on the rest of the universe and that the pattern representing the rest of the universe was approximately constant in time. These are both common approximations made when one goes to apply Schrodinger's equation: that is, we should not be surprised that these approximations made life convenient. What is important is that Schrodinger's equation is still applicable to physical situations where these constraints are considerably relaxed. In other words, the constraints are not required by Schrodinger's equation itself.

 

The serious question then is, what happens to my derivation when those constraints are relaxed. If one examines that derivation carefully, one will discover that the only result of these two constraints was to remove the identity matrix and time dependence from the linear weighted sum represented by the function called g. My central purpose was to remove the identity matrix from the left side of my opening equation of this post. If the identity matrix is left in, cross terms will remain in my deduced equation and the resulting G(x) will be complicated in three ways: first, the general representation must allow for time dependence; second, the representation must allow for terms proportional to [math]\frac{\partial}{\partial x}[/math] and, third, it will be a linear sum of alpha and beta matrices.

 

The time dependence creates no problems: V(x) merely becomes V(x,t). Terms proportional to [math]\frac{\partial}{\partial x}[/math] correspond to velocity dependent terms in V(x,t) and, finally, retention of the matrix structure essentially forces our deduced result to be a set of equations, each with its own V(x,t). All of these results are entirely consistent with Schrodinger's equation, they simply require interactions not commonly seen on the introductory level. Inclusion of these complications would only have served to obscure the fact that what was deduced was, in fact, Schrodinger's equation.

 

This brings us down to the final constraint I imposed: [math]\kappa \sqrt{2} \frac{\partial}{\partial t} \vec{\Psi} \approx -iq \vec{\Psi}[/math]. If we multiply this by [math]\frac{i \hbar}{\kappa \sqrt{2}}[/math], the definitions of m and c imply the constraint can be written:

[math] i \hbar \frac{\partial}{\partial t}\vec{\Psi} \approx \frac{q \hbar}{\kappa \sqrt{2}}\vec{\Psi} = \left(\frac{q \hbar}{c}\right)c^2\vec{\Psi}= mc^2 \vec{\Psi}[/math].

 

Certainly the term [math]mc^2[/math] should be familiar to everyone. The left side, [math] i \hbar \frac{\partial}{\partial t}[/math], should be recognized as the energy operator from standard quantum mechanics. Putting these two facts together, it is clear that the redefinition of [math]\vec{\Psi}[/math] to [math]\vec{\psi}[/math] is totally analogous to the adjustment to non-relativistic energies. This step is certainly necessary as Schrodinger's equation is well known to be a non-relativistic approximation: i.e., Schrodinger's equation is known to be false if this approximation is not valid.
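The arithmetic behind that identification is a one-liner with the definitions already in hand (a check of the algebra qħ/(κ√2) = mc² and nothing more):

[code]
# q*hbar/(kappa*sqrt(2)) = (q*hbar/c)*c^2 = m*c^2, with c = 1/(kappa*sqrt(2)) and m = q*hbar/c.
import sympy as sp

hbar, q, kappa = sp.symbols("hbar q kappa", positive=True)
c = 1 / (kappa * sp.sqrt(2))
m = q * hbar / c

print(sp.simplify(q * hbar / (kappa * sp.sqrt(2)) - m * c**2))   # -> 0
[/code]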

 

A very strange thing has happened: that the above approximation is necessary is not surprising; that it arose the way it did is astonishing. We have arrived at the expression [math]E=mc^2[/math] without even mentioning the concept of relativity. Furthermore, this result moves us to define a few more concepts (purely through the mathematical appearance of expressions of interest). (Notice that these terms have not yet been defined in my approach to the issue.) Let us define "Mass" (represented by m) to be the result of application of the "Mass Operator" [math]-i \frac{\hbar}{c}\frac{\partial}{\partial \tau}[/math]: i.e.,

[math]-i \frac{\hbar}{c}\frac{\partial}{\partial \tau_j}\vec{\Psi}= \mu_j \vec{\Psi} \: and \: -i \frac{\hbar}{c}\int \vec{\Psi}^\dagger \frac{\partial}{\partial \tau_j}\vec{\Psi}dv = m_j[/math].
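Applied to the assumed tau dependence e^{iqτ}, the operator simply returns the constant qħ/c, which is the m defined earlier (a sketch showing only the eigenvalue step, not the expectation integral):

[code]
# The "mass operator" -i*(hbar/c)*d/d(tau) applied to exp(i*q*tau) returns (q*hbar/c) times it.
import sympy as sp

tau = sp.Symbol("tau", real=True)
hbar, c, q = sp.symbols("hbar c q", positive=True)

Psi = sp.exp(sp.I * q * tau)
eigenvalue = sp.simplify((-sp.I * hbar / c * sp.diff(Psi, tau)) / Psi)
print(eigenvalue)   # -> hbar*q/c, i.e. the mass m as defined above
[/code]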

 

A rather curious result follows from the above definition. Our mental model clearly allows for negative mass; however, it arises in a form very analogous to classical momentum: mass becomes momentum in the tau direction. A close examination of the derivation of the relationship leading to m will reveal that m in the expression [math]E=mc^2[/math] is actually the magnitude of the mass and not the signed value. If q is negative, replace q with -p. The derivation will be unchanged except that the division will be by 2p instead of 2q. Antiparticles would not have negative energy in our representation and there is no need of Dirac's "infinite sea of filled states" to prevent transitions to "the lower energy states" of antiparticles.

 

Clearly the concepts "Momentum" and "Energy" can be defined in a similar way. This leads to the conclusion that energy, as I have defined it, is conserved by definition (see the original fundamental equation deduced from "an explanation"). Finally, as nothing I have set forth has any bearing on reality (or physics for that matter), they constitute nothing more than concepts "convenient" to data analysis. This is interesting because of the fact that the term "momentum" is often used with regard to changes in data having absolutely nothing to do with physics.

 

If some piece of data is the same day after day, it is quite reasonable to presume it will be the same tomorrow. How does that compare to "things at rest will remain at rest"? And when some piece of data is being incremented day after day, isn't it reasonable to presume that it will continue to be incremented tomorrow? How does that compare to "things in motion will continue in motion"? And why not call whatever might change these tendencies a "force"? What I am getting at is that the concepts of dynamic physics introduced by Newton are already being used to describe phenomena having nothing to do with physics. What I have pointed out in my derivation is that these concepts are important to any analysis of any data if you have a desire to explain that data.

 

Have fun -- Dick


Well, people are still apparently reading this thread so I thought I would add a minor point.

 

I showed that any one-variable solution to my fundamental equation could be approximated by Schrodinger's equation, and it should be clear to anyone who understands modern physics that the validity of Newtonian physics can be deduced as a macroscopic approximation to Schrodinger's equation; thus it might appear that classical mechanics and early quantum mechanics are simple approximations to my equation. However, that presumption is not entirely correct. My fundamental equation is for a one dimensional universe. Tau is fictitious and must be integrated out, and t is no more than a parameter specifying a specific change in the information available to us; thus the mathematical representation of the universe ends up being a collection of points on the x axis (it's a one dimensional construct).

 

A very simple generalization of that representation appears if one divides the information available to us into two independent sets: i.e., a collection of pairs of numbers. In that case, the model of B(t) can be a map of those pairs [math](x_i,y_i)[/math] into points on the real (x,y) plane. Only one tau axis will still serve its purpose as the only reason it was there was to ensure that information was not lost in the representation and any pairs of points can still be separated with this one additional axis. If my deductions are followed in detail, it will be discovered that the net result is that the fundamental equation becomes one representing a two dimensional universe.

 

It should be seen that a one dimensional representation of reality is not very useful: the observer must be a complex construct within that universe and, outside of himself, his only macroscopic contact with the rest of the universe is one point upstream and one point downstream, and the fact that the universe must obey classical mechanics doesn't provide much information. It is not a very valuable representation of the universe on a macroscopic scale.

 

Likewise, a two dimensional representation is highly constrained (anyone who has worked with designing two dimensional entities should be familiar with the limitations). Particularly in view of the fact that, in order for the embedded observer to study the properties of that universe, there must exist some force small enough not to upset the structure of macroscopic constructs and yet strong enough to maintain them in close proximity. Once a dimension is exhausted in providing that association, we are, in many respects, back to a one dimensional situation. Again, the fact that individual fundamental entities of the universe obey Schrodinger's equation and thus obey classical mechanics from a macroscopic perspective still doesn't provide much information. It is still not a very valuable representation of the universe on a macroscopic scale.

 

The whole situation changes when one divides the information available to us into three independent sets: i.e., a collection of triplet numbers. In that case, the model of B(t) can be a map of those triplets [math](x_i,y_i,z_i)[/math] into points on the real (x,y,z) space. Once again, only one tau axis will still serve its purpose as the only reason it was there was to ensure that information was not lost in the representation and any pairs of points can still be separated with this one additional axis. If my deductions are followed in detail, it will once again be discovered that the net result is that the fundamental equation becomes one representing a three dimensional universe. Furthermore, a detailed continuation of my deduction of Schrodinger's equation will result in the standard three dimensional version of his equation and implies the macroscopic universe obeys classical mechanics.

 

Now, the fact that this "three dimensional" representation of the universe will, on a macroscopic level, obey classical mechanics is a very valuable piece of information. All kinds of dynamic constructs can be designed and analyzed. I am quite confident of the fact that this is the exact reason why the mental model of the universe created by our subconscious minds is three dimensional: three is the lowest dimensional representation of any coherent collection of information which provides us with easy macroscopic explanations of our experiences (rough predictions of expectations are quite straight forward: things in motion remain in motion and things at rest remain at rest and only "forces" can change the relationships).

 

Have fun -- Dick


  • 4 weeks later...

Well, enough people are apparently still reading this thread to imply interest still exists so I will make another post. The fundamental issue in this particular post will be relativity. Erasmus00 tried to discuss this issue with me without understanding either my definition of an explanation or the nature of my fundamental equation. He missed the issue that the conventional perspective and mine are quite different. What is important is identification with experiment. The telling issue is not reproduction of Einstein's specific mathematical model but rather reproduction of the experimental results Einstein's model was invented to explain. One can not discuss experimental results without associating aspects of my fundamental equation with physical reality.

 

I have, in the posts above, now shown how one can deduce that the behavior of single pseudo fundamental entities (essentially, the one body problem) in any explanation must obey Schrodinger's equation (and thus Newtonian mechanics). That deduction included identifying aspects of my fundamental equation with common concepts identified with physical reality; namely position, momentum, mass, energy and time. You should recognize that whereas the definitions of these concepts, as identified with physical reality, are subtle ideas acquired through experience and training with reality, the definitions of these concepts presented in my posts are "analytic" truths completely analogous to my original definition of "an explanation". That is to say, my definitions of these concepts lead one to the conclusion that these concepts can be associated with absolutely any collection of information one might wish to consider (an issue I hinted at in my earlier posts).

 

At this point, understanding those definitions of the various components of my fundamental equation and how they can be associated with the common vision of reality is enough to allow comprehension of how it is that special relativity becomes a necessity of any dynamic explanation (I am using the word "dynamic" to suggest that change with respect to time is involved) subject to my fundamental equation.

 

Two aspects of my fundamental equation must be recognized before any headway can be made on such a thing. First, anyone familiar with physics (i.e., used to categorizing equations) should recognize that my fundamental equation sans interaction (that would be ignoring the impact of the Dirac delta function) is of the form of a many body wave equation (each and every element is described by an independent wave propagating through the reference space). Secondly, it should be noticed that the Dirac delta function has value other than zero only when the size of the argument vanishes. It should be clear that this implies the equation contains no inherent scaling information even when interactions are included. It follows that all scaling must be established in terms of the solutions: i.e., a universe obeying that equation is scale invariant.

 

The fact that the universe so described is scale invariant is usually taken by most serious physicists as prima-facie evidence of error; however, any competent scientist should comprehend that units of measure don't arise outside the universe, they are defined in terms of macroscopic entities found in the real world: i.e., they must arise out of the solutions to the universe itself. That is to say, one should expect them to arise from coherent solutions to exactly the many body problem which the universe itself embodies. By the way, no competent scientist will even suggest that he knows how to solve the general many body problem; nonetheless, however this scaling factor may arise, it can be shown that my fundamental equation will require special relativity to be the correct mechanism to transform solutions in one frame to solutions in a frame moving with respect to that frame. That result can be deduced with a little careful logic.

 

The issue of relativity arises because of a very special constraint expressed in the deduction of my fundamental equation. If you were to examine appendix 3 carefully, you would find that I define what I call the "center of mass system" (also an "analytical truth") as the system where the sums over all of the kappas associated with all the entities in the universe vanish. Since momentum was associated with these kappas during the derivation of Schrodinger's equation (together with the definition of mass) I will refer to this system as the CM system (it maps perfectly well into what is commonly called the center of mass system). This constraint is imposed on momentum in the x, y and z directions and on momentum in the tau direction (which turns out to be identified with the sum total of all the rest mass energies in the universe). It follows from that appendix that my fundamental equation is valid only in this very specific frame of reference (in most every way, quite analogous to Newton's "inertial frame").

 

Consider the following possibility. Suppose there exists a collection of entities (fundamental things) whose behavior, at least in short term consideration, can be thought of as a universe unto itself: i.e., the existence of the rest of the universe can be ignored (at least as a serious approximation to the true case). It would follow that, in such a case, my equation would be valid only in the CM of that system. Now, let us say that we have a second system influenced by neither that system nor the rest of the universe. Again, my equation would be valid only in the CM of this second system. In this case, we can conceive of a third system totally independent of the rest of the universe which consists of the sum of the two systems just brought up.

 

This circumstance seems to suggest a profoundly inadequate constraint. The CM system is the system where the total momentum of the system vanishes. If we have three systems, each of which must obey my equation in a frame where the total system has no momentum, it would seem that the two original collections cannot have any momentum with respect to one another. For anyone who is confused here, this problem arises because of the analytic "definition" of momentum established in the derivation of Schrodinger's equation (it's defined directly in terms of partial differentials). Conventionally speaking, this would certainly be a very difficult paradoxical suggestion.

 

The solution to the difficulty resides in the fact of the scale invariance of the fundamental equation. If the scale must arise out of the solutions to the universe itself and the two original collections can seriously be considered universes unto themselves, the scale factor cannot be influenced by the rest of the universe; however, it would certainly be reasonable that, whatever means was used to establish scale within these "different" unconnected universes, that means would be the same: i.e., it would arise from the same category of solutions to the fundamental equation. I would also like to point out a subtlety ignored by every scientist I have ever talked to. The presumption that the scales represented by the x, y and z axes of our real (justified by experiment) universe are the same is just that, a presumption. There exists no experiment which can directly compare the scales of those three orthogonal axes. What gives us the impetus to think they are the same is that all experiments oriented differently with respect to these axes yield the same result: i.e., the scale in each of these directions is established by the same type of experimental solutions. And, of course, this has to be true or our experiments could identify an absolute direction in the universe.

 

There is no logical reason to believe that the actual scale factors established in the two original systems (or in the combination of the two) will be the same. If we believe in logic, we know the fundamental equation must be correct because it is an analytical deduction from an analytic truth. The net effect of these facts is that there must exist a mathematical scale transformation between these various CM frames of reference. Either that, or the universe can contain no such thing as entities which can be considered independent of each other, not even approximately, and the implied general many body problem must be solved directly (and, as I mentioned, any trained physicist is well aware of the fact that he cannot do that). That is to say, those approximate solutions cannot possibly exist unless that mathematical transformation exists. So we should try to find it.

 

This is where the fact that the fundamental equation (sans interactions) is a simple wave equation comes into play. I should comment that scale itself is a concept used to relate measures between references in the absence of interactions as, if interactions are occurring, we cannot know the measure without knowing the interactions (ordinarily, a ruler is presumed to be a universe unto itself, uninfluenced by the rest of the universe: i.e., all internal interactions are ignored). At any rate, in the absence of interactions, my fundamental equation says that the probability distribution defining my expectations of any and all fundamental entities propagates into the future as an expanding sphere in the x, y, z, tau space.

 

This is exactly where the title of this thread, "is time a measurable variable", comes into play. All scientists who understand relativity will agree that clocks are devices which measure tau along their space time paths. As an aside, for those who are interested, I am willing to show that any clock in my system will indeed measure exactly the changes in tau for any observer. Meanwhile, t is an unmeasurable variable which underlies energy conservation (via the analytic truth which defined energy to be related to the partial of psi with respect to t). Possessing no means to measure t we will simply make the common presumption that a clock (which actually measures change in tau) measures t if it is at rest in our reference system. Except for the scale issue, this is entirely reasonable as tau is, by definition, orthogonal to x, y and z and "at rest" means neither x, y nor z is changing so the fundamental equation says the wave equation defining our expectations is traveling directly in the tau direction (only tau is changing) with a fixed velocity. That velocity is clearly change in tau divided by change in t so scale is the only issue.

 

Now momentum in the tau direction is associated with rest mass so, if our two "independent" frames are to have different momenta in the tau direction, the procedures used to develop the scale of pertinent "rest" masses must differ. This should be taken, at least when it comes to "special" relativity, to indicate we have no interest in relative velocities in the tau direction: i.e., tau will go directly into tau'. With regard to the relative motion in the remaining x, y and z directions, we can simply define the direction of relative motion to be the x direction and conclude that y and z go directly to y' and z' (conceptually, you can just visualize entities traveling at fixed y and z coordinates as defining coordinate lines parallel to the critical x axis, the axis which embodies the relative motion between the two frames).

 

Since the issue is establishing the scale in these two different "independent" frames, the scale cannot change from place to place in either frame. This means the relationship must be linear in the relative motion. Life is simplified quite a bit by letting the origins of these two coordinate systems be the same at t'=t=0 (we know the origins of the two hypothetical coordinate systems can be freely reset without generating any problems in either frame). It follows that the most complex relationship which can exist cannot be a function of y, z or tau and must be linear with respect to the coordinate of motion. It must therefore be of the form:

[math]x'=\alpha x - \beta t \: ; \: y'=y \: ; \: z'=z \: ; \: \tau '=\tau \: and \: t'= \gamma t - \delta x [/math]

 

and we can immediately conclude (by simply looking at the point x'=0, the origin of the primed frame) that the origin of the primed frame always obeys the expression,

[math]\alpha x - \beta t = 0 [/math]

 

which clearly sets beta if the apparent relative velocity of the two frames is defined as v=x/t:

[math]\beta = \alpha v[/math]

 

The applicable constraint imposed by the fundamental equation is that the free expansion (the absence of interaction) of the function expressing the probability distribution must conform to a constantly expanding sphere in both reference spaces: i.e., a sphere where the radius is given by

[math]\sqrt{x^2+y^2+z^2+\tau^2}= V t[/math]

 

(where V is the wave velocity expressed in the fundamental equation) must transform explicitly into the expanding sphere

[math]\sqrt{x'^2+y'^2+z'^2+\tau '^2}= V t'[/math].

 

That is, the fundamental equation has exactly the same form in both frames. Substituting our explicit transformations, the second equation becomes

[math]\sqrt{(\alpha x - \beta t)^2+y^2+z^2+\tau^2}= V (\gamma t - \delta x)[/math].

 

Which clearly expands out to

[math]\alpha^2 x^2 -2\alpha \beta x t + \beta^2 t^2 +y^2+z^2+\tau^2=V^2 \left[\gamma^2 t^2 -2\gamma \delta xt + \delta^2 x^2 \right][/math]

 

or, collecting terms

[math](\alpha^2 -V^2 \delta^2) x^2 -2(\alpha \beta - V^2 \gamma \delta) xt +y^2 +z^2 +\tau^2 = V^2 \left(\gamma^2 -\frac{\beta^2}{V^2} \right) t^2[/math]
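The coefficient matching described in the next sentence can be checked symbolically by expanding the transformed sphere and reading off the coefficients of x², t² and xt (a sketch that only repeats the algebra above):

[code]
# Substitute x' = a*x - b*t, t' = g*t - d*x into x'^2 + y^2 + z^2 + tau^2 - V^2*t'^2
# and compare with x^2 + y^2 + z^2 + tau^2 - V^2*t^2; the surviving coefficients are
# exactly the conditions on alpha, beta, gamma and delta listed just below.
import sympy as sp

x, t, y, z, tau, V = sp.symbols("x t y z tau V", real=True)
a, b, g, d = sp.symbols("alpha beta gamma delta", real=True)

transformed = (a*x - b*t)**2 + y**2 + z**2 + tau**2 - V**2*(g*t - d*x)**2
original = x**2 + y**2 + z**2 + tau**2 - V**2*t**2
difference = sp.expand(transformed - original)

print(difference.coeff(x, 2))               # alpha**2 - V**2*delta**2 - 1       -> must vanish
print(difference.coeff(t, 2))               # beta**2 - V**2*gamma**2 + V**2     -> must vanish
print(difference.coeff(x, 1).coeff(t, 1))   # -2*alpha*beta + 2*V**2*gamma*delta -> must vanish
[/code]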

 

Since both equations are describing exactly the same facts, both equations must be identical. That gives us four equations in four unknowns, a system easily solved via high school algebra. The four equations are, quite simply,

[math]\beta = \alpha v \; ; \; \alpha^2 - V^2 \delta^2 = 1 \; ; \; \gamma^2 - \frac{\beta^2}{V^2}=1\; and \; \alpha \beta = V^2 \gamma \delta [/math]

 

Solving the last for an explicit representation of gamma and substituting the first expression for beta, we have

[math]\gamma = \frac{\alpha\beta}{V^2 \delta} = \frac{\alpha^2 v}{V^2 \delta}[/math]

 

This reduces the number of equations to three (having substituted the first expression for beta everywhere),

[math]\alpha^2 -V^2 \delta^2=1 \; ; \; \gamma^2 - \left(\frac{v}{V}\right)^2 \alpha^2 =1 \; and \; \gamma = \frac{\alpha^2 v}{V^2 \delta}[/math]

 

Since alpha is squared everywhere, we can substitute alpha squared from the first expression bringing us down to two equations.

[math]\gamma^2 - \left(\frac{v}{V}\right)^2(1+V^2 \delta^2) =1 \; and \; \gamma = \frac{(1+V^2 \delta^2) v}{V^2 \delta}[/math]

 

Finally, squaring the final expression and substituting for gamma squared in the first expression, we have a single equation for delta:

[math] \left( \frac{(1+V^2 \delta^2) v}{V^2 \delta}\right)^2 - \left(\frac{v}{V}\right)^2(1+V^2 \delta^2) =1 [/math]

 

or

 

[math] \frac{(1+V^2 \delta^2)^2}{V^2 \delta^2} \left( \frac{v}{V}\right)^2 - \left(\frac{v}{V}\right)^2(1+V^2 \delta^2) =1 [/math]

 

Multiplying through by [math]\frac{V^2 \delta^2}{(1+V^2 \delta^2)}[/math] yields:

[math](1+V^2 \delta^2)\left( \frac{v}{V}\right)^2 - V^2 \delta^2\left( \frac{v}{V}\right)^2 \equiv \left( \frac{v}{V}\right)^2 = \frac{V^2 \delta^2}{(1+V^2 \delta^2)}[/math]

 

or

 

[math](1 +V^2 \delta^2) \left( \frac{v}{V}\right)^2 =V^2 \delta^2 [/math]

 

which rearranges to

 

[math]\left( \frac{v}{V}\right)^2 + V^2 \delta^2 \left(\frac{v}{V}\right)^2=V^2 \delta^2 [/math]

 

Which is easily solved for delta

[math] \delta = \left(\frac{v}{V}\right) \frac{1}{V \sqrt{1-\left(\frac{v}{V}\right)^2}}[/math]

 

Since (from above)

[math]\alpha^2 = 1+ V^2 \delta^2 \; , \; \alpha^2= 1+ \left(\frac{v}{V}\right)^2\frac {1}{\left[1-\left(\frac{v}{V}\right)^2\right]} \equiv \frac{1}{\left[1-\left(\frac{v}{V}\right)^2\right]}[/math],

 

alpha is clearly given by

 

[math]\alpha = \frac{1}{\sqrt{1-\left(\frac{v}{V} \right)^2}}[/math]

 

Since (again from above) beta is alpha times v and alpha times v is identical to [math]V^2 \delta [/math], gamma must be identical to alpha and we can conclude that there exists but one possible valid transformation:

[math]x' = \frac{1}{\sqrt{1- \left(\frac{v}{V} \right)^2}}[x-vt] \;\; and \;\; t' = \frac{1}{\sqrt{1- \left(\frac{v}{V} \right)^2}}\left[t-\left(\frac{v}{V}\right)\left(\frac{x}{V}\right) \right] [/math]

 

Except for the wave velocity V, the above equations should be very familiar to anyone who understands special relativity. I did them out in detail only because I wanted to drive home the fact that the specific form of those transformations is required if any velocity is to remain the same in all frames in constant motion with respect to one another. These relations are exactly the standard Lorentz transformations Einstein's theory of special relativity was concocted to explain. Einstein interpreted these relations as requiring a particular geometry to describe the universe. This is considerably in opposition to Poincaré's position that geometry is nothing more than a convenience in representation. It should be clear to any thinking person that Einstein's leap is far from necessary. All the transformations actually require is that the transformations themselves be valid: that is, all solutions to our fundamental equation must obey the above under any change in the coordinates. We are still as free as we ever were to choose any geometry we wish to represent our results.
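As a check on that high-school algebra, the claimed solution can be substituted back into the four conditions derived above (a symbolic sketch; only the algebra is being verified, nothing physical):

[code]
# Verify that alpha = gamma = 1/sqrt(1-(v/V)^2), beta = alpha*v and
# delta = (v/V^2)/sqrt(1-(v/V)^2) satisfy the four equations derived above.
import sympy as sp

v, V = sp.symbols("v V", positive=True)

alpha = 1 / sp.sqrt(1 - (v / V) ** 2)
gamma = alpha
beta = alpha * v
delta = (v / V**2) / sp.sqrt(1 - (v / V) ** 2)

equations = [beta - alpha * v,
             alpha**2 - V**2 * delta**2 - 1,
             gamma**2 - beta**2 / V**2 - 1,
             alpha * beta - V**2 * gamma * delta]
print([sp.simplify(eq) for eq in equations])   # -> [0, 0, 0, 0]
[/code]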

 

The important difference, V instead of c, is a consequence of our failure to specify light as the significant noninteracting element being described by the fundamental equation. This is exactly where the title of this thread comes to bear. I hold that time is not a measurable variable; thus the V indicated above cannot be directly measured and is only obtained by inference (through uncertainty with regard to energy by the way, but that's another story for another day). The common position of the scientific community is that "clocks measure time"; but can only be used for this purpose when they are at rest in the frame of reference of interest (what they measure when they are in motion is "Einstein's invariant interval").

 

Let us look at what Einstein used to call a photon clock: that would be a photon bouncing between two fixed massive mirrors (the clock has a period of exactly 2L/c). Now in my perspective, the tau axis was introduced as a fictitious axis used to assure that no information would be lost in the "point space" means of representing references of the elements going to make up B(t). But, having been introduced, the deduced rules have to be the same for that axis as for any other (otherwise the explanation is inconsistent). The fact that the position of an element in the tau direction can not be known (the uncertainty must be infinite) requires that the momentum in the tau direction be quantized. This wipes out any ability to measure actual positions in tau; however, the fundamental equation still yields a wave velocity in that direction so, the distance any fundamental entity moves in that direction (in a specified period of time) is fixed even though both the initial and final positions are totally unknowable.

 

In the derivation of Schrodinger's equation, quantized momentum in the tau direction was interpreted to be mass. Thus it follows that the direction of motion of the photon wave (being massless) has no component in the tau direction whereas the massive mirrors (which are defined to be at rest) must be moving in the tau direction. Both elements of this "photon clock" are moving at the same speed (V, the wave velocity of the fundamental equation sans interactions) and can be seen as a photon (smeared out in the tau direction) bouncing between two massive mirrors also smeared out in the tau direction. While the photon completes a cycle, the mirrors can be seen as moving a distance of exactly 2L in the tau direction.

 

Now examine the situation again, doubling the value of V used in the original picture. Once again, while the photon moves a distance 2L in completing a "clock cycle", the mirrors also move a distance 2L in the tau direction. There is utterly no change in the deduced physical motion of the two entities. Since the scale of the tau dimension is identical to the scale of the x, y and z dimensions, the correct measure for tau should be the same. Given that common physics views change in tau (all the geometric characteristics of my tau carry over to Einstein's invariant interval) as defining time, one would conclude that the correct value for the speed of light should be unity. The only reason light has the velocity "c" is that the scientific community defines tau distances independently from spatial distances. And finally, using tau to specify time defines the velocity of light in a vacuum (noninteracting massless entities) to be the temporal velocity of the fundamental equation running perpendicular to tau, and therefore, by definition, sets V above to be exactly the same as c.
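A tiny numeric illustration of that V-doubling point (the numbers are arbitrary; the only claim being exercised is that the geometry of the cycle does not depend on the value chosen for V):

[code]
# In one "clock cycle" the photon covers a round trip of 2L along x while the
# mirrors, moving at the same speed V purely along tau, also cover 2L along tau.
# Doubling V halves the cycle in t but leaves both distances equal to 2L.
L = 1.0                        # mirror separation, arbitrary units
for V in (1.0, 2.0):           # two different choices of the wave velocity
    cycle = 2 * L / V          # t for the photon's round trip
    print(f"V = {V}: cycle = {cycle}, photon along x = {V * cycle}, mirrors along tau = {V * cycle}")
[/code]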

 

In the final analysis, my analytical definition of an explanation requires special relativity to be the only possible instantaneous transformation between reference frames moving with respect to one another. Sorry this was so long; I was just trying to be clear.

 

Have fun -- Dick


Dick, you have finally been getting somewhere, although a few things are still unclear to me and I haven't been able to eviscerate some details of the math. I'm beginning to suspect that, over the past decades, you might have had your ideas recognized as being a metaphysical (rather than phenomenological and geometrical) basis of quantum (and perhaps relativistic) physics, if only you had gone about it quite differently. Perhaps it still might help to work on some aspects, starting from antagonism and abrasivity, but certainly I think part of your trouble lies in misconceiving what "the common position of the scientific community" is. For instance:

The common position of the scientific community is that "clocks measure time"; but can only be used for this purpose when they are at rest in the frame of reference of interest (what they measure when they are in motion is "Einstein's invariant interval").
The common position of the scientific community is that clocks measure (their own) proper time (what you call "Einstein's invariant interval"). I'm not quite sure what you are stating that differs from this.

 

Now examine the situation again, doubling the value of V used in the original picture. Once again, while the photon moves a distance 2L in completing a "clock cycle", the mirrors also move a distance 2L in the tau direction. There is utterly no change in the deduced physical motion of the two entities. Since the scale of the tau dimension is identical to the scale of the x, y and z dimensions, the correct measure for tau should be the same. Given that common physics views change in tau (all the geometric characteristics of my tau carry over to Einstein's invariant interval) as defining time, one would conclude that the correct value for the speed of light should be unity. The only reason light has the velocity "c" is that the scientific community defines tau distances independently of spatial distances. And finally, using tau to specify time defines the velocity of light in a vacuum (noninteracting massless entities) to be the temporal velocity of the fundamental equation running perpendicular to tau and therefore, by definition, sets V above to be exactly the same as c.
In any respectable literature using relativistic formalism, natural units are used which mean that c = 1. This makes it obvious that a measurement of c is really nothing but a comparison of the (huuuuuge) units people use for timelike intervals and those used for spacelike ones (rather small).
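Concretely: one second of timelike separation corresponds to [imath]c \times 1\,\mathrm{s} \approx 3\times 10^{8}\,\mathrm{m}[/imath] of spacelike separation; quote both kinds of interval in metres (or both in seconds) and the measured value of c is simply 1.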

 

Since reading your last posts of July, I've been curious to know how your method could be used for Schrödinger equations having hamiltonians other than [math]p^2/2m + V[/math] and, possibly, for more general Lagrangian cases such as the Klein-Gordon and Dirac equations:

 

[math](\partial^2 + m^2)\psi = 0[/math]

 

[math](i\cancel{\partial} - m)\psi = 0[/math]

Link to comment
Share on other sites

The common position of the scientific community is that "clocks measure time"; but can only be used for this purpose when they are at rest in the frame of reference of interest (what they measure when they are in motion is "Einstein's invariant interval").
The common position of the scientific community is that clocks measure (their own) proper time (what you call "Einstein's invariant interval"). I'm not quite sure what you are stating that differs from this.
Not a thing! What I am complaining about is what they put forth as their definition of time ("clocks measure time") when, in fact, their physics is based upon clocks measuring something quite different. Their physics is based upon clocks measuring what they call "their own proper time" or, more importantly, "Einstein's invariant interval" which is not "time" but rather a space time construct, having components of both.

 

That is to say, clocks measure a very specific thing and that specific thing is not "time" in their physics. It is something different and it should be recognized as something different. Their failure to recognize this fact is the foundation of their problems in trying to bring relativity and quantum mechanics into agreement with one another.

In any respectable literature using relativistic formalism, natural units are used which mean that c = 1. This makes it obvious that a measurement of c is really nothing but a comparison of the (huuuuuge) units people use for timelike intervals and those used for spacelike ones (rather small).
Yes, they do that all the time; without once considering that perhaps what clocks and rulers measure are the same thing. Strange how tunnel vision is like that.
Since reading your last posts of July, I've been curious to know how your method could be used ...
How my "method could be used"? My method is simple logic! You just don't seem to grasp that. You are operating from the perspective that I am putting forward a theory of some kind; I am not! I am simply showing you what can be deduced from my definition of "an explanation" and nothing more.
... for Schrödinger equations having hamiltonians other than [math]p^2/2m + V[/math] ...
I have no idea what you are talking about here. I suspect you are too involved in tunnel thinking to realize what you have said. Hamiltonian representations are representations via energy expressions (based on the idea that time differentials are the fundamental characteristic of total energy expressions). Are you suggesting that there is some other kind of energy besides kinetic and potential or that a linear summation might not be the correct representation? I suspect you are simply thinking that "V" represents something more constrained than "any possible potential energy". The only difference which exists between the "different" representations you have in mind lies in the ways to express the potential energy of interest. In my representation, the potential energy is the solution of the many body problem having influence on the single element of interest; in conventional physics, the potential energy is a presumed thing dreamed up out of that mental image of reality they believe to be correct.

[math]E = p^2/2m + V[/math]

 

is the very essence of the relationship embodied in any Schrodinger equation. The only issue remaining is the actual details of V. The Klein-Gordon equation is no more than an early attempt to put forth an equation of the Hamiltonian form consistent with special relativity and it certainly is no more than another approximation to my fundamental equation (but not a very valuable one).
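For reference, the textbook route from the relation [imath]E = p^2/2m + V[/imath] to Schrodinger's equation (a standard manipulation, quite independent of my derivation above) is the operator substitution [imath]E \rightarrow i\hbar\frac{\partial}{\partial t}[/imath] and [imath]\vec{p} \rightarrow -i\hbar\vec{\nabla}[/imath], which gives

[math]i\hbar\frac{\partial}{\partial t}\Psi = \left(-\frac{\hbar^2}{2m}\nabla^2 + V\right)\Psi.[/math]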

 

Dirac's equation is another story. It is most certainly a relativistically correct approximation to my equation for a very limited case of the deduced one body equation I have already shown you. You should recognize my alpha and beta matrices as quite analogous to Dirac's spin matrices. It is not at all difficult to deduce Dirac's equation from my results. I will work up a post to demonstrate the procedure. It will take a little time as "latex" is not second nature to me and I have to look up the proper syntax.

 

Have fun -- Dick

Link to comment
Share on other sites

Anything correctly deduced from a true statement is a true statement; however, no defense save faith exists for the converse! The fact that a relationship is true cannot be used to justify the model from which it was deduced! Unfortunately, modern science has made exactly the same mistake made by the astrologers: they presume that their mental model of the world is valid because they "feel" it is valid. When questioned, they both give case after case of "correct" predictions, ignoring the fact that there could be an alternate explanation. Astrology also works fine, so long as you accept their rules for success; in fact, many people make good incomes as professional astrologers a thousand years after it was pretty well proved to be ad hoc baloney. What I am putting forward is a far more powerful position than either.

 

There is no need whatsoever to justify my model as I have shown that it is entirely general: there exists no communicable concept of anything which cannot be analyzed from the perspective of my model. I need not argue that my view is the only rational view; I need only show that it provides a useful foundation from which real observations may be analyzed with confidence. If you are to show a flaw, you must either show me an explicit error in my deductions or you must show me a universe (a set of referenced concepts) which cannot be cast into the representation I have already presented. That being the case, what follows is fundamentally a proof of the validity of Dirac's equation.

 

In turning to the microscopic realm, it is interesting to note that the fundamental equation I deduced from my definition of an explanation,

[math] \left( \sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j} \beta_{ij} \delta( \vec{x}_i - \vec{x}_j ) \right) \vec{\Psi} = K \frac{\partial}{\partial t} \vec{\Psi} = iKm \vec{\Psi}[/math].

 

bears a striking structural similarity to Dirac's equation,

[math] i \hbar \frac{\partial \Psi}{\partial t} = \left(c \vec{\alpha} \cdot \left(\vec{p} - \frac{e}{c}\vec{A}\right) + \beta mc^2 +e \Phi \right) \Psi [/math].

 

As my equation is a many body equation which we know we cannot solve, let us instead direct our attention to the embedded interaction of two fundamental events. In order to simplify our analysis, let us say that these two events have an important influence on one another but that the impact of the rest of the universe is, at least with regard to the current issue, negligible: that is, let us write the Psi from my equation in the form,

[math]\vec{\Psi} = \vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0[/math],

 

where the indices 1 and 2 refer to the two events of interest and 0 represents the rest of the universe, taken to have negligible influence on the events of interest. My fundamental equation can then be written in the form,

[math] \left\{\vec{\alpha}_1 \cdot \vec{\nabla}_1 + \vec{\alpha}_2 \cdot \vec{\nabla}_2 + \beta_{12} \delta (\vec{x}_1 -\vec{x}_2) + \beta_{21} \delta (\vec{x}_2 -\vec{x}_1)\right\}\vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0[/math]

 

[math] \;\;\;\;\; + \left[ \left\{ \sum_{i=3}^\infty \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i>2,j>2} \beta_{ij} \delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_0-K\frac{\partial}{\partial t}\vec{\Psi}_0\right]\vec{\Psi}_1\vec{\Psi}_2[/math]

 

[math]\;\;\;\;\;\;\;\;\; +\left\{\sum_{i=3}^\infty \beta_{1i}\delta(\vec{x}_1 -\vec{x}_i) + \beta_{i1}\delta(\vec{x}_i -\vec{x}_1) + \beta_{2i}\delta(\vec{x}_2 -\vec{x}_i) +\beta_{i2}\delta(\vec{x}_i -\vec{x}_2) \right\} \vec{\Psi}_1\vec{\Psi}_2\vec{\Psi}_0[/math]

 

[math]\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; = \vec{\Psi}_0 \left\{ K \frac{\partial}{\partial t} \vec{\Psi}_1\vec{\Psi}_2 \right\}[/math]

 

The term within the square brackets is clearly zero as it consists of exactly the fundamental constraint on the remainder of the universe when the two events of interest are insignificant (which is what we are presuming). The last term to the left of the equal sign is zero by our hypothesis that the interaction of the rest of the universe with our events of interest is negligible. If we now left multiply by [math]\vec{\Psi}_0^\dagger[/math] and integrate over the rest of the universe (sans our two elements of interest), we will obtain the fundamental constraint on exactly those two events of interest

[math] \left\{\vec{\alpha}_1 \cdot \vec{\nabla}_1 + \vec{\alpha}_2 \cdot \vec{\nabla}_2 + 2 \beta \delta (\vec{x}_1 -\vec{x}_2)\right\}\vec{\Psi}_1\vec{\Psi}_2 = K \frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2[/math]

 

If we multiply this equation through by the factor [imath]ic\hbar[/imath], noting the macroscopic concepts of mass, momentum and c which were expressly defined in my earlier post (where I generated Schrodinger's equation) we will obtain the expression:

[math]\left\{c \vec{\alpha}_1 \cdot \vec{p}_1 + c \vec{\alpha}_2 \cdot \vec{p}_2 -2i \hbar c \beta \delta(\vec{x}_1 -\vec{x}_2)\right\}\vec{\Psi}_1\vec{\Psi}_2 = - \frac{i \hbar}{\sqrt{2}} \frac{\partial}{\partial t}\vec{\Psi}_1\vec{\Psi}_2 \equiv -\frac{i\hbar}{\sqrt{2}}\left\{\vec{\Psi}_2 \frac{\partial}{\partial t}\vec{\Psi}_1 + \vec{\Psi}_1 \frac{\partial}{\partial t}\vec{\Psi}_2 \right\}[/math]

 

where the vector "p" represents a four dimensional momentum arising from the standard three dimensional momentum plus the momentum in the tau direction (which is defined to be mass). In this case (since we are concerned with Dirac's equation) the two elements we are interested in have very specific masses. The term which corresponds to Dirac's electron (which I will call element #1) is clearly a massive element: i.e., [imath] -i \hbar \frac{\partial}{\partial \tau} \vec{\Psi}_1 = m_e c \vec{\Psi}_1[/imath]. The other element must represent interaction with electromagnetic phenomena. Anyone who has any comprehension of what Dirac's equation is all about cannot fail to recognize that the element has to be what is commonly called a photon: i.e., a massless entity. Thus it is that we can conclude that the partial with respect to tau of the second entity must vanish. Furthermore, since we already know that [imath]E = \sqrt{(|p| c)^2+ (m c^2)^2}[/imath], if the mass vanishes, the magnitude of the three dimensional momentum is exactly the energy of the second element. It follows that momentum term and the energy term of element #2 exactly cancel. Note that only the interaction term is important to us as the dynamic behavior of the photon itself are pretty much outside our interest here. Thus it is that we are left with an equation constraining the dynamic behavior of an electron. These considerations mean that our equation (where the vector "p" now represents a three dimensional momentum) can be written,

[math]\left\{c \vec{\alpha}_1 \cdot \vec{p}_1 + \alpha_{1 \tau}m_e c^2 -2i \hbar c \beta \delta(\vec{x}_1 -\vec{x}_2)\right\}\vec{\Psi}_1\vec{\Psi}_2 = - \frac{1}{\sqrt{2}}\vec{\Psi}_2 i \hbar \frac{\partial}{\partial t}\vec{\Psi}_1. [/math]

 

If one now defines a new vector matrix [imath]\vec{\gamma}= \vec{\alpha}_1 \beta[/imath], (noting that this implies [imath]\beta=\frac{1}{2} \vec{\alpha}_1 \cdot \vec{\gamma}[/imath] ) we may left multiply by [imath]\vec{\Psi}_2^\dagger[/imath] and integrate over [imath]\vec{x}_2[/imath]. The result obtained is as follows:

[math]\left\{c \vec{\alpha}_1 \cdot \vec{p}_1 + \alpha_{1\tau}m_ec^2 - \vec{\alpha}_1 \cdot \left[i\hbar c \vec{\Psi}_2^\dagger (\vec{x}_1,t)\vec{\gamma}\vec{\Psi}_2(\vec{x}_1,t) \right]\right\}\vec{\Psi}_1 = - \frac{1}{\sqrt{2}}i \hbar \frac{\partial}{\partial t}\vec{\Psi}_1[/math]

 

At this point it is a trivial matter to identify my expression with Dirac's equation. My alpha matrices are [imath]-\frac{1}{\sqrt{2}}[/imath] times his, my alpha sub tau corresponds to his beta and the electromagnetic potentials are directly given by the expectation values of the vector matrix defined as gamma. And finally, my vector Psi goes into his Psi (remembering that the vector nature of Psi was there only to include all possible solutions: i.e., the scalar version amounts to a single special case). Making these identifications, the equation I have deduced becomes

[math] \left(c \vec{\alpha} \cdot \left(\vec{p} - \frac{e}{c}\vec{A}\right) + \beta mc^2 +e \Phi \right) \Psi = i \hbar \frac{\partial \Psi}{\partial t} [/math]

 

where [imath]\Phi[/imath] and [imath]\vec{A}[/imath] are given by:

[math]\Phi (\vec{x}_1,t) = i \sqrt{2}\frac{\hbar c}{e}\vec{\Psi}_2^\dagger (\vec{x}_1,t)\gamma_\tau \vec{\Psi}_2(\vec{x}_1,t)[/math]

 

and

 

[math]\vec{A}(\vec{x}_1,t) = -i \sqrt{2}\frac{\hbar c^2}{e}\vec{\Psi}_2^\dagger (\vec{x}_1,t)\vec{\gamma} \vec{\Psi}_2(\vec{x}_1,t)[/math]

 

(where the gamma vector consists of the x, y and z components of the four dimensional gamma vector defined above).

 

We have examined the interaction of two events in isolation from the rest of the universe. We allowed one to be massive and the other to be massless. In this particular case we discovered that the fundamental constraint was identical in form to the Dirac equation. It is important to note that by allowing the Dirac delta function to be non-zero, we have implicitly classified one of the two events to be a member of the set D: i.e., the two events cannot both be constrained by the asymmetry requirement discussed in the derivation of my fundamental equation. One of these, which I will take to be the massless element, must be a fictitious element required only by the explanation: i.e., its existence is presumed and cannot be proved by any means other than that the results predicted if it exists are consistent with the explanation and inconsistent with the lack of its existence. That is to say, in spite of the overwhelming faith in the existence of photons, their existence is nonetheless a theory and not a fact (an issue any reasonable rational scientist would confirm). Another way to view this result is to recognize the fact that my fundamental equation requires that we cannot demand that both of the above events be fermions (a fermion being an element represented by an asymmetric wave function). This is an interesting consequence and the way it arose is surprising.

 

Certainly, the entire analysis is completely consistent with the argument that element #2 is a photon (after all, we are looking at Dirac's equation). If we accept that picture and assume photon-photon interactions are negligible, we may include as many as we choose, and, since the associated Psi is symmetric, Bose statistics apply and we can write:

[math]\Phi (\vec{x}_1,t) = i \sqrt{2}\frac{\hbar c}{e}\sum_i \vec{\Psi}_i^\dagger (\vec{x}_1,t)\gamma_\tau \vec{\Psi}_i(\vec{x}_1,t)[/math]

 

and

 

[math]\vec{A}(\vec{x}_1,t) = -i \sqrt{2}\frac{\hbar c^2}{e}\sum_i \vec{\Psi}_i^\dagger (\vec{x}_1,t)\vec{\gamma} \vec{\Psi}_i(\vec{x}_1,t)[/math]

 

Note that, in the arguments leading up to this result, the fact that interaction with the rest of the universe was negligible allowed us to reduce the fundamental equation to a form where the wave function of the massive particle was totally determined by the form of the solution to the second event. Allowing the second event to be influenced by the rest of the universe does not change that result in any way so long as we know the actual solution: i.e., as long as we know what [imath]\Phi[/imath] and [imath]\vec{A}[/imath] (our expectations of gamma) happen to be.

 

What we have concluded is that Dirac's equation is nothing more than an approximation to the fundamental equation for a specific circumstance. We know that the fundamental equation is true by definition; thus we also know that it is merely the definition of the circumstance which makes Dirac's equation true: i.e., it becomes obvious that Dirac's equation is true by definition.

 

The existence of Dirac particles does tell us something about the universe: it tells us that certain specific patterns of data exist in our universe. Just as the astrologer points to specific events which occurred together with certain astrological signs, the actual information content is that the events occurred and that the signs were there. That cannot be interpreted as a defense that the astrologer's world view is correct! Both presentations are nothing more than mechanisms for cataloging information. The apparent advantage of the classical scientific position is that no cases exist which violate his "catalog", or so he tells you. When it comes to actual fact, both the scientist and the astrologer have their apologies for failure ready (mostly that you don't understand the situation or that there are exigent circumstances). The astrologer says that there was a unique particular combination of signs the impact of which was not taken into account while the scientist says some new theory (another set of signs?) was not taken into account.

 

What is significant is that the existence of Dirac particles may add to our knowledge of the universe but it adds nothing to our understanding of the universe. This is an important point. The reader should realize that the object of all basic scientific research is to discover the rules which differentiate between all possible universes and the one we find ourselves in. Since my fundamental equation must be satisfied by any internally consistent explanation of any possible universe, only constraints not required by that equation tell us anything about the behavior of our universe. All my equation tells one about the universe is that the past is statistically consistent with its calculated expectations: i.e., it describes the requirements of an accurate data compression mechanism.

 

There is a valid complaint which can be made concerning the above deduction. That would be the fact that I have associated the electromagnetic potentials with the expectation values of a certain vector matrix. I have not yet proved to you that the expected behavior of the expectation values of gamma is indeed given by Maxwell's equations. That is, we must examine exactly what the fundamental equation tells us about the equations that [imath]\Phi[/imath] and [imath]\vec{A}[/imath] are required to obey.

 

I assure you that they do indeed obey Maxwell's equations; however, I will leave the deduction of that result for my next post as this post is already quite long.

 

Have fun -- Dick

Link to comment
Share on other sites

...which is not "time" but rather a space time construct, having components of both.
Quite central to SR, actually, I don't see how you can say that:
...they do that all the time; without once considering that perhaps what clocks and rulers measure are the same thing.
:confused: :shrug:

 

I have no idea what you are talking about here. I suspect you are too involved in tunnel thinking to realize what you have said. Hamiltonian representations are representations via energy expressions (based on the idea that time differentials are the fundamental characteristic of total energy expressions). Are you suggesting that there is some other kind of energy besides kinetic and potential or that a linear summation might not be the correct representation?
If you don't understand, look up any text of analytic mechanics (hamiltonian and lagrangian formulation) and then any real treatment of QM.

 

I am not "simply thinking that 'V' represents something more constrained than 'any possible potential energy'". And "ways to express the potential energy of interest" are not the only difference which exists between the 'different' representations" I have in mind.

 

In my representation, the potential energy is the solution of the many body problem having influence on the single element of interest; in conventional physics, the potential energy is a presumed thing dreamed up out of that mental image of reality they believe to be correct.
You appear to misunderstand the idea that "conventional physics" has of potential energy.

[math]E = p^2/2m + V[/math]

is not the very essence of the relationship embodied in any Schrodinger equation; it is the classic, non-relativistic hamiltonian of a single, pointlike particle interacting with its surroundings in a manner which may be summed up by a potential. A potential isn't at all necessary (and is by no means always useful for representing an interaction) in order to write the Schrödinger equation of a system, the general form of which is:

 

[math]i\partial_t\Psi = H\Psi[/math] (Forgive! Natural units and what you will, the only really essential constant factor is the imaginary unit!)
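A standard example, just to make the point concrete: a spin-1/2 magnetic moment sitting in a magnetic field has

[math]H = -\gamma\, \vec{B}\cdot\vec{S},[/math]

which contains neither a kinetic term nor any potential V(x), yet the equation above applies to it unchanged.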

 

The Klein-Gordon equation is no more than an early attempt to put forth an equation of the Hamiltonian form consistent with special relativity and it certainly is no more than another approximation to my fundamental equation (but not a very valuable one).
The Klein-Gordon equation is not of the Hamiltonian form and I don't know how you can say it is, when it is plainly in quadratic form, which is the reason it can't describe a probability density that is both conserved and positive-definite. This, along with the shortcomings of the Lorentz-covariant hamiltonian form and the difficulties of the Dirac equation when applied to a single particle, are what led to the construction of Fock space and all that follows, which uses both the Dirac and Klein-Gordon forms in the lagrangian as the correct free terms for, respectively, massive (and chiral) fermions and massive bosons.
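To spell that out: the conserved density which follows from the Klein-Gordon equation is (up to normalization)

[math]\rho \propto i\left(\psi^*\,\partial_t\psi - \psi\,\partial_t\psi^*\right),[/math]

which can take either sign, unlike the [imath]|\psi|^2[/imath] of the Schrödinger case.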

 

I will read your derivation of Dirac's equation on my homeward journey, as I left my usual material at home this morning, and I hope it will clear up a thing or two. I did recognize your alpha and beta matrices as being quite analogous to Dirac's spin matrices; actually it was among the first things I had noticed, and it made me hope you were able to derive RQFT from your model. Reading your derivation of the Schrödinger equation, I feared that it (and hence only it) followed ex necessitate from your whole model, contrary to your claim.

Link to comment
Share on other sites

...which is not "time" but rather a space time construct, having components of both.
Quite central to SR, actually, I don't see how you can say that:
...they do that all the time; without once considering that perhaps what clocks and rulers measure are the same thing.
:confused: :shrug:
Yes, I see your confusion quite clearly. My position is that you are just too immersed in the conventional rationalizations of modern physics to see the problem I am trying to point out.
If you don't understand, look up any text of analytic mechanics (hamiltonian and lagrangian formulation) and then any real treatment of QM.
You see, my position is that you are simply far too immersed in the details of the conventional physics rhetoric to see the big picture. You seem to have a love for simple representations of complex ideas such as [math]i\partial_t\Psi = H\Psi[/math] which actually expresses very little sans the professional training sufficient to interpret the intention. I am moved to give you the "Hamiltonian principle" as I was taught it

[math]\delta \int_{t_1}^{t_2}(T-V)dt = 0 [/math]

 

Where delta means a path variation along the integration path. Of course, both T and V are presumed to be represented as path-dependent variables. The common verbal references for "T" and "V" are "kinetic" and "potential" energy. What is important here is that the frame of reference need not be the standard Euclidean frame. One can work in all kinds of frames (commonly selected because they simplify the constraints on the problem of interest) and the resultant equations can become quite non-linear; but that has little to do with the fundamental concepts embedded in the idea. Such alterations are most commonly seen (by anybody used to working with them) as variations in the expression of V, including both time-dependent and momentum-dependent terms. However, the variations can also, on occasion, be seen as changes in T, such as the effective kinetic energy which arises in superconducting circumstances. What is important here is that all of these alterations are mathematical conveniences which go to simplify the representation of the problem. In which case I would say that these cases in general represent something more constrained than "any possible potential energy" (or, in the same vein, any possible kinetic energy).
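As a simple illustration of the principle as written (the standard textbook case, nothing to do with my fundamental equation): take [imath]T = \frac{1}{2}m\dot{q}^2[/imath] and [imath]V = V(q)[/imath]; requiring the variation to vanish yields

[math]\frac{d}{dt}\frac{\partial (T-V)}{\partial \dot{q}} - \frac{\partial (T-V)}{\partial q} = 0 \quad\Longrightarrow\quad m\ddot{q} = -\frac{dV}{dq},[/math]

which is nothing but Newton's second law.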

You appear to misunderstand the idea that "conventional physics" has of potential energy.
I would disagree with you quite strongly on that.
... the only really essential constant factor is the imaginary unit!
Now that is an oversimplification if I ever heard one.
The Klein-Gordon equation is not of the Hamiltonian form and I don't know how you can say it is, when it is plainly in quadratic form, which is the reason it can't describe a probability density that is both conserved and positive-definite.
Let us just say it was my oversimplification of what they were trying to do.

 

Initially the reference "Fock space" rang no bells in my mind so I googled it. However, when I saw the reference to the "Hartree-Fock method" it definitely did ring a bell. Fock was a Russian and I probably didn't know much about him because the competence of Russian research was not emphasized in the early sixties. I am sure a lot of changes have occurred since I left the profession (including the emphasis placed on particular ideas) the but they don't seem to have gotten any closer to understanding reality than they were then.

 

You commented about a graduate student who liked to research everything to death. I had my own compulsion when I was a graduate student. I was quite a fast and prolific reader and, whenever I had to read a journal reference, I usually read the whole volume rather than just the article which was being referenced. It led to a rather unusual perspective on physics. If one just reads the referenced stuff, you get the impression physicists are smart people. If you read the whole journal volume, you get the impression that most of the published stuff isn't worth the paper it's printed on. After I got my Ph.D. I had no interest in publishing unless I had something worthwhile to say; I had better things to do with my life. I was certainly convinced the professionals had little to offer.

... made me hope you were able to derive RQFT from your model.
That would be "relativistic quantum field theory" which, in my head is rather an oxymoron.

 

It was long long ago but I know I read something written by Newton regarding the issue of "field theories" (and, after all, he was sort of the originator of the idea with his gravitational theory). He said something along the lines of "even though action at a distance is clearly an impossibility, the idea of a gravitational theory provided a very convenient mathematical mechanism". Now, though he never made it clear as to why he thought such a thing was impossible, I find it a very reasonable comment. By the way, that is one of the reasons I think he would have seen the necessity of special relativity had he happened to think about setting physically separate clocks (I am sure you remember the discussion we had earlier). I always thought Newton was a pretty sharp observer who managed to put together some rather diverse information others seemed able to ignore.

 

But, back to "field theory". It seems to me that exchange forces provide a much more general mechanism which one might expect to yield universal application. I don't believe "field theory" will ever be more than a mathematically convenient approximation. But, again, that is no more than an opinion and opinions are a dime a dozen.

 

Have fun -- Dick

Link to comment
Share on other sites

You seem to have a love for simple representations of complex ideas such as [math]i\partial_t\Psi = H\Psi[/math] which actually expresses very little sans the professional training sufficient to interpret the intention. I am moved to give you the "Hamiltonian principle" as I was taught it

[math]\delta \int_{t_1}^{t_2}(T-V)dt = 0 [/math]

 

Where delta means a path variation along the integration path. Of course, both T and V are presumed to be represented as path-dependent variables. The common verbal references for "T" and "V" are "kinetic" and "potential" energy.

 

Just a quick note. You are confusing Hamilton's principle for lagrangian mechanics with the Hamiltonian. Your variational method above is a variation of the lagrangian (T-V).

 

The hamiltonian is defined by a Legendre transformation of the lagrangian (L = T - V). We replace [math]\dot{q}[/math] with the momentum defined by

 

[math]p = \frac{\partial\mathcal{L}}{\partial \dot{q}} [/math]

 

And hence we get

 

[math]\mathcal{H}=\dot{q}p-\mathcal{L}[/math]

 

Here script L is the lagrangian, q is the coordinate, p is the momentum defined as above and script H is the hamiltonian. Your equations of motion then are

 

[math]\frac{\partial\mathcal{H}}{\partial p} = \dot{q}[/math]

 

[math]-\frac{\partial \mathcal{H}}{\partial q}= \dot{p} [/math]

 

The hamiltonian usually (but not always) comes out to be just the total energy, but it is defined in terms of the two separate coordinates p and q instead of q and the linked qdot.
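For a concrete case (a minimal sketch of the steps above, assuming a 1-D harmonic oscillator and that the SymPy library is available; the symbol names are my own), the whole transformation can be checked symbolically:

[code]
import sympy as sp

m, k, q, qdot, p = sp.symbols('m k q qdot p', real=True)

# Lagrangian of a 1-D harmonic oscillator: L = T - V
L = sp.Rational(1, 2)*m*qdot**2 - sp.Rational(1, 2)*k*q**2

# canonical momentum: p = dL/d(qdot) = m*qdot
p_def = sp.diff(L, qdot)

# invert to express qdot in terms of p: qdot = p/m
qdot_of_p = sp.solve(sp.Eq(p, p_def), qdot)[0]

# Legendre transformation: H = qdot*p - L, with qdot eliminated
H = sp.simplify(qdot_of_p*p - L.subs(qdot, qdot_of_p))
print(H)               # k*q**2/2 + p**2/(2*m)  -- i.e. T + V, the total energy

# Hamilton's equations of motion
print(sp.diff(H, p))   #  dH/dp = p/m   (= qdot)
print(-sp.diff(H, q))  # -dH/dq = -k*q  (= pdot)
[/code]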

 

This then opens up the idea of canonical transformations in analytical mechanics, because you can transform coordinates and momenta independently.

 

It is this Hamiltonian that becomes the operator in quantum mechanics that Q was referring to. I believe he was asking what happens to your equation in cases where the Hamiltonian is not simply the total energy of a system.

-Will

Link to comment
Share on other sites
