Sorry I have been so slow to respond to you, Anssi. I have also wasted a lot of time trying to get Bombadil to understand what I am talking about in the relativity thread. Your post requires me to read almost the entirety of the “What can we know of reality” thread, as I know your problems are answered there but not exactly where to direct your attention. I think I could put it a lot clearer if I started over from scratch, knowing what I now know about what is actually unclear. I wonder how the powers that be would feel if I went back and edited those posts to clarify what I was trying to say: i.e., answering the problems people brought up before they brought them up. I might do that except for the fact that it would sort of remove the significance of the complaints, which wouldn't be kind.

[math]\sum_i \vec{\alpha}_i\cdot \int\vec{\Psi}_2^\dagger \cdot \vec{\nabla}_i \vec{\Psi}_2dV_2[/math]

The integral itself is like a sum over infinitesimal changes in the input arguments of [imath]\vec{\Psi}_2[/imath] (covering the entire possibility space)

And the sum over the index i, [imath]\sum_i \vec{\alpha}_i\cdot ...[/imath], means multiple integrations are performed - each multiplied by a different alpha - and summed together.

Would that be the correct operation here? I hope so. At least it seems to make sense along with all the talk about "weighted sums".
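That reading of the term can be sketched numerically. Below is a toy check of my own construction - the two-argument Gaussian wavefunction, the grid, and the use of two Pauli matrices as stand-in alphas are all assumptions for illustration, not the thread's actual [imath]\vec{\Psi}_2[/imath] or operators. The point it demonstrates is exactly the "weighted sum" interpretation: each index i yields an ordinary scalar integral, which is then multiplied by its own matrix [imath]\vec{\alpha}_i[/imath] before everything is summed.

```python
import numpy as np

# Stand-in "alpha" operators (an assumption for illustration): two Pauli matrices.
alpha = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex)]

# Toy two-argument wavefunction: Psi(x1, x2) = exp(-x1^2 - x2^2 + i(k1*x1 + k2*x2)).
k = (1.0, 2.0)
x = np.linspace(-5.0, 5.0, 801)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")
psi = np.exp(-X1**2 - X2**2 + 1j * (k[0] * X1 + k[1] * X2))
norm = np.sum(np.abs(psi)**2) * dx * dx        # integral of |Psi|^2

# The weighted sum: one ordinary (scalar) integral per index i,
# each multiplied by its own alpha matrix, then summed.
total = np.zeros((2, 2), dtype=complex)
for i, a in enumerate(alpha):
    dpsi = np.gradient(psi, dx, axis=i)              # partial derivative w.r.t. x_i
    scalar = np.sum(np.conj(psi) * dpsi) * dx * dx   # the plain integral for this i
    total += a * scalar
```

For this particular toy wavefunction each scalar integral works out to [imath]ik_i[/imath] times the norm, so the result is literally a matrix-weighted sum of ordinary numbers - which is all the notation is saying.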

I think you have that about right.

Incidentally, I did go back to refresh my memory on the original purpose of those alphas during my head-scratching session yesterday.

I looked at #42 of the "what can we know" thread;

http://hypography.co...728-post42.html

And then your explanation at the end of #72

http://hypography.co...057-post72.html

And explicit expansion of the sums in #89

http://hypography.co...607-post89.html

My understanding of it is a little bit superficial, i.e., it's still hard to handle this in my mind, but let me try to walk through it; let me know if I got it somewhat right:

First of all, you need to be aware of my discussion with Qfwfq concerning that term [imath]iK\Psi[/imath]. Essentially, what I eventually acceded to was the fact that he was correct (that K could be a function of the [imath]x_i[/imath]). But my counter to that assertion was that [imath]\vec{\Psi}[/imath] contains all such possibilities and that I only abstracted out the one I did, [imath]iK\vec{\Psi}[/imath], for convenience. What I then asserted was that the deduced differential relationship (that the sum over all differentials had to vanish for symmetry reasons) was as applicable to [imath]\Psi[/imath] as it was to the probability: i.e., there was no real basis for his problem.

Actually I am quite sorry that Qfwfq and Buffy have removed themselves from this conversation as I think they both have sufficient education to follow my thoughts. Their only real problem is that they don't comprehend the problem I am talking about. I was hoping your presence would lead them to realize what I was discussing but it apparently has not.

I also note that you didn't reference post #83, which applies most directly to your difficulties.

Actually I'm not at all sure if it's valid to put that [imath]iK\psi[/imath] there... Hmm, actually, I never came to think of this, but I don't know why it isn't just "iK" without the [imath]\psi[/imath](?) But that's how you have it in your individual shift symmetry equations in post #42...

That has to do with the structure and behavior of solutions to differential equations; not a trivial subject. I really don't think it would be of benefit to go into that here as it would probably take a year or so to communicate a clear explanation to you. Perhaps we can get into the subject of differential equations down the road sometime.

And I understood this handles the symmetry requirement through the properties of those anticommuting elements;

I wouldn't say “handles the symmetry requirements”. They are only there because they allow me to write the separate constraints as if they were terms in a single equation: i.e., they are what makes the process you are talking about possible. That is, it is their existence in the fundamental equation which allows the recovery of the three original constraints: i.e., a solution to that fundamental equation must be a solution to the separate constraints.

You multiplied the equation through by some alpha element ([imath]\alpha_{qx}[/imath]), so according to the anticommutation properties, one term in the sum ([imath]\vec{\alpha}_q \cdot \nabla_q[/imath]) will lose its alpha. What's left is [imath]\frac{\partial}{\partial x_q}\vec{\psi}[/imath] from [imath]\nabla_q[/imath].

Then you sum the result over q to make all the terms with an [imath]\alpha_{qx}[/imath] vanish, leaving just

[math] \sum_q \frac{\partial}{\partial x_q}\vec{\psi}[/math]

Hmmm, actually I'm not really sure about the mechanism behind that last step, summing over q... Maybe you could clarify it?

The important step is right there in post 42.

A little algebra will show that any solution of that “fundamental equation” will satisfy the four constraints required by a flaw-free explanation **under the simple additional constraint** that:

[math]\sum_i \vec{\alpha}_i \vec{\psi} = \sum_{i \neq j}\beta_{ij} \vec{\psi} = 0.[/math]

There is some additional knowledge of physics which might be valuable here. The angular momentum of an entity is given by the radius times the momentum (for an object going in a circle). In quantum mechanics, where momentum is related to the partial with respect to position, angular momentum around the z axis (at the moment the object is on the x axis) is essentially x times the partial with respect to y. (We would need to talk about a spherical coordinate system to do this correctly.) When one looks at such things, one of the things which occurs is that spin-type angular momentum operators anti-commute. The alpha operators here have a lot of the characteristics of angular momentum, and that simple additional constraint essentially becomes a constraint that the sum of the “spins” of all the elements in the universe is zero. That fact shows up when I derive Dirac's equation.
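For what it's worth, the anti-commutation claim is easy to verify for the standard spin-1/2 operators. A minimal check, using the Pauli matrices (the spin-1/2 operators up to a factor of ℏ/2; these are standard physics objects, not the thread's own alphas):

```python
import numpy as np

# The three Pauli matrices: the standard spin-1/2 operators up to hbar/2.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

# sigma_i sigma_j + sigma_j sigma_i = 2 * delta_ij * I: distinct components
# anti-commute, and each one squares to the identity.
for i in range(3):
    for j in range(3):
        anti = sigma[i] @ sigma[j] + sigma[j] @ sigma[i]
        assert np.allclose(anti, 2 * (i == j) * I2)
```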

But back to exactly what I am doing. The fundamental equation consists of a long sum of terms, many of which are multiplied by alpha and beta operators. If I choose a particular specific operator, multiply the entire equation through by that operator, and then commute that operator through the specific terms, all that happens is that each term changes sign as that operator commutes through another alpha or beta operator (except for the term which happens to contain exactly that operator). In that single term, the sign of the term is changed and an additional term is added which has no such operator. Thus one ends up with exactly the negative of every term (except for the time derivative, which just doesn't change sign). The difference between what you started with and what you finish with is that [imath]\vec{\Psi}[/imath] has been replaced everywhere with either plus or minus [imath]Op_q\vec{\Psi}[/imath], **plus the addition of one term containing no alpha or beta operator**. That additional term is exactly the term where [imath]Op_q[/imath] is exactly the alpha or beta operator acting in that term. When we sum over q, the factor [imath]\sum_qOp_q\vec{\Psi}[/imath] is zero by definition of the alpha or beta operators (for the beta operators we must sum over both indices).
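The mechanics of that paragraph can be demonstrated with ordinary matrices. In this sketch the Pauli matrices stand in for the alpha operators (an assumption: all the trick needs is that distinct operators anti-commute and each squares to the identity). Multiplying the operator-weighted sum through by one chosen operator and commuting it to the right flips the sign of every term, except that the matching term also drops an extra operator-free piece:

```python
import numpy as np

# Pauli matrices as stand-ins for the alpha operators: for i != q they
# anti-commute (sigma_q sigma_i = -sigma_i sigma_q), and sigma_q^2 = I.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
a = [0.7, -1.3, 2.1]                      # arbitrary scalar coefficients
S = sum(c * s for c, s in zip(a, sigma))  # the operator-weighted sum of terms

# Multiplying through by sigma_q and commuting it to the right negates every
# term, and the i = q term (where sigma_q^2 = I) additionally sheds an
# operator-free piece:   sigma_q S = -S sigma_q + 2 a_q I
for q in range(3):
    lhs = sigma[q] @ S
    rhs = -S @ sigma[q] + 2 * a[q] * np.eye(2)
    assert np.allclose(lhs, rhs)
```

In the thread it is then the additional constraint [imath]\sum_q \vec{\alpha}_q\vec{\Psi}=0[/imath] that kills the operator-bearing terms after summing over q, leaving only those operator-free pieces.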

...okie, using [imath]\alpha_{q\tau}[/imath] seems fairly clear, but I have to walk the usage of [imath]\beta_{ij}[/imath] through to make sure I understand it.

So we have:

[math]\sum_{i \neq j}\beta_{ij}\delta(x_i - x_j)\delta(\tau_i - \tau_j)\vec{\psi} = 0[/math]

No, what we have is

[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = K\frac{\partial}{\partial t}\vec{\Psi}.[/math]

If we multiply this through by [imath]\beta_{kl}[/imath] we have

[math]\beta_{kl}\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = \beta_{kl}K\frac{\partial}{\partial t}\vec{\Psi}.[/math]

or, moving [imath]\beta_{kl}[/imath] to the right (it commutes with everything except the alpha and beta operators)

[math]\left\{\sum_i \beta_{kl}\vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{kl}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = K\frac{\partial}{\partial t}\beta_{kl}\vec{\Psi}.[/math]

Since [imath]\beta_{kl}[/imath] anticommutes with all alpha and beta operators except the single beta operator where i=k and j=l, further commutation to the right only changes the sign, except for that term where i=k and j=l, where it adds in a single term [imath]\delta_{ik}\delta_{jl}[/imath] (the [imath]\delta_{pq}[/imath] is zero if p is not equal to q and one if p=q; see the Kronecker delta).

Thus, after commutation to the right, we now have

[math]\left\{-\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i \beta_{kl} -\sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \beta_{kl} \right\}\vec{\Psi} + \delta(x_k-x_l)\delta(\tau_k-\tau_l)\vec{\Psi} = K\frac{\partial}{\partial t}\beta_{kl}\vec{\Psi}.[/math]

or, by simply rearranging terms,

[math]-\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\beta_{kl}\vec{\Psi} + \delta(x_k-x_l)\delta(\tau_k-\tau_l)\vec{\Psi}= K\frac{\partial}{\partial t}\beta_{kl}\vec{\Psi}.[/math]

If we now sum this whole thing over k and l (where k is not equal to l) we get:

[math]-\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\sum_{k\neq l}\beta_{kl}\vec{\Psi} + \sum_{k\neq l}\delta(x_k-x_l)\delta(\tau_k-\tau_l)\vec{\Psi}= K\frac{\partial}{\partial t}\sum_{k\neq l}\beta_{kl}\vec{\Psi}.[/math]

But [imath]\sum_{k\neq l}\beta_{kl} = 0[/imath] (one of the specific constraints imposed when the alpha and beta operators were defined). Inserting those zeros into the above equation, we are left with

[math] \sum_{k\neq l}\delta(x_k-x_l)\delta(\tau_k-\tau_l)\vec{\Psi}= 0.[/math]

This is not exactly the same as the constraint you quote.

And the constraint we should end up with is:

[math]\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j)[/math]

In fact, what you quote is not a constraint. It is an expression of a function; a constraint is a requirement that some relationship be enforced: i.e., the “=0” is the expression of the constraint.

And once again the last step is a little bit shrouded in mystery to me. I'm guessing the result is summed over the [imath]\beta_{kl}[/imath] indices to lose all those elements that only had a sign change... But I don't really understand how that would work.

I hope it is a little clearer to you now. (By the way, those indices above are the two adjacent letters of the alphabet: i.e., “l” is not one!)

The alpha and beta operators are only there in order to facilitate the recovery of the original deduced constraints.

Sorry about being so slow. Hope I have made these things a little clearer.

Have fun -- Dick