# Deriving Schrödinger's Equation From My Fundamental Equation

144 replies to this topic

### #18 Rade

Rade

Understanding

• Members
• 1237 posts

Posted 07 August 2008 - 10:27 AM

Originally Posted by LaurieAG... you are just proving that 1=1.

That is exactly what I have been saying from the word go! Maybe you have managed to figure it out; but somehow I doubt it.

Yes, another way to look at what DD has done here, from a philosophic bent, is to understand that he has "derived" mathematically what was called by Aristotle the "Law of Identity", sometimes stated as A=A. Now, is this of interest? Well, it seems to me very nice to have a law of philosophy derived from pure logic using mathematics, for more commonly we derive laws from observation and experiment and theory (explanation). But, of course, DD does not derive the "A" itself, only that once we accept that "A" exists, then A=A. I'll let DD respond if I have not correctly stated his case.

### #19 AnssiH

AnssiH

Understanding

• Members
• 793 posts

Posted 11 August 2008 - 01:59 PM

Originally Posted by LaurieAG... you are just proving that 1=1.

Yes, another way to look at what DD has done here, from a philosophic bent, is to understand that he has "derived" mathematically what was called by Aristotle the "Law of Identity", sometimes stated as A=A.

Well not exactly. It means Schrödinger's equation is a tautology from "not making undefendable assumptions about the meaning of any raw data".

I.e. you remember those symmetries that were discussed at the early stages of the deduction? Shift symmetry to the assignment of labels etc? Those symmetries are the source of Schrödinger's equation being valid. Your worldview doesn't actually need to make those symmetry assumptions, but in exchange there would always exist some undefendable assumptions (much like assuming an ontological "center point" for the universe). Modern physical models do make those symmetry assumptions (for the most part anyway), and that is why they look alike with DD's deductions.

Sorry that I've been away for some time. I was ill and then I was away from home. Hopefully I'll have proper time soon...

-Anssi

### #20 Rade

Rade

Understanding

• Members
• 1237 posts

Posted 14 August 2008 - 07:14 PM

Well not exactly. It means Schrödinger's equation is a tautology from "not making undefendable assumptions about the meaning of any raw data". I.e. you remember those symmetries that were discussed at the early stages of the deduction? Shift symmetry to the assignment of labels etc? Those symmetries are the source of Schrödinger's equation being valid. Your worldview doesn't actually need to make those symmetry assumptions, but in exchange there would always exist some undefendable assumptions (much like assuming an ontological "center point" for the universe). Modern physical models do make those symmetry assumptions (for the most part anyway), and that is why they look alike with DD's deductions.
-Anssi

Thanks, but what DD also clearly seems to be saying is that "his" equation is a tautology; thus, of course, since Schrödinger's equation is derived from the DD equation, it also is a tautology. As DD states concerning the Law of Identity (1=1, A=A, etc.)...That is exactly what I have been saying from the word go!

Now, perhaps DD never linked the concept of the Law of Identity with his equation--but I do. So, taking your comments to heart, as I see it, the concept from DD of "shift symmetries" (as you say, not making undefendable assumptions about the meaning of ontological elements) provides the underlying mathematics that makes valid the "Law of Identity" of Aristotle. Thus, Aristotle logically first shows that the "Law of Identity" is the philosophic formula that unites the concepts of existence and knowledge--that is, that ["existence=identity" and "consciousness=identification"]--then 2300+ years later along comes DD and gives the mathematical basis for the validity of that unification--that it derives from shift symmetry of ontological elements--and takes the ultimate form of the DD equation. So, again as I see it, the DD equation provides, most likely for the first time in human history, the mathematical basis that unites existence (metaphysics) and consciousness (epistemology) via the Law of Identity. Now, perhaps this is not at all the understanding of DD? So, here then I think we find understanding of why physicists claim the DD equation is philosophy, philosophers claim it is mathematics, and mathematicians claim it is physics--all are correct.

### #21 Doctordick

Doctordick

Explaining

• Members
• 1092 posts

Posted 16 August 2008 - 06:09 PM

Hi Anssi, sorry you were ill; I hope you are feeling better now. I would have answered your post more quickly but I wanted to be as clear as possible so that my comments would not be misunderstood.

Well not exactly. It means Schrödinger's equation is a tautology from "not making undefendable assumptions about the meaning of any raw data".

This I would have to agree with; my deduction is very much based upon “not making undefendable assumptions”.

I.e. you remember those symmetries that were discussed at the early stages of the deduction? Shift symmetry to the assignment of labels etc? Those symmetries are the source of Schrödinger's equation being valid. Your worldview doesn't actually need to make those symmetry assumptions, but in exchange there would always exist some undefendable assumptions (much like assuming an ontological "center point" for the universe). Modern physical models do make those symmetry assumptions (for the most part anyway), and that is why they look alike with DD's deductions.

Here I wouldn't quite put it the way you did; I think you have things a little backward (of course I could be a little prejudiced there). I would instead say that my work indicates the relationships found by modern physicists have to be true (at least as a good approximation), and it shouldn't be too surprising that, after three hundred years of comparing their expectations based on their epistemological constructs to actual results (their experiments), they should come up with some of these same relationships. And that would include their discovery of symmetry arguments; however, I have never heard of anyone even considering those symmetry relationships to be a more fundamental starting point than time and/or dimensionality itself.

I would comment that both Rade and LaurieAG are, to a great extent, trolls offering little consideration to what is actually being said. LaurieAG is new to me but there are others who agree. Rade has been on my back for many years already and I am tired of dealing with his contributions.

I will comment on his failure to include the single most important aspect of my comment in his quote above. I have included the important part in square brackets here:

So, here then I think we find understanding why physicists claim the DD equation is philosophy [and outside their interest], philosophers claim it is mathematics [and outside their interest], and mathematicians claim it is physics [and outside their interest]--all are correct.

It is, indeed, outside their interest and, I might say, outside their expertise.

That brings me to something I just read in the August 16 issue of “Science News”:

Those discoveries that most change the way we think about nature cannot be anticipated... Beware of subtle unexplained behavior; don't dismiss it. Frequently nature does not knock with a very loud sound but rather a very soft whisper, and you have to be aware of subtle behavior which may in fact be a sign that there is interesting physics to be had.

Too much noise from professionals battening down the hatches against all attacks on established authority can easily drown out those subtle observations; like the fact that “clocks don't measure time” but rather measure Einstein's invariant interval.

I think Rade and LaurieAG add random noise for the sole purpose of disrupting discussions over their heads. I have a strong suspicion that Rade thinks the sole purpose of knowledge is to cover up stupidity. I have had the following as a sign on the wall of my office for many years.

Knowledge is Power
And the most popular abuse of that
power is to use it to hide stupidity.

I used to have that as part of my signature.

Have fun -- Dick

### #22 AnssiH

AnssiH

Understanding

• Members
• 793 posts

Posted 17 August 2008 - 01:48 PM

Hi Anssi, sorry you were ill; I hope you are feeling better now.

Slowly getting there... Still been having some sort of flu aftermath and my head's been all sore, haven't had the energy to do much of anything... I kind of feel like I'm getting old with strange aches in my head... I press the top of my head and feel it at the right side of my upper jaw, what the hell?

Yeah, I couldn't really figure out what Rade is saying... Anyway, I probably sound like a broken record, but I'll be trying to get to the topic soon.

-Anssi

### #23 LaurieAG

LaurieAG

Explaining

• Members
• 1571 posts

Posted 19 August 2008 - 06:18 PM

I would comment that both Rade and LaurieAG are, to a great extent, trolls offering little consideration to what is actually being said. LaurieAG is new to me but there are others who agree.

When I was studying calculus and advanced maths at high school (we had to wait for proofs from first principles until first year uni), I came to the conclusion that if, during the working out of a solution, I proved that 1=1 or 0=0, then I did not have a solution for the stated problem due to errors in my process, and would have to start again if I wanted to get the real answer.

Haven't you realised this yet?

### #24 Rade

Rade

Understanding

• Members
• 1237 posts

Posted 20 August 2008 - 05:06 AM

LOL, poor DD, does my name cause such a negative reaction that you cannot even comprehend a compliment when placed on the tip of your nose?--clearly your 'sign-on-the-wall' is well placed.

### #25 AnssiH

AnssiH

Understanding

• Members
• 793 posts

Posted 25 August 2008 - 01:01 PM

I ran through post #7 again to refresh my memory and it still seemed to make sense. So to pick up from where I'm at with the OP...

Notice that [imath]\int \vec{\Psi}_2^\dagger \cdot\vec{\Psi}_2dV_2 [/imath] equals unity by definition of normalization. Furthermore, since the tau axis was introduced for the sole purpose of assuring that two identical indices associated with valid ontological elements existing in the same [imath](x,\tau)_t[/imath] would not be represented by the same point,

I suppose that should read "...assuring that two identical ontological elements, being referred to by the same "X" index, would not exist in the same point..." or something along those lines.
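Just to convince myself of the normalization statement, I tried it numerically with a toy amplitude (an arbitrary function of my own, not the actual [imath]\vec{\Psi}_2[/imath]): once you divide by the normalization constant, the integral of the squared magnitude is unity by construction.

```python
import numpy as np

# Toy illustration: any complex amplitude, once normalized,
# integrates to unity by definition of normalization.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2.0) * np.exp(1j * 3.0 * x)   # arbitrary complex amplitude
norm = np.sqrt(np.sum(np.abs(psi)**2) * dx)        # normalization constant
psi = psi / norm

total = np.sum(np.abs(psi)**2) * dx                # Riemann-sum approximation of the integral
print(total)                                       # ~ 1.0 by construction
```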

we came to the conclusion that [imath]\vec{\Psi}_1[/imath] must be asymmetric with regard to exchange of arguments.

So, I dug this up and I suppose it is the issue explained in post #180 of What can we know of reality:
http://hypography.co...06-post180.html

When we actually let the number of possibilities go to infinity and include all possibilities, we run into the circumstance where the difference between two indices can go to zero. Now, if we have two indices who differ by exactly zero, is it not true that they are the same? If they are the same then the two points which were to represent different noumena become a single point and the purpose for which the tau axis was created is no longer effective. I got around this difficulty by requiring [imath]\vec{\Psi}[/imath] to be asymmetric with respect to exchange; by driving the probability density to exactly zero, this will guarantee the difficulty never arises.

Seems to make sense.
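To check my understanding, a toy antisymmetrized function (my own made-up example, not the actual [imath]\vec{\Psi}[/imath]): exchange of arguments flips the sign, so the density is driven to exactly zero whenever the two arguments coincide.

```python
import numpy as np

# Antisymmetrize any two-argument function and it vanishes identically
# whenever the two arguments are equal: two identical indices can
# never be represented by the same point with nonzero probability.
def g(x): return np.exp(-(x - 1.0)**2)
def h(x): return np.exp(-(x + 1.0)**2) * x

def psi(x1, x2):
    return g(x1) * h(x2) - g(x2) * h(x1)   # exchange of arguments flips the sign

print(psi(0.3, 0.7))   # generally nonzero
print(psi(0.5, 0.5))   # exactly zero when x1 == x2
```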

If that is indeed the case (as it must be) then the second term in the above equation will vanish identically as [imath]\vec{x}_i[/imath] can never equal [imath]\vec{x}_j[/imath] for any i and j both chosen from set #1.

By "second term" you must refer to:
$\sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)$

Seems to make sense that it would vanish.

So this is where we stand;

$\left\{\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\Psi}_1 + \left\{2 \sum_{i=\#1 j=\#2}\int \vec{\Psi}_2^\dagger \cdot \beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 \right. +$
$\left.\int \vec{\Psi}_2^\dagger \cdot \left[\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right]\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1+K \left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1$

If the actual function [imath]\vec{\Psi}_2[/imath] were known (i.e., a way of obtaining our expectations for set #2 is known), the above integrals could be explicitly done and we would obtain an equation of the form:

$\left\{\sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1.$

The function f must be a linear weighted sum of alpha and beta operators plus one single term which does not contain such an operator. That single term arises from the final integral of the time derivative of [imath]\vec{\Psi}_2[/imath] on the right side of the original representation of the result of integration:

$\int \vec{\Psi}_2^\dagger\cdot\frac{\partial}{\partial t}\vec{\Psi}_2dV_2.$

So here I need help. I do not quite understand how things unfold into that equation, and what or how the "function f" appears there, or what it means that a function is a "linear weighted sum of...", and how does that single term arise from the integral of the time derivative of [imath]\vec{\Psi}_2[/imath]...

I suppose I shouldn't plow onwards until I understand that step.

-Anssi

### #26 Doctordick

Doctordick

Explaining

• Members
• 1092 posts

Posted 25 August 2008 - 10:11 PM

I suppose I shouldn't plow onwards until I understand that step.

Yes sir; please stop the moment anything is not clear.

I suppose that should read "...assuring that two identical ontological elements, being referred to by the same "X" index, would not exist in the same point..." or something along those lines.

Actually, I kind of find the phrase, “exist in the same point” applying to the actual ontological elements a little bothersome. On rereading the original line, I suspect the real problem is the length of the sentence. I'll think about it and edit the paragraph to something better.

I take your comment, “Seems to make sense” to mean you understand the consequence of requiring asymmetry; your presumed reference was right on the money.

By "second term" you must refer to:
$\sum_{i\neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)$

Seems to make sense that it would vanish.

That is correct.

So this is where we stand;

Not quite. I wouldn't have included the term we just decided must vanish. I would have said, this is where we stand:

$\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_1 + \left\{2 \sum_{i=\#1 j=\#2}\int \vec{\Psi}_2^\dagger \cdot \beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 \right. +$
$\left.\int \vec{\Psi}_2^\dagger \cdot \left[\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right]\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1+K \left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1$

So here I need help. I do not quite understand how things unfold into that equation, and what or how the "function f" appears there, or what it means that a function is a "linear weighted sum of...", and how does that single term arise from the integral of the time derivative of [imath]\vec{\Psi}_2[/imath]...

Maybe it will be a little clearer if I make a minor rewrite of the above equation. What is important is that the alpha and beta operators can be factored from the integrals: they operate on the terms of the sums over i and j, not on the arguments of those terms.

$\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_1 + \left\{2 \sum_{i=\#1 j=\#2}\beta_{ij}\int \vec{\Psi}_2^\dagger \cdot \delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 +\sum_i \vec{\alpha}_i\cdot \int\vec{\Psi}_2^\dagger \cdot \vec{\nabla}_i \vec{\Psi}_2dV_2 \right. +$
$\left.\sum_{i \neq j (\#2)}\beta_{ij}\int \vec{\Psi}_2^\dagger \cdot \delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1+K \left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1$

Now, all those integrals essentially yield numerical results which are functions of the arguments from set #1. All the arguments from set #2 have been integrated over; essentially those integrals can be seen as functional contributions arising from the probability distribution of indices taken from set #2, summed over all possibilities. (Remember that the integral is defined to be the result of a sum taken in the limit where the size of the elements goes to zero as the number of elements goes to infinity; [imath]\vec{\Psi}_2^\dagger \cdot \vec{\Psi}_2[/imath] is the density of the elements in that sum, as a function of the arguments, and [imath]dV_2[/imath] is the differential element which drives the net element to zero in the limit.)

A “weighted sum” is a sum of terms where each term has a weight assigned to it: five of these, four of those, fifty-two of a third thing, etc. The sum above (after the integrals are done) is a simple weighted sum of alpha and beta operators where the integrals provide the weights; these weights will end up being functions of the arguments from set #1 (those from set #2 are integrated out). The adjective “linear” simply means that every term contains only one of those operators: there are no terms which contain a product of two such operators. The function “f” is no more than a symbol which stands for that sum: i.e., “the function f must be a linear weighted sum of alpha and beta operators” is exactly what I have just said.
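If a concrete toy helps: here is a numeric sketch of such a linear weighted sum, with two stand-in matrices playing the role of alpha and beta operators and made-up numbers playing the role of the weights the integrals would provide (none of this is the actual result of the derivation, just an illustration of the form).

```python
import numpy as np

# Stand-in "operators" (my own placeholders, not DD's actual alpha and beta)
alpha = np.array([[0, 1], [1, 0]], dtype=complex)
beta  = np.array([[1, 0], [0, -1]], dtype=complex)
I     = np.eye(2, dtype=complex)

# Made-up weights standing in for the results of the integrals
G_right = 0.7    # the one term from the right-side integral: no operator attached
G_alpha = 1.3    # weight multiplying the alpha term
G_beta  = -0.4   # weight multiplying the beta term

# "Linear" weighted sum: each term carries exactly one operator,
# no products of operators appear, plus one operator-free term.
f = G_right * I + G_alpha * alpha + G_beta * beta
print(f)
```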

And lastly, the term [imath]\int \vec{\Psi}_2^\dagger \cdot\frac{\partial}{\partial t}\vec{\Psi}_2 dV_2[/imath] is the only term arising from the integration operation which does not contain either an alpha or a beta operator. Thus we have “one single term which does not contain such an operator”. Thus it is that we know the resulting equation above can be written in the form:

$\left\{\sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1.$

The actual form of f is determined entirely by the means by which we obtain our expectations for the arguments of set #2 which, as I said earlier, are the fabricated references we created to ensure that the arguments of set #1 are consistent with our experiences. So, what should we do? We should perform experiments to determine the form of f. Actually, that would be pretty much a waste of time because the equation we have is still a many-body equation and there exists no method of solving it. That is why I proceed to the next step.

I hope that makes a little sense to you.

Have fun -- Dick

### #27 AnssiH

AnssiH

Understanding

• Members
• 793 posts

Posted 06 September 2008 - 02:19 PM

Hi, sorry I'm being slow again...

One question that just occurred to me. Since set #1 and set #2 obey the same flaw free explanation, aren't [imath]\vec{\Psi}_1[/imath] and [imath]\vec{\Psi}_2[/imath] the same function? Or is the issue rather that, if one could actually tell the elements of #1 & #2 apart, they would obey different [imath]\vec{\Psi}[/imath] functions?

I hope that makes a little sense to you.

Well it does make little sense to me... ...but I was hoping it'd make a lot of sense
Heh, seriously though, it was helpful, even though I could not really understand everything you said;

Maybe it will be a little clearer if I make a minor rewrite of the above equation. What is important is that the alpha and beta operators can be factored from the integrals: they operate on the terms of the sums over i and j, not on the arguments of those terms.

$\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_1 + \left\{2 \sum_{i=\#1 j=\#2}\beta_{ij}\int \vec{\Psi}_2^\dagger \cdot \delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 +\sum_i \vec{\alpha}_i\cdot \int\vec{\Psi}_2^\dagger \cdot \vec{\nabla}_i \vec{\Psi}_2dV_2 \right. +$
$\left.\sum_{i \neq j (\#2)}\beta_{ij}\int \vec{\Psi}_2^\dagger \cdot \delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1+K \left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1$

I see the alpha and beta operators have been moved outside of integrals, but I am not really sure what it means that they operate on the terms of the sums, but not the arguments of those terms, and I have no idea why that means that they can be factored from the integrals.

Now, all those integrals essentially yield numerical results which are functions of the arguments from set #1.

Meaning, the results of the integrals have a dependency on the arguments from set #1?

Even that final integral from the left side of the equation, which does not seem to contain any arguments from #1? I.e. [imath]\sum_{i \neq j (\#2)}\beta_{ij}\int \vec{\Psi}_2^\dagger \cdot \delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2[/imath]

All the arguments from set #2 have been integrated over; essentially those integrals can be seen as functional contributions arising from the probability distribution of indices taken from set #2, summed over all possibilities. (Remember that the integral is defined to be the result of a sum taken in the limit where the size of the elements goes to zero as the number of elements goes to infinity; [imath]\vec{\Psi}_2^\dagger \cdot \vec{\Psi}_2[/imath] is the density of the elements in that sum, as a function of the arguments, and [imath]dV_2[/imath] is the differential element which drives the net element to zero in the limit.)

I'm afraid I understand almost nothing of the above
Can you explain it with more detail?

A “weighted sum” is a sum of terms where each term has a weight assigned to it: five of these, four of those, fifty-two of a third thing, etc. The sum above (after the integrals are done) is a simple weighted sum of alpha and beta operators where the integrals provide the weights; these weights will end up being functions of the arguments from set #1 (those from set #2 are integrated out). The adjective “linear” simply means that every term contains only one of those operators: there are no terms which contain a product of two such operators. The function “f” is no more than a symbol which stands for that sum: i.e., “the function f must be a linear weighted sum of alpha and beta operators” is exactly what I have just said.

And lastly, the term [imath]\int \vec{\Psi}_2^\dagger \cdot\frac{\partial}{\partial t}\vec{\Psi}_2 dV_2[/imath] is the only term arising from the integration operation which does not contain either an alpha or a beta operator. Thus we have “one single term which does not contain such an operator”. Thus it is that we know the resulting equation above can be written in the form:

$\left\{\sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1.$

So that "one single term which does not contain alpha nor beta operator" is simply one of the arguments of [imath]f[/imath]? (Because I was wondering why I can't see it anywhere.)

I now have an incredibly vague idea about what the above means (which is a lot more than I earlier had), and how the equation could be expressed that way after the integrals are done. Very, uncomfortably vague. I could take it on faith for now and proceed to next step, unless you think this might bite me in the *** sooner or later?

-Anssi

ps. Wow, the LaTex in quotes works now! Excellent, thanks to whoever fixed that.

### #28 Doctordick

Doctordick

Explaining

• Members
• 1092 posts

Posted 07 September 2008 - 03:17 AM

Hi Anssi! Don't worry about being slow.

One question that just occurred to me. Since set #1 and set #2 obey the same flaw free explanation, aren't [imath]\vec{\Psi}_1[/imath] and [imath]\vec{\Psi}_2[/imath] the same function? Or is the issue rather that, if one could actually tell the elements of #1 & #2 apart, they would obey different [imath]\vec{\Psi}[/imath] functions?

That the elements of the entire universe obey exactly the same flaw free explanation does not mean that the probability distribution (what is being calculated via [imath]\vec{\Psi}[/imath]) is the same for both sets. The probability distributions for some arguments are a function of the probability distributions of other events. The probability that you will have no problems driving to work is dependent upon the probability that there is air in your tires. The whole finished result, [imath]\vec{\Psi}[/imath], is a coherent expression. When everything is included, [imath]\vec{\Psi}[/imath] must obey my fundamental equation. I have divided the entire set of arguments into two sets: set #1 and set #2. Having done that, I point out the following:

Having divided the arguments into two sets, a competent understanding of probability should lead to acceptance of the following relationship: the probability of #1 and #2 (i.e., the expectation that these two specific sets occur together) is given by the product of two specific probabilities: [imath]P_1[/imath](#1), the probability of set number one, times [imath]P_2[/imath](#2 given #1), the probability of set number two given set number one exists. The existence of set #1 in the second probability is necessary as the probability of set #2 can very much depend upon that existence. At this point, exactly the same argument used to defend [imath]\vec{\Psi}[/imath] as embodying a method of obtaining expectations (the probability distribution) for the entire collection of arguments can be used to assert that there must exist abstract vector functions [imath]\vec{\Psi}_1[/imath] and [imath]\vec{\Psi}_2[/imath] which will yield, respectively [imath]P_1[/imath] and [imath]P_2[/imath].

It should be clear that, under these definitions (representing the argument [imath](x,\tau)_i[/imath] as [imath]\vec{x}_i[/imath]),

$\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots, t)=\vec{\Psi}_1(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n, t)\vec{\Psi}_2(\vec{x}_1,\vec{x}_2,\cdots, t).$

What is being said here is that $\vec{\Psi}$ can be written as a product of two functions, $\vec{\Psi}_1$ and $\vec{\Psi}_2$. It is very important to realize that the actual arguments of $\vec{\Psi}_2$ must include both set #1 and set #2. When you break this into a product of two functions, one of the two must contain the arguments of the other (note the phrase “the probability of set number two given set number one exists”). You can talk about the probability of one set sans information about the other set, but once you establish that your situation demands set #1, the probability of the other set can depend upon what that first set consisted of. That fact has some very important consequences: the way I have laid it out, [imath]\vec{\Psi}_2[/imath] must be a function of both sets.

You could estimate the probability you will be late to work tomorrow and you could estimate the probability you would find your car with flat tires tomorrow. You could also estimate the probability you will be late to work tomorrow given you had flat tires. These are three entirely different probabilities. The probability you would have flat tires and be late to work is not the probability you will have flat tires times the probability you would be late to work but it is the probability you would have flat tires times the probability you would be late to work “given you have flat tires”. See how that other argument gets into the thing?
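In made-up numbers (mine, purely for illustration), the point is that the joint probability is the marginal times the conditional, not the product of the two marginals:

```python
# Toy numbers for the flat-tire example (invented for illustration only)
p_flat = 0.01                 # probability of flat tires
p_late = 0.05                 # unconditional probability of being late
p_late_given_flat = 0.90      # probability of being late GIVEN flat tires

joint_wrong = p_flat * p_late             # wrongly treats the events as independent
joint_right = p_flat * p_late_given_flat  # correct: marginal times conditional

print(joint_wrong, joint_right)   # roughly 0.0005 vs 0.009: very different
```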

I see the alpha and beta operators have been moved outside of integrals, but I am not really sure what it means that they operate on the terms of the sums, but not the arguments of those terms, and I have no idea why that means that they can be factored from the integrals.

They are just operators which operate on [imath]\vec{\Psi}[/imath], not on the arguments of [imath]\vec{\Psi}[/imath]. They have an index upon them which indicates the term in the sum to which they are attached. Just think of them as something that term is multiplying: i.e., they are simple factors by definition.
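The factoring itself is just ordinary linearity; a numeric sketch with a toy integrand and a stand-in matrix of my own choosing (not the actual operators):

```python
import numpy as np

# A constant matrix factor does not depend on the integration variable,
# so it can be pulled outside the integral: int A v(x) dx = A int v(x) dx.
A = np.array([[0.0, 1.0], [1.0, 0.0]])    # stand-in "operator"
x = np.linspace(0.0, 1.0, 10001)
dx = x[1] - x[0]
v = np.stack([np.sin(x), np.cos(x)])      # vector-valued integrand, shape (2, N)

inside  = np.sum(A @ v, axis=1) * dx      # integrate A v(x) directly
outside = A @ (np.sum(v, axis=1) * dx)    # factor A out, then integrate v(x)
print(np.allclose(inside, outside))       # True: the operator factors out
```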

Meaning, the results of the integrals have a dependency on the arguments from set #1?

The integrations are over all the arguments of set #2 but not over all the arguments of [imath]\vec{\Psi}_2[/imath] (the arguments of [imath]\vec{\Psi}_2[/imath] include the arguments of set #1 and we are not integrating over set #1). The value of the resulting integrals therefore depends upon the values of the arguments from set #1; the result of the integral is some function of the arguments of set #1. If we knew what the function [imath]\vec{\Psi}_2[/imath] was (i.e., we knew how to estimate our expectations for set #2) we would know how those expectations depended upon set #1 and we would thus know what that function obtained by integration would be.

Even that final integral from the left side of the equation, which does not seem to contain any arguments from #1? I.e. [imath]\sum_{i \neq j (\#2)}\beta_{ij}\int \vec{\Psi}_2^\dagger \cdot \delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2[/imath]

As I said, it is the function [imath]\vec{\Psi}_2[/imath] which contains the arguments from set #1; that is completely different from the index over which that integral is being summed (the sum is over the arguments of set #2, and that sum must be there because a different result is obtained for every such pair; that Dirac delta function makes each of those integrals different).

I'm afraid I understand almost nothing of the above
Can you explain it with more detail?

Essentially, all I am saying is that integration over all arguments of set #2 is equivalent to a sum of the probabilities for set #2 over all possibilities so that the arguments from set #2 disappear from the representation.
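A discrete toy version of that statement (a made-up joint distribution, not the actual [imath]\vec{\Psi}[/imath]): summing over every possibility for set #2 removes those arguments from the representation and leaves a function of the set #1 argument alone.

```python
import numpy as np

# Toy joint probability distribution over a set-#1 index (4 values)
# and a set-#2 index (6 values), invented for illustration.
rng = np.random.default_rng(0)
joint = rng.random((4, 6))
joint /= joint.sum()          # normalize: joint P(x1, x2)

# "Integrating over set #2" = summing its probabilities over all possibilities:
# the set-#2 index disappears, leaving a function of the set-#1 index only.
p1 = joint.sum(axis=1)
print(p1.shape)               # (4,): only the set-#1 index survives
print(p1.sum())               # still a normalized distribution over set #1
```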

So that "one single term which does not contain alpha nor beta operator" is simply one of the arguments of [imath]f[/imath]? (Because I was wondering why I can't see it anywhere.)

No, it is not one of the arguments of f. Each and every integral we have done in the above expansion results in some function of the arguments from set #1; that would be a whole set of functions. One could call those functions [imath]G_k(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n,t)[/imath] where k is a simple index on that collection of functions (which integral it came from). All of the functions which arose from integrations on the left side of the equation would include either an alpha operator or a beta operator (which we could call [imath]Op_k[/imath]) as a factor, but the one from the integration on the right side of the equation contains no such operator. Thus it is that

$f( \vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n,t)=G_{right side}( \vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n,t)+ \sum_k G_k( \vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n,t)Op_k$

I was saying something quite simple; [imath]f(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n,t)[/imath] is a linear weighted sum of alpha and beta operators (those [imath]Op_k[/imath]) plus a single term which does not contain an alpha or beta operator (that [imath]G_{right side}[/imath]).

I hope that is somewhat clearer.

Have fun -- Dick

### #29 AnssiH

AnssiH

Understanding

• Members
• 793 posts

Posted 13 September 2008 - 08:01 AM

You could estimate the probability you will be late to work tomorrow and you could estimate the probability you would find your car with flat tires tomorrow. You could also estimate the probability you will be late to work tomorrow given you had flat tires. These are three entirely different probabilities. The probability you would have flat tires and be late to work is not the probability you will have flat tires times the probability you would be late to work but it is the probability you would have flat tires times the probability you would be late to work “given you have flat tires”. See how that other argument gets into the thing?

Ah, right, of course...
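DD's conditional-probability point can be checked with toy numbers (the probabilities below are made up purely for illustration):

```python
# Hypothetical probabilities for the flat-tire example above.
p_flat = 0.01                 # P(flat tires)
p_late = 0.10                 # P(late for work), the marginal probability
p_late_given_flat = 0.95      # P(late | flat tires)

# The joint probability uses the conditional, not the two marginals:
p_both = p_flat * p_late_given_flat   # P(flat tires AND late)
naive = p_flat * p_late               # only correct if the events were independent

assert abs(p_both - 0.0095) < 1e-12
assert p_both != naive
```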

They are just operators which operate on [imath]\vec{\Psi}[/imath], not on the arguments of [imath]\vec{\Psi}[/imath]. They have an index upon them which indicates the term in the sum to which they are attached. Just think of them as something that term is multiplying: i.e., they are simple factors by definition.

Hmm, after scratching my head a bit I think I understand what you are saying above... Still - though I may be getting bogged down in details a bit too much - my math knowledge is limited so I don't know why that means the alpha & beta operators can be moved outside of the integrals...

The integrations are over all the arguments of set #2 but not over all the arguments of [imath]\vec{\Psi}_2[/imath] (the arguments of [imath]\vec{\Psi}_2[/imath] include the arguments of set #1 and we are not integrating over set #1). The value of the resulting integrals therefore depends upon the values of the arguments from set #1; the result of the integral is some function of the arguments of set #1. If we knew what the function [imath]\vec{\Psi}_2[/imath] was (i.e., we knew how to estimate our expectations for set #2) we would know how those expectations depended upon set #1 and we would thus know what that function obtained by integration would be.

Right, that seems to make sense now.

No, it is not one of the arguments of f. Each and every integral we have done in the above expansion results in some function of the arguments from set #1; that would be a whole set of functions. One could call those functions [imath]G_k(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n,t)[/imath] where k is a simple index on that collection of functions (which integral it came from). All of the functions which arose from integrations on the left side of the equation would include either an alpha operator or a beta operator (which we could call Opk) as a factor, but the one from the integration on the right side of the equation contains no such operator. Thus it is that

$f( \vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n,t)=G_{right side}( \vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n,t)+ \sum_k G_k( \vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n,t)Op_k$

I was saying something quite simple; [imath]f(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n,t)[/imath] is a linear weighted sum of alpha and beta operators (those Opk) plus a single term which does not contain an alpha or beta operator (that [imath]G_{right side}[/imath]).

Right, okay, I think I understand that now. No time to scratch my head more right now as I need to dash, but I should have time to continue from here tomorrow.

Thanks,
-Anssi

### #30 Doctordick

Doctordick

Explaining

• Members
• 1092 posts

Posted 13 September 2008 - 06:13 PM

Hmm, after scratching my head a bit I think I understand what you are saying above... Still - though I may be getting bogged down in details a bit too much - my math knowledge is limited so I don't know why that means the alpha & beta operators can be moved outside of the integrals...

Go back to the definition of an integral. Remember that integral sign started life as a large capital “S” standing for a sum where the number of terms in the sum became infinite. If that sum is to be finite (which is the majority of interesting cases) then the terms being summed must go to zero; that is why the differential factor is there (that dx or dz or, in my particular example, dV2). The differential term accommodates that characteristic so that we can speak of the function being integrated over as a non-shrinking function.

Or another way to look at it is like this. Suppose we say [imath]z=\int f(x)dx[/imath] where dz is a single element of that supposed sum: i.e., the sum can be written [imath]\int dz =z[/imath]. Then dz=f(x)dx; or, dividing by dx, we have [imath]\frac{dz}{dx}=f(x)[/imath] or f(x) is the differential of z and/or z is the “antidifferential” of f(x).
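A quick numerical sanity check of that antidifferential relation (a sketch only; the integrand cos(x) and the interval are arbitrary choices):

```python
import numpy as np

# Build z(x) = integral of f from 0 to x by accumulating the individual
# dz = f(x)dx elements, then differentiate numerically; dz/dx should
# recover f(x), i.e. z is the "antidifferential" of f.
x = np.linspace(0.0, 2.0, 200001)
f = np.cos(x)
dz = 0.5 * (f[1:] + f[:-1]) * np.diff(x)       # the individual dz elements
z = np.concatenate(([0.0], np.cumsum(dz)))     # the "sum over dz"
dzdx = np.gradient(z, x)                       # numerical dz/dx

assert np.allclose(dzdx[1:-1], f[1:-1], atol=1e-4)
```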

No matter how it is looked at, it is still essentially a sum over a bunch of terms. The point here being that each and every one of those terms (in any specific integral we are doing here) is multiplied by the same alpha (or beta) operator. Think of the operators as apples, oranges, peaches, grapes, etc.: i.e., what you actually get when you add them together is not defined (if you add 5 apples to 3 oranges, you have 5 apples and 3 oranges).
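Since every term of that sum carries the same operator as a factor, the operator factors out of the whole sum; here is a minimal numerical sketch, using a scalar constant A as a stand-in for the fixed alpha (or beta) factor:

```python
import numpy as np

A = 3.7                          # stand-in for the fixed alpha/beta factor
x = np.linspace(0.0, 1.0, 100001)
f = np.exp(-x ** 2)              # arbitrary integrand
dx = np.diff(x)

# The integral viewed as a sum of terms f(x_k)*dx_k:
inside = np.sum(A * f[:-1] * dx)     # A multiplies every term of the sum
outside = A * np.sum(f[:-1] * dx)    # A factored outside the whole sum

assert np.isclose(inside, outside)
```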

If you go back to the original deduction of my fundamental equation, you will see that these operators were inserted for a very specific purpose. If we need to go back over that purpose, I will present it again (perhaps more clearly than I did the first time). The central purpose is to represent three very different relationships in a single equation as if they were actually related to one another when they really are not.

Have fun -- Dick

### #31 AnssiH

AnssiH

Understanding

• Members
• 793 posts

Posted 14 September 2008 - 10:43 AM

Go back to the definition of an integral. Remember that integral sign started life as a large capital “S” standing for a sum where the number of terms in the sum became infinite. If that sum is to be finite (which is the majority of interesting cases) then the terms being summed must go to zero; that is why the differential factor is there (that dx or dz or, in my particular example, dV2). The differential term accommodates that characteristic so that we can speak of the function being integrated over as a non-shrinking function.

Or another way to look at it is like this. Suppose we say [imath]z=\int f(x)dx[/imath] where dz is a single element of that supposed sum: i.e., the sum can be written [imath]\int dz =z[/imath]. Then dz=f(x)dx; or, dividing by dx, we have [imath]\frac{dz}{dx}=f(x)[/imath] or f(x) is the differential of z and/or z is the “antidifferential” of f(x).

Okay, that makes sense.

No matter how it is looked at, it is still essentially a sum over a bunch of terms. The point here being that each and every one of those terms (in any specific integral we are doing here) is multiplied by the same alpha (or beta) operator.

Okay, so let me know if I got it right... I'm looking at that second integral;

$\sum_i \vec{\alpha}_i\cdot \int\vec{\Psi}_2^\dagger \cdot \vec{\nabla}_i \vec{\Psi}_2dV_2$

The integral itself is like a sum over infinitesimal changes in the input arguments of [imath]\vec{\Psi}_2[/imath] (covering the entire possibility space)

And that sum through index i [imath]\sum_i \vec{\alpha}_i\cdot ...[/imath] means multiple integrations are performed - just each multiplied by a different alpha - and summed together.

Would that be the correct operation here? I hope so. At least it seems to make sense along with all the talk about "weighted sums".

(btw, sorry to all that this is a bit of a math lecture right now, but feel free to chime in if you think you can clarify something to me)

If you go back to the original deduction of my fundamental equation, you will see that these operators were inserted for a very specific purpose. If we need to go back over that purpose, I will present it again (perhaps more clearly than I did the first time). The central purpose is to represent three very different relationships in a single equation as if they were actually related to one another when they really are not.

Incidentally, I did go back to refresh my memory on the original purpose of those alphas during my head-scratching session yesterday.

I looked at #42 of the "what can we know" thread;
http://hypography.co...728-post42.html

And then your explanation at the end of #72
http://hypography.co...057-post72.html

And explicit expansion of the sums in #89
http://hypography.co...607-post89.html

My understanding of it is a little bit superficial, i.e. it's still hard to handle this in my mind, but let me try and walk through it, let me know if I got it somewhat right;

So it's the first part of the fundamental equation that handles the shift symmetry constraint:

$\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\psi} = iK\vec{\psi}$

(Actually I'm not at all sure if it's valid to put that [imath]iK\psi[/imath] there... Hmm, actually, it never came to my mind before, but I don't know why it isn't just "iK" without the [imath]\psi[/imath](?) But that's how you have it in your individual shift symmetry equations in post #42...)

And I understood this handles the symmetry requirement through the properties of those anticommuting elements;

You multiplied the equation through by some alpha element ([imath]\alpha_{qx}[/imath]), so, according to the commutation properties, one term in the sum ([imath]\vec{\alpha}_q \cdot \nabla_q[/imath]) will lose its alpha. What's left is [imath] \frac{\partial}{\partial x_q}\vec{\psi}[/imath] from [imath]\nabla_q[/imath].

Then you sum the result over q to make all the terms with an [imath]\alpha_{qx}[/imath] vanish, leaving just

$\sum_q \frac{\partial}{\partial x_q}\vec{\psi}$

Hmmm, actually I'm not really sure about the mechanism behind that last step, summing over q... Maybe you could clarify it?

I'll have to think through the purpose of the beta elements still once more...

-Anssi

### #32 AnssiH

AnssiH

Understanding

• Members
• 793 posts

Posted 21 September 2008 - 10:50 AM

I'll have to think through the purpose of the beta elements still once more...

Exactly the same analysis (using [imath]\alpha_{q\tau}[/imath] or [imath]\beta_{ij}[/imath] respectively) will yield the remaining constraints.

...okie, using [imath]\alpha_{q\tau}[/imath] seems fairly clear, but I have to walk the usage of [imath]\beta_{ij}[/imath] through to make sure I understand it.

So we have:

$\sum_{i \neq j}\beta_{ij}\delta(x_i - x_j)\delta(\tau_i - \tau_j)\vec{\psi} = 0$

(once again not sure about the right side of the equation)

And the constraint we should end up with is:

$\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j)$

I suppose this definition is the important one here:
[imath][\beta_{ij} , \beta_{kl}] = \delta_{ik}\delta_{jl}[/imath]

So when the equation is multiplied through by a specific beta, say [imath]\beta_{kl}[/imath], and it is commuted through the alpha & beta elements, there occurs a sign change except when i=k and j=l

So $\delta(x_k - x_l)\delta(\tau_k - \tau_l)$ is picked out from the sum.

And once again the last step is a little bit shrouded in mystery to me. I'm guessing the result is summed over with [imath]\beta_{kl}[/imath] to lose all those elements that only had a sign change... But I don't really understand how that would work.

$\beta_{kl}\sum_{i \neq j}\beta_{ij}\delta(x_i - x_j)\delta(\tau_i - \tau_j) = -\beta_{12}\beta_{kl}\delta(x_1 - x_2)\delta(\tau_1 - \tau_2)...$

Trying to figure out what happens to:

$-\beta_{12}\beta_{kl} + \beta_{kl}$

But I just seem to be too far lost to figure it out :I

-Anssi

### #33 Doctordick

Doctordick

Explaining

• Members
• 1092 posts

Posted 29 September 2008 - 02:31 PM

Sorry I have been so slow to respond to you, Anssi. I have also wasted a lot of time trying to get Bombadil to understand what I am talking about in the relativity thread. Your post required me to read almost the entirety of the “What can we know of reality” thread, as I was aware that your problems had been answered there but not exactly where to direct your attention. I think I could put it a lot more clearly if I started over from scratch, knowing what I know now about what is actually unclear. I wonder how the powers that be would feel if I went back and edited those posts to clarify what I was trying to say: i.e., answering the problems people brought up before they brought them up. I might do that except for the fact that it would sort of remove the significance of the complaints, which wouldn't be kind.

$\sum_i \vec{\alpha}_i\cdot \int\vec{\Psi}_2^\dagger \cdot \vec{\nabla}_i \vec{\Psi}_2dV_2$

The integral itself is like a sum over infinitesimal changes in the input arguments of [imath]\vec{\Psi}_2[/imath] (covering the entire possibility space)

And that sum through index i [imath]\sum_i \vec{\alpha}_i\cdot ...[/imath] means multiple integrations are performed - just each multiplied by a different alpha - and summed together.

Would that be the correct operation here? I hope so. At least it seems to make sense along with all the talk about "weighted sums".

I think you have that about right.

Incidentally, I did go back to refresh my memory on the original purpose of those alphas during my head-scratching session yesterday.

I looked at #42 of the "what can we know" thread;
http://hypography.co...728-post42.html

And then your explanation at the end of #72
http://hypography.co...057-post72.html

And explicit expansion of the sums in #89
http://hypography.co...607-post89.html

My understanding of it is a little bit superficial, i.e. it's still hard to handle this in my mind, but let me try and walk through it, let me know if I got it somewhat right;

First of all, you need to be aware of my discussion with Qfwfq concerning that term [imath]iK\Psi[/imath]. Essentially what I eventually acceded to was the fact that he was correct (that K could be a function of the xi). But my counter to that assertion was that [imath]\vec{\Psi}[/imath] contains all such possibilities and that I only abstracted out the one I did, [imath]iK\vec{\Psi}[/imath], for convenience. What I then asserted was that the deduced differential relationship (that the sum over all differentials had to vanish for symmetry reasons) was as applicable to [imath]\Psi[/imath] as it was to the probability: i.e., there was no real basis for his problem.

Actually I am quite sorry that Qfwfq and Buffy have removed themselves from this conversation as I think they both have sufficient education to follow my thoughts. Their only real problem is that they don't comprehend the problem I am talking about. I was hoping your presence would lead them to realize what I was discussing but it apparently has not.

I also note that you didn't reference post #83 which applies most directly to your difficulties.

(Actually I'm not at all sure if it's valid to put that [imath]iK\psi[/imath] there... Hmm, actually, it never came to my mind before, but I don't know why it isn't just "iK" without the [imath]\psi[/imath](?) But that's how you have it in your individual shift symmetry equations in post #42...)

That has to do with the structure and behavior of solutions to differential equations; not a trivial subject. I really don't think it would be of benefit to go into that here as it would probably take a year or so to communicate a clear explanation to you. Perhaps we can get into the subject of differential equations down the road sometime.

And I understood this handles the symmetry requirement through the properties of those anticommuting elements;

I wouldn't say “handles the symmetry requirements”. They are only there because they allow me to write the separate constraints as if they are terms in a single equation: i.e., the fact that the process you are talking about can be done. That is, it is their existence in the fundamental equation which allows the recovery of the three original constraints: i.e., a solution to that fundamental equation must be a solution to the separate constraints.

You multiplied the equation through by some alpha element ([imath]\alpha_{qx}[/imath]), so accordingly to the commutation properties, one term in the sum ([imath]\vec{\alpha}_q \cdot \nabla_q[/imath]) will lose it's alpha. What's left is [imath] \frac{\partial}{\partial x_q}\vec{\psi}[/imath] from [imath]\nabla_q[/imath].

Then you sum the result over q to make all the terms with an [imath]\alpha_{qx}[/imath] to vanish, leaving just

$\sum_q \frac{\partial}{\partial x_q}\vec{\psi}$

Hmmm, actually I'm not really sure about the mechanism behind that last step, summing over q... Maybe you could clarify it?

The important step is right there in post 42.

A little algebra will show that any solution of that “fundamental equation” will satisfy the four constraints required by a flaw-free explanation under the simple additional constraint that:

$\sum_i \vec{\alpha}_i \vec{\psi} = \sum_{i \neq j}\beta_{ij} \vec{\psi} = 0.$

There is some additional knowledge of physics which might be valuable here. Angular momentum of an entity is given by the radius times the momentum (object is going in a circle). In quantum mechanics where momentum is related to the partial with respect to position, angular momentum around the z axis (at the moment the object is on the x axis) is essentially x times the partial with respect to y. (We need to talk about a spherical coordinate system to do this correctly). When one looks at such things one of the things which occurs is that angular momentum operators anti-commute. The alpha operators here have a lot of characteristics of angular momentum and that simple additional constraint essentially becomes a constraint that the sum of the “spins” of all the elements in the universe is zero. That fact shows up when I derive Dirac's equation.

But back to exactly what I am doing. The fundamental equation consists of a long sum of terms, many of which are multiplied by alpha and beta operators. If I choose a particular specific operator, multiply the entire equation through by that operator, and then commute that operator through the individual terms, all that happens is that each term changes sign as that operator commutes through another alpha or beta operator (except for the term which happens to contain exactly that operator). In that single term, the sign of the term is changed and an additional term is added which has no such operator. Thus one ends up with exactly the negative of every term (except for the time derivative, which just doesn't change sign). The difference between what you started with and what you finish with is that [imath]\vec{\Psi}[/imath] has been replaced everywhere with either plus or minus [imath]Op_q\vec{\Psi}[/imath], plus the addition of one term containing no alpha or beta operator. That additional term is exactly the term where [imath]Op_q[/imath] is the alpha or beta operator acting in that term. When we sum over q, the factor [imath]\sum_qOp_q\vec{\Psi}[/imath] is zero by definition of the alpha or beta operator (for the beta operator we must sum over both indices).
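The sign-flip-and-pick-out mechanism described here can be watched concretely using the Pauli matrices as stand-in anticommuting operators. (Note the hedge: Pauli matrices satisfy {σ_i, σ_j} = 2δ_ij I, a factor of 2 away from the normalization used in this thread, but the mechanism is identical.)

```python
import numpy as np

# Pauli matrices: pairwise anticommuting, with sigma_i @ sigma_i = I.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
ops = [sx, sy, sz]
I2 = np.eye(2, dtype=complex)

c = [2.0, -1.5, 0.7]   # arbitrary coefficients, playing the role of the partials
S = sum(ci * si for ci, si in zip(c, ops))   # the operator-weighted sum of terms

# Multiplying the sum through by op_q and commuting it to the right flips
# the sign of every term and leaves exactly one operator-free term, 2*c_q*I,
# from the single term that contained op_q itself:
for q in range(3):
    assert np.allclose(ops[q] @ S, -S @ ops[q] + 2 * c[q] * I2)
```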

...okie, using [imath]\alpha_{q\tau}[/imath] seems fairly clear, but I have to walk the usage of [imath]\beta_{ij}[/imath] through to make sure I understand it.

So we have:

$\sum_{i \neq j}\beta_{ij}\delta(x_i - x_j)\delta(\tau_i - \tau_j)\vec{\psi} = 0$

No, what we have is

$\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = K\frac{\partial}{\partial t}\vec{\Psi}.$

If we multiply this through by [imath]\beta_{kl}[/imath] we have

$\beta_{kl}\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = \beta_{kl}K\frac{\partial}{\partial t}\vec{\Psi}.$

or, moving [imath]\beta_{kl}[/imath] to the right (it commutes with everything except the alpha and beta operators)

$\left\{\sum_i \beta_{kl}\vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{kl}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = K\frac{\partial}{\partial t}\beta_{kl}\vec{\Psi}.$

Since [imath]\beta_{kl}[/imath] anticommutes with all alpha and beta operators except the single beta operator where i=k and j=l, further commutation to the right only changes the sign except for that term where i=k and j=l, where it adds in a single term [imath]\delta_{ik}\delta_{jl}[/imath] ([imath]\delta_{pq}=0[/imath] if p is not equal to q and one if p=q; see the Kronecker delta).
Thus, after commutation to the right, we now have

$\left\{-\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i \beta_{kl} -\sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \beta_{kl} \right\}\vec{\Psi} + \delta(x_k-x_l)\delta(\tau_k-\tau_l)\vec{\Psi} = K\frac{\partial}{\partial t}\beta_{kl}\vec{\Psi}.$

or, by simply rearranging terms,

$-\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\beta_{kl}\vec{\Psi} + \delta(x_k-x_l)\delta(\tau_k-\tau_l)\vec{\Psi}= K\frac{\partial}{\partial t}\beta_{kl}\vec{\Psi}.$

If we now sum this whole thing over k and l (where k is not equal to l) we get:

$-\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\sum_{k\neq l}\beta_{kl}\vec{\Psi} + \sum_{k\neq l}\delta(x_k-x_l)\delta(\tau_k-\tau_l)\vec{\Psi}= K\frac{\partial}{\partial t}\sum_{k\neq l}\beta_{kl}\vec{\Psi}.$

But the [imath]\sum_{k\neq l}\beta_{kl} = 0[/imath] (one of the specific constraints imposed when the alpha and beta operators were defined). Inserting those zeros into the above equation, we are left with,

$\sum_{k\neq l}\delta(x_k-x_l)\delta(\tau_k-\tau_l)\vec{\Psi}= 0.$

Which is not exactly the same constraint you quote.

And the constraint we should end up with is:

$\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j)$

In fact, what you quote is not a constraint. It is an expression of a function; a constraint is a requirement that some relationship be enforced: i.e., the “=0” is the expression of the constraint.

And once again the last step is little bit shrouded in mystery to me. I'm guessing the result is summed over with [imath]\beta_{kl}[/imath] to lose all those elements that only had a sign change... But I don't really understand how that would work.

I hope it is a little clearer to you now. (By the way, those indices above are two adjacent letters of the alphabet: i.e., that “l” is the letter ell, not the numeral one!)

The alpha and beta operators are only there in order to facilitate the recovery of the original deduced constraints.

Sorry about being so slow. Hope I have made these things a little clearer.

Have fun -- Dick

### #34 AnssiH

AnssiH

Understanding

• Members
• 793 posts

Posted 05 October 2008 - 11:52 AM

Actually I am quite sorry that Qfwfq and Buffy have removed themselves from this conversation as I think they both have sufficient education to follow my thoughts. Their only real problem is that they don't comprehend the problem I am talking about. I was hoping your presence would lead them to realize what I was discussing but it apparently has not.

Yes, that's a shame...

I also note that you didn't reference post #83 which applies most directly to your difficulties.

Oh, somehow I had missed that post, thanks.

That has to do with the structure and behavior of solutions to differential equations; not a trivial subject. I really don't think it would be of benefit to go into that here as it would probably take a year or so to communicate a clear explanation to you. Perhaps we can get into the subject of differential equations down the road sometime.

Yup.

I wouldn't say “handles the symmetry requirements”. They are only there because they allow me to write the separate constraints as if they are terms in a single equation: i.e., the fact that the process you are talking about can be done. That is, it is their existence in the fundamental equation which allows the recovery of the three original constraints: i.e., a solution to that fundamental equation must be a solution to the separate constraints.

Yup.

The important step is right there in post 42.
There is some additional knowledge of physics which might be valuable here. Angular momentum of an entity is given by the radius times the momentum (object is going in a circle). In quantum mechanics where momentum is related to the partial with respect to position, angular momentum around the z axis (at the moment the object is on the x axis) is essentially x times the partial with respect to y. (We need to talk about a spherical coordinate system to do this correctly). When one looks at such things one of the things which occurs is that angular momentum operators anti-commute. The alpha operators here have a lot of characteristics of angular momentum and that simple additional constraint essentially becomes a constraint that the sum of the “spins” of all the elements in the universe is zero. That fact shows up when I derive Dirac's equation.

Okay, hmmm... I suppose I should have more physics knowledge to understand exactly what you mean (I don't understand why the angular momentum of an entity is given in such and such a manner in QM, or what it means exactly), but I also suppose it is not important at this stage...(?)

But back to exactly what I am doing. The fundamental equation consists of a long sum of terms, many of which are multiplied by alpha and beta operators. If I choose a particular specific operator, multiply the entire equation through by that operator, and then commute that operator through the individual terms, all that happens is that each term changes sign as that operator commutes through another alpha or beta operator (except for the term which happens to contain exactly that operator). In that single term, the sign of the term is changed and an additional term is added which has no such operator. Thus one ends up with exactly the negative of every term (except for the time derivative, which just doesn't change sign). The difference between what you started with and what you finish with is that [imath]\vec{\Psi}[/imath] has been replaced everywhere with either plus or minus [imath]Op_q\vec{\Psi}[/imath], plus the addition of one term containing no alpha or beta operator. That additional term is exactly the term where [imath]Op_q[/imath] is the alpha or beta operator acting in that term. When we sum over q, the factor [imath]\sum_qOp_q\vec{\Psi}[/imath] is zero by definition of the alpha or beta operator (for the beta operator we must sum over both indices).

I think I understand that almost entirely, but right at the end there's just one strange ambiguity that I don't know how to interpret properly;

The chosen "particular specific operator" was denoted as [imath]\alpha_{qx}[/imath], I suppose if it was explicitly stated it could be, say, [imath]\alpha_{3x}[/imath], i.e. it just refers to one specific x in the input arguments.

But if it's a single specific index, then I don't understand what it means to "sum over q", i.e. how does one do a "sum over one specific x"? I'd expect to see just one term in that sum.

I assume from all the posts where you are trying to explain this that:

$\sum_q\frac{\partial}{\partial x_q}\vec{\Psi}=0$

is essentially exactly the same as:

$\sum_i\frac{\partial}{\partial x_i}\vec{\Psi}=0$

That makes sense in that I understand that it shouldn't matter whether you use "i" or "q" to indicate the summation index, but where I see the ambiguity that keeps confusing me is that on the one hand "q" refers to a specific index, but on the other hand it also refers to a summation index. Where am I getting it wrong?

No, what we have is

$\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = K\frac{\partial}{\partial t}\vec{\Psi}.$

If we multiply this through by [imath]\beta_{kl}[/imath] we have

$\beta_{kl}\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = \beta_{kl}K\frac{\partial}{\partial t}\vec{\Psi}.$

or, moving [imath]\beta_{kl}[/imath] to the right (it commutes with everything except the alpha and beta operators)

$\left\{\sum_i \beta_{kl}\vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{kl}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = K\frac{\partial}{\partial t}\beta_{kl}\vec{\Psi}.$

Since [imath]\beta_{kl}[/imath] anticommutes with all alpha and beta operators except the single beta operator where i=k and j=l, further commutation to the right only changes the sign except for that term where i=k and j=l, where it adds in a single term [imath]\delta_{ik}\delta_{jl}[/imath] ([imath]\delta_{pq}=0[/imath] if p is not equal to q and one if p=q; see the Kronecker delta).
Thus, after commutation to the right, we now have

$\left\{-\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i -\sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \beta_{kl} \right\}\vec{\Psi} + \delta(x_k-x_l)\delta(\tau_k-\tau_l)\vec{\Psi} = K\frac{\partial}{\partial t}\beta_{kl}\vec{\Psi}.$

Is that first term in the above equation missing [imath]\beta_{kl}[/imath] by accident, or where did it vanish? (I suspect it's missing by accident since your rearrangement implies it was supposed to be there... if I know my math at all... which I may not)

or, by simply rearranging terms,

$-\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\beta_{kl}\vec{\Psi} + \delta(x_k-x_l)\delta(\tau_k-\tau_l)\vec{\Psi}= K\frac{\partial}{\partial t}\beta_{kl}\vec{\Psi}.$

If we now sum this whole thing over k and l (where k is not equal to l) we get:

$-\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\sum_{k\neq l}\beta_{kl}\vec{\Psi} + \sum_{k\neq l}\delta(x_k-x_l)\delta(\tau_k-\tau_l)\vec{\Psi}= K\frac{\partial}{\partial t}\sum_{k\neq l}\beta_{kl}\vec{\Psi}.$

But the [imath]\sum_{k\neq l}\beta_{kl} = 0[/imath] (one of the specific constraints imposed when the alpha and beta operators were defined). Inserting those zeros into the above equation, we are left with,

$\sum_{k\neq l}\delta(x_k-x_l)\delta(\tau_k-\tau_l)\vec{\Psi}= 0.$

I think I understand that too now, except for the little strangeness regarding k & l referring to specific elements at first, but then suddenly appearing as summation indices... If I didn't know to look at them as summation indices (from all our conversations), I'd see a very short explicit sum in my mind :/

Yeah, I think I have a pretty decent idea of this step, I just hope you can still clear up my uneasiness with those "sums over specific indices", or however I should put it... :I

-Anssi