
What can we know of reality?



It is the consequences of that shift symmetry which yield the fact that, when you divide the universe into two parts (the element you are interested in and the context: i.e., the rest of the universe), you get some rather simple relationships. Simple universal relationships which appear in your world-view as “principles”.

 

Could not this first division be more aptly described as a duality?

 

Opposite but equal complementary forces that expand exponentially into ever more complex dualities?

 

This seems to me the first fundamental “principle” of reality.


This seems to me the first fundamental “principle” of reality.
That's nice; but I don't think it has anything to do with what I am talking about.
Okay, yeah definitely I need to understand the math better to be able to see how this unfolds...
Yeah, I think you have hit the nail on the head. I have no complaints about this particular post. So, back to that massive response I have been working on.
I am not sure what you mean when you say "probability of set number two given set number one exists". I guess it's not the same as just including both sets in the arguments of [imath]P_2[/imath]. Does it mean that the probability of set #2 cannot be known until we know whether set #1 actually occurred?
You need a little background in probability theory here. In selecting a collection of things from a source, the probability of getting a specific collection (using numerical labels for the references to the items) can be written [imath]P(x_1,x_2,x_3,\cdots, x_n)[/imath]. Now, if you know the probability of getting any specific element, you can calculate the probability of getting that set via the procedure of picking one item at a time: i.e.,

[math]P(x_1,x_2,x_3,\cdots, x_n)=P(x_1)P(x_2)P(x_3)\cdots P(x_n)[/math].

 

That procedure is fine so long as picking an item does not have any effect on the probability of getting the next item. If the probability of getting the item [imath]x_2[/imath] changes once [imath]x_1[/imath] has been picked, then the correct expression should be of the form (read the vertical bar as “given”):

[math]P(x_1,x_2,x_3,\cdots, x_n)=P(x_1)\,P(x_2\,|\,x_1\text{ has been picked})\,P(x_3\,|\,x_1\text{ and }x_2\text{ have been picked})\cdots P(x_n\,|\,\text{all the others have been picked}).[/math]

 

In other words, in order to be absolutely correct, one must include the possibility that the probability of getting set #2 can depend upon whether or not you already picked the set #1 for which you want the probability. Another way to see the situation is the fact that [imath]\vec{\Psi}(x_1,x_2)[/imath] cannot in general be presumed to be a product function: i.e., one cannot presume that [imath]\vec{\Psi}(x_1,x_2)=\vec{\Psi}_1(x_1)\vec{\Psi}_2(x_2)[/imath]. On the other hand, since we are dealing with probabilities, one can presume the probability of getting set #1 is a definable concept, and thus the probability of getting set #1 and set #2 (at the same time) can be expressed as a product function; however, if you are going to do that, you must use the probability of getting set #2 given set #1 has been obtained. Otherwise, you are not allowing for the existence of set #1 to influence the probability of getting set #2. The whole thing can be written the other way around: i.e., the probability of getting set #1 and set #2 is equal to the probability of getting set #2 times the probability of getting set #1 given set #2 has been obtained. It is just a question of being analytically correct. I have a suspicion you understand the circumstance.
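A small numerical sketch may make the distinction concrete (this example is an editorial illustration, not part of the original post; the urn, the colours and the numbers are invented purely for the example). Drawing marbles without replacement is the classic case where picking the first item changes the probability of the second:

[code]
from fractions import Fraction
from itertools import permutations

# Hypothetical urn: 3 red and 2 blue marbles, drawn without replacement.
marbles = ['R', 'R', 'R', 'B', 'B']

def p_first(colour):
    # probability that the first draw is `colour`
    return Fraction(marbles.count(colour), len(marbles))

def p_second_given_first(second, first):
    # probability of `second` on the second draw, given `first` was already removed
    remaining = marbles.copy()
    remaining.remove(first)
    return Fraction(remaining.count(second), len(remaining))

# Chain rule: P(R then B) = P(R) * P(B given R has been picked)
p_chain = p_first('R') * p_second_given_first('B', 'R')

# Brute-force check by enumerating every ordered draw of two marbles
pairs = list(permutations(range(len(marbles)), 2))
p_enumerated = Fraction(sum(1 for i, j in pairs
                            if marbles[i] == 'R' and marbles[j] == 'B'), len(pairs))

print(p_chain, p_enumerated)        # both 3/10
print(p_first('R') * p_first('B'))  # the naive independent product, 6/25 -- wrong here
[/code]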

There doesn't seem to be any mention of "the probability of set #2 depending on the existence of set #1" in that equation, or does that expression actually mean that both sets are included as arguments to both Psis?
You missed a subtle point. The equation you quote is

[math]\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots, t)=\vec{\Psi}_1(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n, t)\vec{\Psi}_2(\vec{x}_1,\vec{x}_2,\cdots, t).[/math]

 

Notice that the arguments of [imath]\vec{\Psi}_1[/imath] are specifically the n arguments associated with set #1, whereas the arguments of [imath]\vec{\Psi}_2[/imath] specifically end with “[imath]\cdots[/imath]”, implying that the number of arguments is infinite: i.e., the arguments include those associated with both set #1 and set #2.

I pretty much have to take that on faith at this time... :I
You shouldn't have to; it's pretty simple algebra. The original equation was

[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}=K\frac{\partial}{\partial t}\vec{\Psi}[/math]

 

If we substitute [imath]\vec{\Psi}_1\vec{\Psi}_2[/imath] for [imath]\vec{\Psi}[/imath] we get

[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}(\vec{\Psi}_1\vec{\Psi}_2)=K\frac{\partial}{\partial t}(\vec{\Psi}_1\vec{\Psi}_2)[/math]

 

Analyzing that expression term by term, the first sum over i (the term with the [imath]\vec{\nabla}_i[/imath]) can be divided into two sums: one over those “i”s associated with set #1 and the other over those “i”s associated with set #2. The sum over the Dirac delta functions can be divided into three sums: one where i and j are both selected from set #1, one where i and j are both selected from set #2, and a third sum where i and j are selected from different sets. The sum is over all i and j, so that third sum can actually be represented by two sums: one where i is selected from set #1 and j is selected from set #2, and one where i is selected from set #2 and j is selected from set #1; however, since i and j are mere indices and the Dirac delta function is symmetric under exchange of i for j, both those sums must be identical. So they can be included as twice one of them: i.e., two times the sum where i is selected from set #1 and j is selected from set #2. I like to call that term a “cross” term since it involves both sets. At this point, all I did was rewrite the terms, enclosing the terms involving set #1 in curly brackets, the cross term in curly brackets, and, finally, the terms involving set #2 in curly brackets. I did nothing with the right-hand side of the equation. The result is explicitly,

[math]\left\{\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2 + 2\left\{ \sum_{i=\#1 j=\#2}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\ \right\}\vec{\Psi}_1\vec{\Psi}_2+[/math]

 

[math] \left\{\sum_{\#2} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2 = K\frac{\partial}{\partial t}(\vec{\Psi}_1\vec{\Psi}_2).[/math]
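As a side note on the index bookkeeping (a toy numerical sketch added editorially, not part of the original derivation): the splitting of the double sum, and in particular the factor of 2 in front of the cross term, uses nothing but the symmetry of the summand under exchange of i and j.

[code]
import numpy as np

# Toy symmetric "interaction" standing in for beta_ij * delta(x_i - x_j);
# only the symmetry f[i, j] == f[j, i] matters for this bookkeeping.
rng = np.random.default_rng(0)
n = 6
set1, set2 = range(0, 3), range(3, 6)   # an arbitrary split of the indices into two sets
m = rng.normal(size=(n, n))
f = m + m.T                             # symmetric under exchange of i and j

total = sum(f[i, j] for i in range(n) for j in range(n) if i != j)

within_1 = sum(f[i, j] for i in set1 for j in set1 if i != j)
within_2 = sum(f[i, j] for i in set2 for j in set2 if i != j)
cross    = sum(f[i, j] for i in set1 for j in set2)   # i from set #1, j from set #2

# Because f is symmetric, the two mixed sums are equal, so the full sum is
# (within set #1) + (within set #2) + 2 * (cross term).
print(np.isclose(total, within_1 + within_2 + 2 * cross))   # True
[/code]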

 

...and now I'm lost :I
I would prefer it if you stopped and posted your confusion when you felt that way. If you don't, then your problems get too extensive and my answers get so long that they are confusing in their own right. I think I said earlier that this has to be taken one step at a time; remembering the details of the process is just too difficult. Besides, explaining things to you is valuable to me, as I find errors in my presentation. It is quite evident that right here I have omitted one term in the resultant expression. It is not a very important error, but it is an error nonetheless. I will try to fill in the steps you are missing.

 

First, let us left multiply the above equation by [imath]\vec{\Psi}_2^\dagger[/imath]. The result will be

[math]\vec{\Psi}_2^\dagger \cdot \left\{\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2 + 2\vec{\Psi}_2^\dagger \cdot \left\{ \sum_{i=\#1 j=\#2}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\ \right\}\vec{\Psi}_1\vec{\Psi}_2+[/math]

 

[math]\vec{\Psi}_2^\dagger \cdot\left\{\sum_{\#2} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2 = \vec{\Psi}_2^\dagger \cdot K\frac{\partial}{\partial t}(\vec{\Psi}_1\vec{\Psi}_2).[/math]

 

Now, term by term, we have to move that factor to the right, essentially factoring out (to the left) any terms which can be factored (in order to simplify the expression as much as possible). The thing which disallows moving that factor to the right is the difference in meaning between the following kinds of expressions.

[math] \vec{\Psi}_2^\dagger \cdot \vec{\nabla}_i \vec{\Psi}_1\vec{\Psi}_2[/math] and the expression [math]\vec{\nabla}_i \vec{\Psi}_2^\dagger \cdot \vec{\Psi}_2\vec{\Psi}_1[/math]

 

The [imath]\vec{\nabla}_i[/imath] is a differential operator (it takes derivatives of the functions following it) and, in the first expression, it operates only on [imath]\vec{\Psi}_1\vec{\Psi}_2[/imath] (it does not operate on [imath]\vec{\Psi}_2^\dagger[/imath]), whereas in the second expression it operates on all three functions. Essentially, we must examine each term for this problem as we move that factor to the right (when it is abreast of [imath]\vec{\Psi}_2[/imath] we can integrate over all the arguments in set #2 and get unity). Going back to the expression where we multiplied by [imath]\vec{\Psi}_2^\dagger[/imath], you will see that I have divided it into three terms shown in curly brackets. Each of those will be handled in a slightly different manner. With regard to the first term, since [imath]\vec{\Psi}_2[/imath] is a function of set #1, the [imath]\vec{\nabla}_i[/imath] operator for i taken from set #1 will produce two sums via the product rule for differentiating a product (the second term is the one I have erroneously omitted).

[math]\vec{\nabla}_i \vec{\Psi}_1\vec{\Psi}_2= \left\{\vec{\nabla}_i\vec{\Psi}_1\right\}\vec{\Psi}_2 +\vec{\Psi}_1\left\{\vec{\nabla}_i\vec{\Psi}_2\right\}[/math]

 

When we left-multiply by [imath]\vec{\Psi}_2^\dagger[/imath], the first of the two terms does not prevent our moving that expression over against [imath]\vec{\Psi}_2[/imath], as the [imath]\vec{\nabla}_i[/imath] is enclosed in brackets (which means it does not operate outside those brackets); however, the [imath]\vec{\nabla}_i[/imath] in the second term cannot be factored out, as it operates directly on [imath]\vec{\Psi}_2[/imath], which contains arguments from set #1.
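For readers who want to see the two cases side by side, here is a small symbolic sketch (an editorial illustration using scalar stand-ins for the Psis and sympy in place of the vector notation):

[code]
import sympy as sp

x1, x2 = sp.symbols('x1 x2')          # x1 stands in for a set #1 argument, x2 for set #2
Psi1 = sp.Function('Psi1')(x1)        # Psi_1 depends only on set #1 arguments
Psi2 = sp.Function('Psi2')(x1, x2)    # Psi_2 depends on arguments from both sets

# Differentiating the product with respect to a set #1 argument produces the two terms
# {dPsi1/dx1} Psi2 + Psi1 {dPsi2/dx1}; the second is the one that cannot be factored
# past Psi_2^dagger, because the derivative acts directly on Psi_2.
print(sp.diff(Psi1 * Psi2, x1))

# Differentiating with respect to a set #2 argument leaves only one term, because
# Psi_1 contains no set #2 arguments and its derivative is zero.
print(sp.diff(Psi1 * Psi2, x2))
[/code]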

 

The second term in that first set of curly brackets is the sum over the Dirac delta functions over arguments from set #1. Since there are no arguments there from set #2, that term can be factored out. The final result (prior to integration) for the first set of curly brackets is then:

[math] \left\{\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2^\dagger \cdot\vec{\Psi}_2 +\vec{\Psi}_2^\dagger \cdot\left\{ \sum_{\#1}\vec{\alpha}_i\cdot \vec{\nabla}_i \vec{\Psi}_2\right\}\vec{\Psi}_1[/math]

 

You ought to be able to do the algebra for the remaining portions of the original equation, because the rest all contain either operators which operate on arguments from set #2 or expressions which depend on those arguments (i.e., they cannot be factored from the integral we will do next). The single term on the right side of the equal sign (involving the partial with respect to time) must also be handled by the product rule and generates two terms, the first of which can be factored from the integral. The final result, after we integrate over all arguments from set #2, can be written in the form:

[math]\left\{\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\Psi}_1 + \left\{2 \sum_{i=\#1 j=\#2}\int \vec{\Psi}_2^\dagger \cdot \beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 \right. +[/math]

 

[math]\int\vec{\Psi}_2^\dagger\cdot \sum_{\#1} \vec{\alpha}_i\cdot \vec{\nabla}_i \vec{\Psi}_2dV_2 +[/math]

 

[math] \left.\int \vec{\Psi}_2^\dagger \cdot \left[\sum_{\#2} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right]\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1+K \left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1[/math]

Does that essentially mean that the integral of "all the possibilities" is 1?
Yes, that is correct. When we integrate over all arguments, the result will be unity if the entity under the integral sign is [imath]\vec{\Psi}^\dagger\cdot\vec{\Psi}dV[/imath], as that is defined to be the probability of seeing a particular set of arguments. The “integral” is essentially an infinite sum of those probabilities over all possibilities. By definition, the probability of anything, summed over all possibilities, is one.
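A toy discrete version of that statement (an editorial illustration, with a finite list of "possibilities" standing in for the integral):

[code]
import numpy as np

# The "possibilities" are a finite list of argument values; Psi assigns an amplitude to each.
rng = np.random.default_rng(1)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)

# Normalize so that Psi^dagger . Psi, summed over all possibilities, is one.
psi /= np.sqrt(np.sum(np.abs(psi) ** 2))

probabilities = np.abs(psi) ** 2   # the probability assigned to each possibility
print(probabilities.sum())         # 1.0, up to floating-point rounding
[/code]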
We did? Oh, but then it was [imath]\vec{\Psi}_2[/imath] that could be symmetric to exchange of arguments (i.e. the invalid elements)? Looks like I will need to refresh my memory regarding that issue... For now I take it on faith.
Go back to post #171, or look at my later explanation to Bombadll in post #180, which covers the same issue.
And here I got lost again.
I decided to quit trying to get you through the problems you have brought up. As I said earlier, you should stop the moment anything bothers you, and we should be covering these things one step at a time. This post has already gotten far too long and may very well contain things which do not really resolve your problems: i.e., the volume of problems easily gets excessive. Suppose we make sure what I have said above is understood before we go on with the rest of your problems. We should be able to get to it all eventually.

 

Sorry to cut you off.

 

Have fun -- Dick


I have essentially stated that one's world-view is built from a mixture of things: things which are true representations of reality and things which are figments of your imagination; now, if you are going to suggest that your world-view is built only from things you know are true, I will assert that you are both closed-minded and deluded. What I have said is that, no matter what the “true facts” are, there exists a collection of “figments” which will make my equation valid (for both the “true facts” and the “figments”).

 

I would have to agree with this, and I would say that it is probably the only world-view that can be logically defended, but it is not the act of forming the fundamental equation that I’m wondering about but rather the act of interpreting that equation. I have noticed that you have started to form somewhat of an interpretation of the equation, and while I know that you have a defense of such statements, it seems to me that any such defense must be in the solutions. I think that this is what you are referring to with the statement:

 

This is what I am talking about when I complain about people bringing too much baggage to the discussion. Until we have a specific solution to the fundamental equation, we have no justification for any interpretation of the equation itself. The equation is no more than a statement of a requirement demanded by self-consistency; the implications of that constraint are so complex (in their entirety) that those consequences are essentially beyond comprehension.

 

I see very little that we can interpret directly from the fundamental equation until we have at least a specific solution such as the one that you are referring to.

 

Why can't you change that sentence to, “now what I have seen seems to tell me something about what a logical explanation of anything must satisfy”? Is not mathematics the essence of logic?

 

I would have to agree with this. I see no reason why we can’t solve, by use of math, any problem that can be solved by use of logic, although any such problem must be put in a form to which math can be applied. In fact, I would say that these two things are interchangeable.

 

These constraints have certainly limited the solutions to a very distinct set; however, I have proved that there exists no set of “true facts” which cannot be reproduced as a solution to my equation. The problem is that the proof is somewhat abstract and that bothers a lot of people; they want the consequences (which are, as I said, essentially beyond comprehension) to be clearly comprehended.

 

I’m not quite sure what you are referring to here. Are you talking about deriving the fundamental equation, deriving the Schroedinger equation, or something that you have not yet shown how to do? It sounds like something that we have not gotten to yet.


I’m not quite sure what you are referring to here. Are you talking about deriving the fundamental equation, deriving the Schroedinger equation, or something that you have not yet shown how to do? It sounds like something that we have not gotten to yet.
The answer to that question is, “yes and no”. I have already commented, in this thread, about the fact that my fundamental equation can be seen as the dynamic description of a collection of infinitesimal dust motes. It should be clear to anyone who has had much experience with differential equations of many variables that the dust mote solution is a valid solution but, as I said, “the consequences are essentially beyond comprehension”. I doubt anyone here would conclude that what we see as "the universe" could be the emergent consequences of such a dust mote solution.

 

That they are the emergent consequences is the central issue of that deduction of Schroedinger's equation. What I have shown is that Schroedinger's equation is an approximate solution for the behavior of one “dust mote” in the context of a large number of dust motes whose behavior is known. You should note that I said, “whose behavior is known” and made no constraint on what that behavior was; a very important issue.

 

Finally, there is another perspective on the problem which yields a correct solution for the entire collection of variables. Go look at my thread, “A simple geometric proof with profound consequences”. Turtle seemed to have some difficulty understanding what I was proving so I made a drawing of the case where n=3.

 

The suggestion that the universe “IS” a three-dimensional projection of an n-dimensional equilateral polyhedron with unit edges is about the most easily defended TOE which exists. As I said two years ago,

Thus, an excellent explanation of the universe is, it IS “a rotating n dimensional equilateral polyhedron with unit edges projected on a three dimensional space”.
There exist no experiments I am aware of that disprove that assertion. Once again, we are only talking about solutions to my “fundamental equation”.

 

If you want to discuss that proof, comment on that thread and I will respond.

 

Have fun -- Dick


You need a little background in probability theory here.

[...]

i.e., the probability of getting set #1 and set #2 is equal to the probability of getting set #2 times the probability of getting set #1 given set #2 has been obtained. It is just a question of being analytically correct. I have a suspicion you understand the circumstance.

 

Yeah, I think I understand it now; I was a little bit uncertain earlier.

 

 

====== QUOTE #240 ======

You missed a subtle point. The equation you quote is

 

[math]\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots, t)=\vec{\Psi}_1(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n, t)\vec{\Psi}_2(\vec{x}_1,\vec{x}_2,\cdots, t).[/math]

 

Notice that the arguments of [imath]\vec{\Psi}_1[/imath] are specifically the n arguments associated with set #1, whereas the arguments of [imath]\vec{\Psi}_2[/imath] specifically end with “[imath]\cdots[/imath]”, implying that the number of arguments is infinite: i.e., the arguments include those associated with both set #1 and set #2.

======

 

Oh I see... (I probably would have never picked that up myself :P)

 

====== QUOTE #240 ======

Analyzing that expression term by term, the first sum over i (the term with the [imath]\vec{\nabla}_i[/imath]) can be divided into two sums: one over those “i”s associated with set #1 and the other over those “i”s associated with set #2. The sum over the Dirac delta functions can be divided into three sums: one where i and j are both selected from set #1, one where i and j are both selected from set #2, and a third sum where i and j are selected from different sets. The sum is over all i and j, so that third sum can actually be represented by two sums: one where i is selected from set #1 and j is selected from set #2, and one where i is selected from set #2 and j is selected from set #1; however, since i and j are mere indices and the Dirac delta function is symmetric under exchange of i for j, both those sums must be identical. So they can be included as twice one of them: i.e., two times the sum where i is selected from set #1 and j is selected from set #2.

======

 

Okay, thanks. One of my first questions about that equation, once I'd started to try and understand it, would have been why there is that multiplication by 2.

 

====== QUOTE #240 ======

I like to call that term a “cross” term since it involves both sets. At this point, all I did was rewrite the terms, enclosing the terms involving set #1 in curly brackets, the cross term in curly brackets, and, finally, the terms involving set #2 in curly brackets. I did nothing with the right-hand side of the equation. The result is explicitly,

 

[math]\left\{\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2 + 2\left\{ \sum_{i=\#1 j=\#2}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\ \right\}\vec{\Psi}_1\vec{\Psi}_2+[/math]

 

[math] \left\{\sum_{\#2} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2 = K\frac{\partial}{\partial t}(\vec{\Psi}_1\vec{\Psi}_2).[/math]

======

 

Okay, that seems valid to me. Thank you for the clarifications; everything in your post seems pretty clear up to this point. I am running out of time, so I need to continue from here, hopefully tomorrow.

 

-Anssi


Thank you for the clarifications; everything in your post seems pretty clear up to this point. I am running out of time, so I need to continue from here, hopefully tomorrow.
Thank you for taking the trouble to understand what I am saying. Actually, every step is roughly as straightforward as what you have already managed to follow; it just takes a little time to follow it. The problem is that there always exist possible points of confusion in any conversation, so I need to know when that confusion occurs; otherwise, I have no way of knowing what the problem is.

 

Thanks again for talking to me -- Dick


I'm immediately struggling when trying to understand how that [imath]\vec{\Psi}_2^\dagger[/imath] moves through the equation;

 

====== QUOTE #240 ======

First, let us left multiply the above equation by [imath]\vec{\Psi}_2^\dagger[/imath]. The result will be

 

[math]\vec{\Psi}_2^\dagger \cdot \left\{\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2 + 2\vec{\Psi}_2^\dagger \cdot \left\{ \sum_{i=\#1 j=\#2}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\ \right\}\vec{\Psi}_1\vec{\Psi}_2+[/math]

 

[math]\vec{\Psi}_2^\dagger \cdot\left\{\sum_{\#2} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2 = \vec{\Psi}_2^\dagger \cdot K\frac{\partial}{\partial t}(\vec{\Psi}_1\vec{\Psi}_2).[/math]

 

Now, term by term, we have to move that factor to the right, essentially factoring out (to the left) any terms which can be factored (in order to simplify the expression as much as possible). The thing which disallows moving that factor to the right is the difference in meaning between the following kinds of expressions.

 

[math] \vec{\Psi}_2^\dagger \cdot \vec{\nabla}_i \vec{\Psi}_1\vec{\Psi}_2[/math] and the expression [math]\vec{\nabla}_i \vec{\Psi}_2^\dagger \cdot \vec{\Psi}_2\vec{\Psi}_1[/math]

 

The [imath]\vec{\nabla}_i[/imath] is a differential operator (it takes derivatives of the functions following it)

================

 

...and sums them? Or multiplies? Or whatever operations have been placed between the functions? (but then if there's a dot product... doesn't make sense to me :( )

 

I do not even understand how something like [math] \vec{\Psi}_2^\dagger \cdot \vec{\nabla}_i \vec{\Psi}_1\vec{\Psi}_2[/math] would be carried out really. There's a dot product between [math] \vec{\Psi}_2^\dagger [/math] and what?

 

I guess you cannot first take the derivatives of [math] \vec{\Psi}_1 [/math] and [math] \vec{\Psi}_2 [/math] since a dot product between a vector and a derivative is not possible... no?

 

I've been scratching my head for a while now but I am missing some critical bit of mathematical knowledge here clearly...

 

(Sorry to everybody reading this thread about it turning into a bit of a math lecture again :P

 

-Anssi


Hi Anssi, sorry about taking so long to answer but we have had house guests again. It is a good excuse to do a good house cleaning and I have spent the last week cleaning the place up. We need guests every once in a while to move us to clean things up a bit. It's been a while and some things really needed to be done. It is nice to see the place beautiful again. So now I can loaf around and do nothing for a while.

Sorry to everybody reading this thread about it turning into a bit of a math lecture again :P
I think that's the cost of doing business; at least you ask when what is being said is not clear. Mathematics is a language with subtle expressions and it is easy for someone fluent in mathematics to overlook the complexity of what is being said. The problem here is that what is actually being said has been so compressed by the notation that one not fluent in mathematics can't see the expanded expression.

 

First of all, there are two totally independent vector spaces being expressed. One is the abstract space expressed in the notation [imath]\vec{\Psi}[/imath] which, written in detail would be [imath]\Psi_{x_1}\hat{x}_1+\Psi_{x_2}\hat{x}_2 + \cdots \Psi_{x_m}\hat{x}_m[/imath] where [imath]\hat{x}_i[/imath] is a unit vector pointing in the i direction in that abstract space. In this case, [imath]x_i[/imath] has absolutely nothing to do with the [imath]x_i[/imath] used elsewhere as a reference to an ontological element. [imath]\vec{\Psi}[/imath] could just as well have been written [imath]\Psi_{p_1}\hat{p}_1+\Psi_{p_2}\hat{p}_2 + \cdots \Psi_{p_m}\hat{p}_m[/imath] except for the fact that it is convention to use x (or x,y,z) for directions in a space. In essence we simply have two different ways of being confusing; writing the expression as [imath]\vec{\Psi}[/imath] essentially avoids the problem of facing this issue. If one is fluent in mathematics, it is sufficient to say that [imath]\vec{\Psi}[/imath] is a vector in an abstract space.

 

Oh, and the other vector space of interest is the so-called x, tau space, where x is the coordinate on which our points standing for the references to ontological elements [imath]x_i[/imath] are laid out. In this case there are but two axes (x and tau), and i stands for a specific ontological element and not a specific axis as it does in the expression [imath]\Psi_{p_1}\hat{p}_1+\Psi_{p_2}\hat{p}_2 + \cdots \Psi_{p_m}\hat{p}_m[/imath] above.

 

We have dot products expressed in both of these vector spaces and these are totally independent operations. These notations amount to an extreme compression which, if written out in its entirety, would be overwhelmingly complex. What is important is that the reader be able to comprehend the meaning of the notation in such a way as to understand exactly what kinds of problems arise if the problem were written out in detail. When written in detail, we are dealing with little more than simple algebra. For example, let us look at the expression

[math] \vec{\Psi}_2^\dagger \cdot \vec{\nabla}_i \vec{\Psi}_1\vec{\Psi}_2[/math]

 

To begin with, I omitted the factor [imath]\vec{\alpha}_i[/imath] because it is a mere constant and yields no consequences with regard to the commutation of [imath]\vec{\Psi}_2^\dagger[/imath]; however, that omission sort of removes the meaning of the second dot product there so, to properly expand the expression, I will put that factor back in. We then have:

[math] \vec{\Psi}_2^\dagger \cdot \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_1\vec{\Psi}_2[/math]

 

Before we can write out the expanded version, we need to express all the factors in the above expression. I don't think I should actually write the full expanded expression out because it will take up entirely too much room but I will write out the individual parts in detail. I will start from the right with [imath]\vec{\Psi}_2[/imath], using the “p” notation for the axes in its abstract space to avoid confusion.

[math]\vec{\Psi}_2 =\Psi_{p_1}\hat{p}_1+\Psi_{p_2}\hat{p}_2 + \cdots \Psi_{p_m}\hat{p}_m[/math]

 

however, we are concerned with the action of [imath]\vec{\nabla}_i[/imath] which operates on the functions of [imath]\vec{x}_i[/imath] (our representations of those ontological elements) so our detailed example must explicitly include those arguments. That is, the above should be written

[math]\vec{\Psi}_2 =\Psi_{p_1}(\vec{x}_1,\vec{x}_2 \cdots , t) \hat{p}_1+\Psi_{p_2}(\vec{x}_1,\vec{x}_2 \cdots , t)\hat{p}_2 + \cdots \Psi_{p_m}(\vec{x}_1,\vec{x}_2 \cdots , t)\hat{p}_m[/math]

 

Of course, every [imath]\vec{x}_i[/imath] should be expanded into the two explicit arguments [imath]x_i[/imath] and [imath]\tau_i[/imath] so that we can see the detail of the action of [imath]\vec{\nabla}_i[/imath] which expands to

[math]\vec{\nabla}_i = \frac{\partial}{\partial x_i}\hat{x} +\frac{\partial}{\partial \tau_i}\hat{\tau}[/math]

 

But this is actually reduced by the fact that the actual expression is a dot product with [imath]\vec{\alpha}_i[/imath] so we might as well insert that result instead.

[math]\vec{\alpha}_i \cdot \vec{\nabla}_i = \alpha_{x_i}\frac{\partial}{\partial x_i} +\alpha_{\tau_i}\frac{\partial}{\partial \tau_i}[/math]
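If it helps to see that two-component dot product acted out, here is a symbolic sketch (an editorial illustration; the alphas are treated as ordinary constant symbols, which ignores their operator character but shows the bookkeeping of the expansion):

[code]
import sympy as sp

x_i, tau_i = sp.symbols('x_i tau_i')
alpha_x, alpha_tau = sp.symbols('alpha_x alpha_tau')   # treated as plain constants in this sketch
Psi = sp.Function('Psi')(x_i, tau_i)

# alpha_i . nabla_i applied to Psi: alpha_x * dPsi/dx_i + alpha_tau * dPsi/dtau_i
result = alpha_x * sp.diff(Psi, x_i) + alpha_tau * sp.diff(Psi, tau_i)
print(result)
[/code]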

 

At this point, we need to expand the representation of [imath]\vec{\Psi}_1[/imath] into an explicitly detailed representation. That would look just like the expansion of [imath]\vec{\Psi}_2[/imath] except for the fact that the abstract vector space is totally independent of the one we have represented by the unit vectors [imath]\hat{p}_k[/imath]. I suppose we could use “[imath]\hat{q}_k[/imath]” to represent unit vectors in that space.

 

What we will end up with is a massive expression so involved that it would be difficult to write down. Remember, it is a product of at least four different sums which need to be expanded; when that product is algebraically written out, the number of terms doubles with every multiplication. Furthermore, what we are representing is but a single term in those sums expressed in

[math]\vec{\Psi}_2^\dagger \cdot \left\{\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2 + 2\vec{\Psi}_2^\dagger \cdot \left\{ \sum_{i=\#1 j=\#2}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\ \right\}\vec{\Psi}_1\vec{\Psi}_2+[/math]

 

[math]\vec{\Psi}_2^\dagger \cdot\left\{\sum_{\#2} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2 = \vec{\Psi}_2^\dagger \cdot K\frac{\partial}{\partial t}(\vec{\Psi}_1\vec{\Psi}_2).[/math]

 

What I am trying to get across here is that what we are working with is so extensive and complex that actually writing everything out in detail is next to impossible. What is important is that we comprehend that, whatever the details of that expanded perspective are, they will actually be no more than an algebraic sum of specific products of functions and differential operators. What we need to handle are the consequences of migrating that [imath]\vec{\Psi}_2^\dagger[/imath] through that massive collection of terms. The central issue seems to be one of commutation of terms but, except for the differential operators, that really isn't the issue. Except for the alpha and beta operators (which are not being moved) and the differential operators, everything is just ordinary algebra where everything commutes.

 

Eventually, we want to integrate over all the arguments of set #2, and the real issue is what can be factored out of that integration. Essentially, we can look at the consequences of the expansion of the various factors given above, see what kind of results the differential operator produces on individual elements, and then reconstruct the original factors either inside the integration (when the factors depend on arguments in set #2) or outside the integration (when the factors do not depend upon the arguments in set #2). As far as the differential operators are concerned, we are dealing with the differential of a product function, and the consequence of that differential yields a sum of two terms via the product rule of differentiation,

[math]\frac{\partial}{\partial x_i}\Psi_1(x_1,x_2,\cdots,x_n)\Psi_2(x_1,x_2,\cdots,x_n, x_{n+1}, \cdots)=\left\{\frac{\partial}{\partial x_i}\Psi_1\right\}\Psi_2 + \Psi_1\left\{\frac{\partial}{\partial x_i}\Psi_2 \right\}[/math]

 

What you need to realize is the fact that these two terms only exist if the argument referred to as “i” is in both functions. If “i” is taken from set #1, one obtains the two terms expressed above; however, if “i” is taken from set #2, those arguments do not exist in [imath]\Psi_1[/imath], so the differential operator in the first term yields zero and we only get one term. Fundamentally, when one reconstructs the factors expressed in the vector notation, this yields that extra expression you find in

Post #240

[math]\int\vec{\Psi}_2^\dagger\cdot \sum_{\#1} \vec{\alpha}_i\cdot \vec{\nabla}_i \vec{\Psi}_2dV_2 [/math]

 

Notice that the sum inside that integral is over the arguments of set #1, whereas the integral is over the arguments of set #2. The reason that sum over [imath]\vec{\nabla}_i[/imath] cannot be factored out is that [imath]\vec{\Psi}_2[/imath] depends upon the arguments of set #1, so the values of the partials with respect to set #1 are not necessarily proportional to [imath]\vec{\Psi}_2[/imath]. Please note that, if there is no algebraic function inside that integral, [imath]\int\vec{\Psi}_2^\dagger\cdot \vec{\Psi}_2dV_2 [/imath] is unity even with any internal dependence on set #1, as it amounts to a sum over all possibilities for set #2 and that is normalized to unity by definition (the sum of the probabilities over all possibilities is one by the definition of probability). If you go back to the original expression in post #240

[math]\left\{\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\Psi}_1 + \left\{2 \sum_{i=\#1 j=\#2}\int \vec{\Psi}_2^\dagger \cdot \beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 \right. +[/math]

 

[math]\int\vec{\Psi}_2^\dagger\cdot \sum_{\#1} \vec{\alpha}_i\cdot \vec{\nabla}_i \vec{\Psi}_2dV_2 +[/math]

 

[math] \left.\int \vec{\Psi}_2^\dagger \cdot \left[\sum_{\#2} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right]\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1+K \left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1[/math]

 

you should see that it essentially consists of five terms. The first term involves nothing but set #1; the second term involves the interaction between set #1 and set #2; the third term is the term generated by the product rule when differentiating the product; the fourth term is simply an integral over all possibilities for the fundamental equation as obeyed by set #2; and the fifth term (on the other side of the equal sign) is the product-rule result of differentiating the product function with respect to t. That is to say that the portion of the fifth term under the integral sign is exactly the integral of the right side of the fundamental equation if set #1 didn't exist.

 

Essentially, I have mathematically divided the universe into two portions (which together must obey the fundamental equation), each of which obeys the fundamental equation on its own as a separate entity, plus two terms (the Dirac delta function between sets #1 and #2, and the differential with respect to set #1 of [imath]\vec{\Psi}_2[/imath]) which express the interaction required between the two different pieces of the universe in order to fulfill the required symmetries of the entire collection.

...and sums them? Or multiplies? Or whatever operations have been placed between the functions? (but then if there's a dot product... doesn't make sense to me :( )
With regard to the “dot” products, all you have to remember is that every time I use the vector notation, that vector is being expressed in some abstract space (it is nothing more than a collection of independent variables each of which is a different orthogonal direction in that abstract space). Even the x, tau space can be seen as an abstract space within which one can express a set of numbers as a vector in that space. Essentially, this defines what we mean by “space”. What you need to understand is that any collection of information can be represented as a set of points in an abstract space. In order to know what the “dot” product symbolizes (in terms of the actual numbers being multiplied together), you need to establish which of these abstract spaces you are referring to. The only way to understand that is through the idea that the vector notation above a symbol indicates that the thing being represented is in its own vector space. In order to define a “dot” product, you need two vectors in the same vector space.
I guess you cannot first take the derivatives of [math] \vec{\Psi}_1 [/math] and [math] \vec{\Psi}_2 [/math] since a dot product between a vector and a derivative is not possible... no?
If you go back and look at the definitions of [imath]\vec{\nabla}_i[/imath] and [imath]\vec{\alpha}_i[/imath], you will discover that they are in the same abstract x, tau vector space. Likewise, any time I talk about the probability being given by a dot product (some [imath]\vec{\Psi}^\dagger \cdot \vec{\Psi}[/imath]), those two vectors must be in the same abstract vector space, so the “dot” products being referred to are pretty straightforward.
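A small numerical sketch of that point (an editorial illustration with invented numbers): the two dot products live in different spaces and never pair components across them.

[code]
import numpy as np

# A vector in the abstract space in which Psi lives (components along p_1, p_2, p_3):
psi = np.array([0.6 + 0.2j, 0.1 - 0.3j, 0.5 + 0.4j])
psi = psi / np.sqrt(np.vdot(psi, psi).real)

# Psi^dagger . Psi is a dot product taken over those abstract components:
print(np.vdot(psi, psi).real)        # 1.0

# A completely separate dot product lives in the two-dimensional x, tau space,
# e.g. alpha_i . nabla_i, pairing the x component with the x component and the
# tau component with the tau component:
alpha = np.array([0.3, -0.7])        # (alpha_x, alpha_tau) -- toy numbers
grad = np.array([1.2, 0.4])          # (dPsi/dx_i, dPsi/dtau_i) at some point -- toy numbers
print(alpha @ grad)                  # a single number; the two products never mix
[/code]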

 

I know full well that some of what I have just said is going to be confusing to you. Please ask questions so I can know exactly how and why you are confused.

 

Have fun -- Dick


DoctorDick: This article might interest you: PhilSci Archive - Where do the laws of physics come from?
I have read the paper referenced in that article and I really don't think the ideas he wants to talk about are well presented. It seems to me he is overlooking a number of important issues. Do you really find his presentation easier to understand than mine? :thumbs_up:turtle:

 

Have fun -- Dick


Hi Anssi, sorry about taking so long to answer but we have had house guests again. It is a good excuse to do a good house cleaning and I have spent the last week cleaning the place up. We need guests every once in a while to move us to clean things up a bit. It's been a while and some things really needed to be done. It is nice to see the place beautiful again. So now I can loaf around and do nothing for a while.

 

Heh, yeah that happens to us as well, or actually the previous party we had was so extempore that I was just tidying up a bit while people were already here, since I arrived with them myself :) Oh well...

 

===== QUOTE #246 =====

First of all, there are two totally independent vector spaces being expressed. One is the abstract space expressed in the notation [imath]\vec{\Psi}[/imath] which, written in detail would be [imath]\Psi_{x_1}\hat{x}_1+\Psi_{x_2}\hat{x}_2 + \cdots \Psi_{x_m}\hat{x}_m[/imath] where [imath]\hat{x}_i[/imath] is a unit vector pointing in the i direction in that abstract space.

================

 

And that's the vector space for that vector whose magnitude correlates to the probability of seeing the input arguments in the x,tau,t-space...?

 

===== QUOTE #246 =====

In this case, [imath]x_i[/imath] has absolutely nothing to do with the [imath]x_i[/imath] used elsewhere as a reference to an ontological element. [imath]\vec{\Psi}[/imath] could just as well have been written [imath]\Psi_{p_1}\hat{p}_1+\Psi_{p_2}\hat{p}_2 + \cdots \Psi_{p_m}\hat{p}_m[/imath] except for the fact that it is convention to use x (or x,y,z) for directions in a space. In essence we simply have two different ways of being confusing; writing the expression as [imath]\vec{\Psi}[/imath] essentially avoids the problem of facing this issue. If one is fluent in mathematics, it is sufficient to say that [imath]\vec{\Psi}[/imath] is a vector in an abstract space.

 

Oh, and the other vector space of interest is the so-called x, tau space, where x is the coordinate on which our points standing for the references to ontological elements [imath]x_i[/imath] are laid out. In this case there are but two axes (x and tau), and i stands for a specific ontological element and not a specific axis as it does in the expression [imath]\Psi_{p_1}\hat{p}_1+\Psi_{p_2}\hat{p}_2 + \cdots \Psi_{p_m}\hat{p}_m[/imath] above.

 

We have dot products expressed in both of these vector spaces and these are totally independent operations.

================

 

I feel I have a pretty good handle on what abstract vector spaces mean and why a dot product requires the vectors to be in the same vector space, but still I find one part of [math] \vec{\Psi}_2^\dagger \cdot \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_1\vec{\Psi}_2[/math] quite confusing.

 

I think I get the dot product between [math]\vec{\alpha}_i[/math] and [math]\vec{\nabla}_i[/math] now that you explained it, but then the dot product placed between [math]\vec{\Psi}_2^\dagger[/math] and [math]\vec{\alpha}_i[/math], what happens there? The [math]\vec{\Psi}_2^\dagger[/math] is a vector in the "probability vector space" and the [math]\vec{\alpha}_i[/math] is a vector in the x,tau,t-space, right?

 

So I reckon that means it's a dot product between [math]\vec{\Psi}_2^\dagger[/math] and [math]\vec{\Psi}_1[/math] and/or [math]\vec{\Psi}_2[/math]? I guess that makes sense since you are trying to migrate it next to [math]\vec{\Psi}_2[/math], in order to arrive at that definition of unity:

 

[imath]\int\vec{\Psi}_2^\dagger\cdot \vec{\Psi}_2dV_2 [/imath]

 

Am I on the right track at all? (Still have things to digest in that post)

 

-Anssi


Does this mean that you're both presenting the same idea?
No, there are similarities between my thoughts and his, but I don't think he has seen what I have seen. That comment had to do with the form of his presentation. I don't think Stenger has comprehended the essence of what I have discovered. What he appears to have recognized is the significance of symmetry in our understanding of the universe; however, I would tend to judge the arguments he puts forward as emotional and not intellectual. He is trying to convince his audience by argument, not by analysis. Essentially, I think he has seen the consequence without seeing the absolute formulation of the necessity.
Am I on the right track at all? (Still have things to digest in that post)
You are right on the money except for one simple point. The probability P(set #2) has been expressed as the squared magnitude of a vector ([imath]\vec{\Psi}_2[/imath]) in an abstract vector space. That vector space has nothing to do with either the vector space of x and tau or the abstract vector space used to express P(set #1) as the squared magnitude of [imath]\vec{\Psi}_1[/imath].

 

Perhaps the issue can be clarified with a simple set of parentheses. Instead of writing [imath] \vec{\Psi}_2^\dagger \cdot \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_1\vec{\Psi}_2[/imath] as I did, suppose we instead write it as [imath] \vec{\Psi}_2^\dagger \cdot \left\{ \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_1\right\}\vec{\Psi}_2[/imath]. As far as the vector [imath]\vec{\Psi}_2^\dagger[/imath] is concerned, what is enclosed in the parentheses is nothing more than an algebraic operator which operates on each vector component of [imath]\vec{\Psi}_2[/imath]. Essentially, the important point is that the factor [imath]\vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_1[/imath] does not have the same effect on every component of [imath] \vec{\Psi}_2[/imath].
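One way to read that last point (a toy sketch added editorially, under the assumption that the bracketed factor simply weights each component of [imath]\vec{\Psi}_2[/imath] by a different number): a factor that acted the same on every component could be pulled straight out of the dot product, but in general it cannot.

[code]
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for Psi_2: a normalized vector of complex components in its abstract space.
psi2 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi2 /= np.sqrt(np.vdot(psi2, psi2).real)

# Toy stand-in for the bracketed factor: a different weight on each component.
factor = rng.normal(size=4)

# Psi_2^dagger . (factor * Psi_2) is perfectly well defined ...
print(np.vdot(psi2, factor * psi2).real)

# ... but only a factor with the SAME effect on every component can be pulled out
# of the dot product as an ordinary number:
uniform = 0.7 * np.ones(4)
print(np.vdot(psi2, uniform * psi2).real)   # exactly 0.7 * (Psi_2^dagger . Psi_2) = 0.7
[/code]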

 

Please don't fail to notice this comment

Furthermore, what we are representing is but a single term in those sums expressed in

[math]\vec{\Psi}_2^\dagger \cdot \left\{\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2 + 2\vec{\Psi}_2^\dagger \cdot \left\{ \sum_{i=\#1 j=\#2}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\ \right\}\vec{\Psi}_1\vec{\Psi}_2+[/math]

 

[math]\vec{\Psi}_2^\dagger \cdot\left\{\sum_{\#2} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2 = \vec{\Psi}_2^\dagger \cdot K\frac{\partial}{\partial t}(\vec{\Psi}_1\vec{\Psi}_2).[/math]

 

It is integration over all arguments of set #2 which is used to eliminate those arguments from the problem. In order to do that, we must uncover the relationships from the fundamental equation which can be factored from the original relationship. This means we need to examine the consequences of the differential operator (which I have explicitly given) and the consequences of the Dirac delta function (which only comes into play with the integration itself). The final purpose of this endeavor will be to obtain a functional relationship which explains the behavior of one element. This first step was to elucidate the analytical procedure which, through integration (essentially a sum over all possibilities), is capable of removing a set of variables (in this case, set #2) from my fundamental equation.

 

What I think you need to know is that this kind of thing essentially plays no part in current physics. My fundamental equation is what many physicists would call a microscopic many-body equation, something they simply consider to be insoluble. My equation is valid only when the entire universe is included in the data represented by the collection of [imath]\vec{x}_i[/imath] arguments. Normal physics never dreams of including the rest of the universe in a physics expression; essentially, physicists presume the behavior of the bodies of interest does not depend upon what the rest of the universe is doing. In many respects, what I am doing here is demonstrating that such a step is reasonable (at least so long as our expectations for the rest of the universe can be expressed).

 

One thing anyone familiar with differential equations will realize is that it is the boundary conditions which set the solutions of a differential equation to specific functions. A differential equation tells one about relationships but seldom limits a solution to one possibility. From an analytical perspective, boundary conditions are conditions imposed on the differential equation by “the rest of the universe”. That is to say, it is exactly what impact the rest of the universe has on the behavior of a body which actually sets that behavior. These integrals (which remove the specific arguments from my fundamental equation) represent exactly what the impact of the rest of the universe is (at least, exactly what our expectations of that impact are; a subtle but important point). What I am getting at is the fact that the issues I am closing in on here are issues which modern physics totally ignores (and that includes Stenger's publication ughaibu brought up).

 

I hope that helps!

 

Have fun -- Dick


You've posted your idea on several boards, about ten threads here at Hypography; I think it might be worth considering the possibility that your presentation isn't the most accessible.
Oh, then you find Stenger's paper easier to understand? If that is the case, why did you ask if we were presenting the same thing?
Does this mean that you're both presenting the same idea?
I can only conclude that you don't understand either of us. :)

 

Actually, I have no idea as to what I have said you do not understand. If you would explain your problems, perhaps I could help. At least Anssi points out what he finds confusing.

 

Have fun -- Dick


Oh, then you find Stenger's paper easier to understand?
Where the hell did I say that?
If you would explain your problems, perhaps I could help
Quite. It might help if your presentation were such as to generate enough interest for me to find out whether or not I have problems.