Science Forums

What can we know of reality?



Hi Anssi, I think the biggest problem with almost any attempt to communicate my thoughts to anyone is the fact that they are all looking for solutions and I, at least when I began thinking along these lines, was seriously concerned with exactly what we can actually say with confidence concerning the true nature of reality. I think you expressed the exact nature of my difficulty communicating my thoughts to anyone with the comment you made to Rade above, “unfortunately, it appears that most people never seriously think about this issue, and rather just tacitly assume "perception comes first", and by doing that they just cling onto naive realism without realizing it”. You are a very special person as you were seriously thinking about that problem before we ever began talking to one another. I have never met another who even takes the question as meaningful.

 

With regard to Bose-Einstein/Fermi statistics, I only brought that up because the consequences of antisymmetric results under exchange symmetry are only discussed in explanations of Bosons and Fermions. I just presumed that not mentioning the subject would probably have brought forth irrelevant complaints from a lot of people. As a mathematical operation, it is generally seen by the physics community as having utterly no other purpose. All I am really concerned with here is the fact that, if any conceivable function is antisymmetric under exchange of any specific pair of its arguments (i.e., changes sign), then that function absolutely must vanish if those arguments have the same value: f(a,a) = -f(a,a) requires f(a,a) = 0, as zero is the only number equal to its negative. You don't have to know the first thing about Fermi statistics to know that!
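The algebra here can be checked concretely. Below is a minimal Python sketch; the function `g` is an arbitrary illustration of my own choosing, not anything from the discussion above:

```python
# Build an antisymmetric function from an arbitrary one:
# f(a, b) = g(a, b) - g(b, a) changes sign under exchange of its arguments.
def antisymmetrize(g):
    return lambda a, b: g(a, b) - g(b, a)

# g is a hypothetical example function, chosen only for illustration.
g = lambda a, b: a * b**2 + 3 * a

f = antisymmetrize(g)

# Exchanging the arguments flips the sign ...
assert f(2.0, 5.0) == -f(5.0, 2.0)
# ... so f(a, a) = -f(a, a), which forces f(a, a) = 0.
assert f(4.0, 4.0) == 0.0
```

No knowledge of Fermi statistics is needed: the vanishing at equal arguments is pure algebra.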

I just want to spend some time to really understand what you are saying, and it's a time consuming process when you are referring to many concepts that I just don't happen to be too fluent with. It takes time to think things through.
Take all the time you want.
If you want, you can start the discussion of solving the differential equation, but I think I will be replying to that older post first as soon as I have enough time to really look at it properly.
I will let everything stand for a while. The issue of solving differential equations is comprehending exactly what the equation says and expressing that comprehension in the language of mathematics. Not an easy task, but at least we are working with a first-order linear differential equation: i.e., there are no second derivatives, the equation is a sum of terms with no products or squares of those differentials, and the function upon which the differential operators (or algebraic operators) operate appears once in every term. That makes life a lot simpler than it could otherwise be.
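The payoff of "first order linear" is superposition, which can be illustrated numerically. In the sketch below, the toy operator y' − 2y merely stands in for the fundamental equation; the point is only that a sum of solutions is again a solution:

```python
import math

# A toy first-order linear operator, L[y] = y' - 2y, evaluated with a
# central-difference derivative. Any solution satisfies L[y] = 0.
def L(y, t, h=1e-6):
    return (y(t + h) - y(t - h)) / (2 * h) - 2 * y(t)

y1 = lambda t: math.exp(2 * t)      # one solution of y' = 2y
y2 = lambda t: 3 * math.exp(2 * t)  # another solution
ysum = lambda t: y1(t) + y2(t)      # their sum

# Linearity: each function, including the sum, satisfies the
# equation to within numerical error.
for y in (y1, y2, ysum):
    assert abs(L(y, 0.5)) < 1e-4
```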

But I could be reading him wrong, in which case I don't think there's reason to argue about his views in this thread, too much noise. :(
Yes, I have noticed that “noise” is a major problem on most all forums but I figure it's worth the bother because it might inspire someone else to think a little. And, speaking of someone thinking a bit, Bombadil appears to be asking some excellent questions.

 

Hi Bombadil, it's nice to hear from you again.

Then is this something that we have to do to make the function antisymmetric?
Exactly; any function of many variables may be made “antisymmetric” with respect to exchange of any subset of those variables by exactly the procedure I have described. The important issue is that the function, after being made “antisymmetric”, is still a solution to the differential equation. We have that case here because the differential equation is a first-order linear differential equation and it can be proved that any sum of solutions is also a solution.
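The procedure referred to can be sketched in a few lines of Python: sum the function over all permutations of its arguments, weighting each term by the sign of the permutation. The function `g` below is a hypothetical illustration; the construction, not `g`, is the point.

```python
from itertools import permutations

def parity(perm):
    """Sign (+1 or -1) of a permutation given as a tuple of indices."""
    sign, seen = 1, list(perm)
    for i in range(len(seen)):
        while seen[i] != i:
            j = seen[i]
            seen[i], seen[j] = seen[j], seen[i]
            sign = -sign
    return sign

def antisymmetrize(g, n):
    """Signed sum of g over all permutations of its n arguments."""
    def f(*args):
        return sum(parity(p) * g(*(args[i] for i in p))
                   for p in permutations(range(n)))
    return f

# An arbitrary illustrative function of three variables.
g = lambda x, y, z: x * y**2 * z**3
f = antisymmetrize(g, 3)

# Exchanging any pair of arguments flips the sign ...
assert f(1.0, 2.0, 3.0) == -f(2.0, 1.0, 3.0)
# ... and a repeated argument forces the result to vanish.
assert f(1.0, 2.0, 1.0) == 0.0
```

Because the equation is linear, this signed sum of permuted solutions is itself a solution whenever each permuted term is one.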
What I’m trying to ask is this, suppose we made an explanation based on a set of elements that we know were not all valid.
You can be fairly confident that this is exactly the case unless you can prove your thesis is correct (which is well known to be impossible).
Now, while there is no way to tell if an element is valid, it seems that it is possible to tell if an element can’t be valid: for instance, if an element is not antisymmetric it can’t be valid. Although there seems to be no way to tell which element is not valid, it seems that all elements must be antisymmetric.
You are confusing the meaning of “invalid” as I am using it. If you were to read all of my posts you would discover that I have used a number of different words in my attempt to identify the duality I am referring to here. When I first started talking about the issue I referred to the categories as “knowable” (true things that can be known) and “unknowable” (things assumed to be true which cannot be proved to be required in the absence of the proffered explanation). I have also used the terms “real” and “imagined” (and I may have made some other attempts to communicate the idea). I have great difficulty communicating this issue as the existence of such a “duality” is not even considered by the common scientific community; instead, they consider everything required by their theories to be valid unless a flaw can be found in that theory. My position is that their attitude is fundamentally a “head in the sand” assertion without any rational support.

 

The common “scientific” position is that what I am talking about is philosophical s***t having nothing at all to do with “science” (you can interpret that as epistemology). In general their response is to tag me as a “Solipsist”: i.e., utterly refusing to see that a flaw-free theory can be based upon a mix of real and imagined ontological elements. What I pointed out to Anssi was that our ability to create “invalid” (unknowable/imagined) ontological elements is the very source of our ability to create a mechanism capable of explaining what we know of reality. Please read the following post very carefully:

 

11. The use of “invalid” ontological elements for the purpose of solving the problem posed by trying to explain a set of valid ontological elements.

 

In the “paradigm” I am presenting, valid and invalid ontological elements are expressly different things and, yes, in that paradigm, anything the expectation of which is to be given by a [imath]\vec{\Psi}[/imath] symmetric under exchange of the reference labels identifying those elements is an invalid ontological element. But that does not mean the explanation does not require that element! Note that, on the other hand, there may very well exist "invalid" ontological elements which are antisymmetric under exchange of the reference labels identifying those elements. All these elements (valid or invalid) are required by the explanation being expressed by [imath]\vec{\Psi}[/imath].

What I’m asking is, in the absence of an explanation, is there any way to leave out the invalid elements? I see no way that we can: in the absence of an explanation there is no difference between the two sets, so not only is there no way to tell valid from invalid, but even if there were, there would be no way to leave invalid elements out of the explanation. This seems to suggest to me that all elements in an explanation must appear valid, although it seems we had already agreed on this earlier.
“In the absence of an explanation”, you cannot even discuss these ontological elements. They are no more than “undefined” reference labels. Essentially, you are saying that, since there is “no way to leave invalid elements out of the explanation”, all possible explanations must always include those “invalid elements”: i.e., you have already placed yourself in some world view. That is exactly what Anssi was referring to here:
Oh it's automatical, that's nice. And all this time I thought it is a rather complex process to interpret sensory data.

 

Seriously though, did you really think that I claimed that all perception requires conscious mental effort?

 

When you "perceive an entity", is that not an entity that you have defined in your worldview? When you perceive any feature at all, is that not a feature that has a definition in your worldview? Do you realize how many definitions/assumptions are required before ANY "spatial/temporal pattern" inside a cortex can be seen as carrying any meaning at all? Sure it does not require conscious mental effort from your part, so what? And btw, it makes absolutely no difference whether some of those definitions have come to exist due to biological evolution already (I suspect very few have for humans; that is why it takes so long for us to do anything sensible at all after we are born), and some due to an organism building a worldview.

That is why I started my discussion with the presumption that the initial problem to be solved started with “valid” ontological elements only: i.e., to get logically on the other side of that issue and start without any presumptions (see my reference #6).
What I’m wondering is just what are we saying when we say that we can label the elements in any way without changing the solution?
What I am saying is actually quite simple. First, behind any explanation are some “things”, “ontological elements”, “real noumena”; whatever you choose to call the source of this thing we refer to as “reality”. Man has been working at explaining reality for thousands of years on both a rational level and a biological level. What do our explanations provide us with, if not “expectations”? When something we experience is consistent with our expectations, we say it “makes sense”.

 

Now look at the labels. It is only after you have your explanation that you actually give meaning to the labels you want to use to refer to these “things”, “ontological elements”, “real noumena” which you believe to be the information upon which your explanation rests. Likewise, your expectations are to be described in terms of exactly these same labels. And don't forget, the very meanings of these labels are embodied in that explanation: i.e., the explanation includes defining the meaning of these labels. It should be quite clear to you that these actual labels can be simple numerical references, and which labels are actually used to refer to these “things”, “ontological elements”, “real noumena” cannot have the slightest impact upon the solution to your problem (your explanation). As I said to Buffy a long time ago, if your actual label makes a difference, you had better come up with a way of establishing the “correct labels”.
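The point about labels can be made concrete: if an "expectation" is computed only from the collection of elements, any bijective relabeling leaves it unchanged. A toy sketch (the statistic and the element values are hypothetical):

```python
# A toy 'expectation' that depends only on the collection of element
# values, never on the reference labels attached to them.
def expectation(elements):
    values = sorted(elements.values())  # sort so label order cannot sneak in
    return sum(v * v for v in values) / len(values)

# One choice of numerical reference labels for three elements ...
a = {1: 0.3, 2: 1.7, 3: 0.9}
# ... and an arbitrary relabeling of exactly the same elements.
b = {17: 1.7, 101: 0.3, 5: 0.9}

# The labels themselves have no impact on the result.
assert expectation(a) == expectation(b)
```

If the result did depend on which labels were chosen, one would indeed need a way of establishing the "correct labels".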

 

I hope this clears up some of your problems.

 

Have fun -- Dick


... zero is the only number equal to its negative....
Just a side comment. There is no "negative zero" distinct from "zero" because − 0 = 0 = + 0, both -0 and +0 represent the exact same number in mathematics. Thus it would be more correct for you to say ...positive zero (+0) is the only number equal to its negative (-0), they both = 0...

Just a side comment. There is no "negative zero" distinct from "zero" because − 0 = 0 = + 0, both -0 and +0 represent the exact same number in mathematics. Thus it would be more correct for you to say ...positive zero (+0) is the only number equal to its negative (-0), they both = 0...
That is exactly the problem with common language. You are making the assumption that the meanings of English expressions are as exact as mathematics: i.e., that there exists only one "correct" way to say things in English.

 

There is generally more than one way to express a relationship even in mathematics. In fact, from a logical perspective "antisymmetric under exchange of arguments" contains all the information implied and nothing more need be said. (Goes to my statement that all proofs are no more than comprehending the information embedded in one's axioms; something the human mind cannot accomplish without the help of mathematical/logical constructs.)

 

All such a comment really does is add to the confusion engendered by the use of English as a means of communicating logical constructs. Just great for setting up "mock battles" but rather worthless otherwise. ;) B)

 

Have fun -- Dick


In the “paradigm” I am presenting, valid and invalid ontological elements are expressly different things and, yes, in that paradigm, anything the expectation of which is to be given by a [imath]\vec{\Psi}[/imath] symmetric under exchange of the reference labels identifying those elements is an invalid ontological element. But that does not mean the explanation does not require that element! Note that, on the other hand, there may very well exist "invalid" ontological elements which are antisymmetric under exchange of the reference labels identifying those elements. All these elements (valid or invalid) are required by the explanation being expressed by [imath]\vec{\Psi}[/imath].

 

Won’t any ontological element that we can tell is invalid invalidate the explanation, even if the item only exists in the explanation? (The way I understand it, no item is in the explanation that we have not added to it; that is, the explanation can’t tell us anything about new elements that have not yet been added to it.) Because if the item is found then we can't say if it is real or not, while if it is not found we have to conclude that it is in fact an invalid element, which would invalidate the explanation, unless it makes no difference to us if there are invalid elements in the explanation, or the items that are required by the explanation aren’t considered ontological objects.

 

Now look at the labels. It is only after you have your explanation that you actually give meaning to the labels you want to use to refer to these “things”, “ontological elements”, “real noumena” which you believe to be the information upon which your explanation rests. Likewise, your expectations are to be described in terms of exactly these same labels. And don't forget, the very meanings of these labels are embodied in that explanation: i.e., the explanation includes defining the meaning of these labels. It should be quite clear to you that these actual labels can be simple numerical references, and which labels are actually used to refer to these “things”, “ontological elements”, “real noumena” cannot have the slightest impact upon the solution to your problem (your explanation). As I said to Buffy a long time ago, if your actual label makes a difference, you had better come up with a way of establishing the “correct labels”.

 

So, before we have an explanation, all the ontological elements amount to is a set of points in the coordinate system, with no properties at all; when we solve the fundamental equation, the coordinates of the elements turn into properties of the elements. So in solving the fundamental equation, do we have to define the meaning of the coordinate system, or will it be defined by solving for an explanation?

 

Does this mean that an explanation is more fundamental than the elements in it and that when we have a solution to the fundamental equation the act of adding elements is comparable to initial value problems in solving differential equations?

 

So the symmetries of the fundamental equation only exist while we are solving for the explanation, and after we have an explanation we can’t just add a new element without solving for the explanation again or knowing how the coordinates have been defined?

 

Does this mean that there are an infinite number of equations built into the fundamental equation that we can generate from it by defining a coordinate system and then simplifying the resulting equation, all solutions to which must satisfy the fundamental equation?


Won’t any ontological element that we can tell is invalid invalidate the explanation, even if the item only exists in the explanation?
In a word, NO! You are not comprehending my definition of “invalid”.
Because if the item is found then we can't say if it is real or not, while if it is not found we have to conclude that it is in fact an invalid element, which would invalidate the explanation, unless it makes no difference to us if there are invalid elements in the explanation, or the items that are required by the explanation aren’t considered ontological objects.
You are clearly trying to use the common definition of “invalid”. That is one of the problems of using English as a means of communicating logical arguments; the terms are simply not defined with the kind of care needed for extended abstract logic. In my discussion, “invalid” means no more than “it is not actually an element of reality”; it is merely required by the specific explanation of reality being used to provide you with your expectation. That is also why I introduced the term “flaw-free”. What you are referring to as an “invalid explanation” is what I would classify as a “flawed” explanation. All defined ontological elements are part and parcel of an explanation. In the absence of an explanation no ontological element has any definition or any qualities of any kind. In order to understand anything, we must have an explanation.
So, before we have an explanation, all the ontological elements amount to is a set of points in the coordinate system, with no properties at all;
Before we have an explanation, the ontological elements are not even “a set of points in the coordinate system”. What I am doing is constructing an explanation which invalidates (and here I am using the word “invalidate” for its common meaning) no possible explanation. The final explanation so constructed is what I later refer to as the ”what is”, is “what is” explanation. No element in that representation has any meaning of any kind. As I have said earlier, the ”what is”, is “what is” explanation is the only explanation which defines nothing.

 

What is important here is the fact that any explanation can be seen as defined by a specific ”what is”, is “what is” explanation. What I am trying to point out to you is that, in order for me to understand your explanation (whatever it happens to be), you have to explain it to me. The problem I must solve in order to understand you is exactly the same problem which I face in trying to understand anything, and that process itself can be put in the form of a ”what is”, is “what is” explanation.

when we solve the fundamental equation the coordinates of the elements turn into properties of the elements. So in solving the fundamental equation, do we have to define the meaning of the coordinate system, or will it be defined by solving for an explanation?
I am going to post my opening procedure in my analysis of the solutions to that equation. I suspect that post will clear things up much more than I can here. As far as the meanings of the coordinate system, in my paradigm, they are no more than a coordinate system for representing that ”what is”, is “what is” explanation.
Does this mean that an explanation is more fundamental than the elements in it and that when we have a solution to the fundamental equation the act of adding elements is comparable to initial value problems in solving differential equations?
I do not understand what you have in mind here. I certainly would not think of adding invalid elements as equivalent to an initial value problem in solving differential equations.

 

Rather than make an attempt to answer the rest of your post, I think I will just go ahead and post my opening analysis of that fundamental equation and its implications.

 

Sorry about that -- Dick


Well, under advice from Anssi, I have decided to post the first step of my analysis of the solutions to what I call the fundamental differential equation. I want to make it clear to anyone who reads this that the issue is not really a solution of that equation but rather an examination of possible solutions. By definition, [imath]\vec{\Psi}[/imath] is a mathematical representation of our expectations. Those expectations are the result of a flaw-free explanation of reality. The explanation itself is an epistemological construct which provides a consistent and flaw-free explanation of the past. As such, I have no real interest in the actual solution or how it was achieved; my only interest is in the fact that such a solution exists: i.e., you do in fact have expectations.

 

There are two facts extant here: first, a function (a method of obtaining one's expectations from a given set of known elements: i.e., [imath]\vec{\Psi}[/imath]) exists, and that function must be a solution to my fundamental equation. Furthermore, if I understand that flaw-free explanation, the method of obtaining the appropriate expectations is known to me. It is very important here to remember that [imath]\vec{\Psi}[/imath] is a mathematical representation of our expectations and is not necessarily a correct representation of the future. What I am trying to point out is that our expectations are never necessarily correct (see Kriminal99's post on induction); what is being enforced is that the known past is consistent with those expectations, not the future. The future is a totally unknown issue. Our only defense of our expectations is that the volume of information which goes to make up the past is far, far in excess of the next “present” (from our perspective): i.e., it would be rather ridiculous to conclude that anything in the next “present” would be sufficiently significant to be a major alteration to the net past (that being “all the information we are trying to make sense of”).

 

With that in mind, the equation of interest is

[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = K\frac{\partial}{\partial t}\vec{\Psi}.[/math]

 

This expression is quite analogous to a differential equation describing the evolution of a many-body system which, as anyone competent in physics knows, is not an easy thing to solve. What we would like to do is reduce the number of arguments to something which can be handled: i.e., we want to know the nature of the equations which must be obeyed by a subset of those variables. In the interest of accomplishing that result, my first step is to divide the problem into two sets of variables: set number one will be the set referring to our “valid” ontological elements (together with the associated tau indices) and set number two will refer to all the remaining arguments. I will refer to these sets as #1 and #2 respectively. (You should comprehend that #1 must be finite and that #2 can possibly be infinite.) Now, when we started this whole thing, I defined the probability of specific expectations to be given by the squared magnitude of [imath]\vec{\Psi}[/imath] under the argument that such a notation (that abstract vector) can represent absolutely any method of getting from one set of numbers to another: i.e., there exists no operation capable of yielding one's expectations which cannot be represented by such a structure.
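The claim that a squared magnitude can represent any assignment of probabilities is easy to illustrate: given any complex amplitudes, normalizing makes the squared magnitudes a probability distribution (and, conversely, any distribution P_i is reproduced by amplitudes sqrt(P_i)). A sketch with arbitrary amplitudes of my own choosing:

```python
# Probabilities as squared magnitudes of an abstract complex vector.
def probabilities(psi):
    norm = sum(abs(c) ** 2 for c in psi) ** 0.5
    return [abs(c / norm) ** 2 for c in psi]

psi = [1 + 2j, 0.5 - 1j, 3 + 0j]   # arbitrary illustrative amplitudes
P = probabilities(psi)

assert all(p >= 0 for p in P)      # each entry is a valid probability
assert abs(sum(P) - 1.0) < 1e-12   # and the total is unity
```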

 

Having divided the arguments into two sets, a competent understanding of probability should lead to acceptance of the following relationship: the probability of #1 and #2 (i.e., the expectation that these two specific sets occur together) is given by the product of two specific probabilities: [imath]P_1[/imath](#1), the probability of set number one, times [imath]P_2[/imath](#2 given #1), the probability of set number two given set number one exists. The existence of set #1 in the second probability is necessary as the probability of set #2 can very much depend upon that existence. At this point, exactly the same argument used to defend [imath]\vec{\Psi}[/imath] as embodying a method of obtaining expectations (the probability distribution) for the entire collection of arguments can be used to assert that there must exist abstract vector functions [imath]\vec{\Psi}_1[/imath] and [imath]\vec{\Psi}_2[/imath] which will yield, respectively [imath]P_1[/imath] and [imath]P_2[/imath].
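That factorization is just the product rule of probability, which can be checked against any joint distribution. A toy example (the outcome names and numbers are made up for illustration):

```python
# P(s1 and s2) = P1(s1) * P2(s2 given s1), checked on a toy joint
# distribution over two 'sets' of outcomes.
joint = {('a', 'x'): 0.10, ('a', 'y'): 0.30,
         ('b', 'x'): 0.45, ('b', 'y'): 0.15}

def P1(s1):
    """Marginal probability of the set-#1 outcome."""
    return sum(p for (u, _), p in joint.items() if u == s1)

def P2_given(s2, s1):
    """Conditional probability of the set-#2 outcome given set #1."""
    return joint[(s1, s2)] / P1(s1)

# The product of marginal and conditional recovers every joint entry.
for (s1, s2), p in joint.items():
    assert abs(P1(s1) * P2_given(s2, s1) - p) < 1e-12
```

Note that the conditional P2_given really does depend on which set-#1 outcome occurred, exactly as the text requires.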

 

It should be clear that, under these definitions (representing the argument [imath](x,\tau)_i[/imath] as [imath]\vec{x}_i[/imath]),

[math]\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots, t)=\vec{\Psi}_1(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n, t)\vec{\Psi}_2(\vec{x}_1,\vec{x}_2,\cdots, t).[/math]

 

Substituting this result into our fundamental equation, what we obtain can be written

[math]\left\{\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2 + 2\left\{ \sum_{i=\#1 j=\#2}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\ \right\}\vec{\Psi}_1\vec{\Psi}_2+[/math]

[math] \left\{\sum_{\#2} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}_1\vec{\Psi}_2 = K\frac{\partial}{\partial t}(\vec{\Psi}_1\vec{\Psi}_2).[/math]

 

At this point, it is important to realize that set #2 consists of invalid ontological elements created for the purpose of constraining set #1 to what they actually were. I often used to ask the question, “how does one tell the difference between an electron and a Volkswagen?” No one except Anssi seemed to ever grasp the essence of that question. The answer is of course: “context”. In my original proof, arbitrary invalid ontological elements were added until one achieved the state where knowing the specific indices of any n-1 elements associated with a given t index would guarantee that the index of the missing element could be determined. Under this picture, set #2 is certainly context: since they are invalid ontological elements, they can be anything so long as they are consistent with the explanation: i.e., the only requirement here is that they need to obey the fundamental equation. Thus it is that I will take the position that, if we know a flaw-free explanation, we know the method of obtaining our expectations for set #2: i.e., we know [imath]\vec{\Psi}_2[/imath]. If we left-multiply the above equation by [imath]\vec{\Psi}_2^\dagger[/imath] (forming the inner or dot product with the algebraically modified [imath]\vec{\Psi}_2[/imath]) and integrate over the entire set of arguments referred to as set #2, we will obtain the following result:

[math]\left\{\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\Psi}_1 + \left\{2 \sum_{i=\#1 j=\#2}\int \vec{\Psi}_2^\dagger \cdot \beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 \right. +[/math]

[math] \left.\int \vec{\Psi}_2^\dagger \cdot \left[\sum_{\#2} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right]\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1+K \left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1[/math]

 

Notice that [imath]\int \vec{\Psi}_2^\dagger \cdot\vec{\Psi}_2dV_2 [/imath] equals unity by definition of normalization. Furthermore, since the tau axis was introduced for the sole purpose of assuring that two identical indices associated with valid ontological elements existing in the same [imath](x,\tau)_t[/imath] would not be represented by the same point, we came to the conclusion that [imath]\vec{\Psi}_1[/imath] must be antisymmetric with regard to exchange of arguments. If that is indeed the case (as it must be), then the second term in the above equation will vanish identically, as [imath]\vec{x}_i[/imath] can never equal [imath]\vec{x}_j[/imath] for any i and j both chosen from set #1.
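A discrete analogue may help here. Represent a two-argument slice of the function on a grid of points: the delta-function interaction only samples coincident arguments, i.e. the diagonal, and antisymmetry forces the diagonal to vanish. (The grid function `g` below is an arbitrary stand-in, not anything from the derivation.)

```python
# Discrete analogue: psi[i][j] plays the role of a function evaluated
# at the grid point (x_i, x_j).
n = 4
g = [[(i + 1) * (j + 1) ** 2 for j in range(n)] for i in range(n)]  # arbitrary
psi = [[g[i][j] - g[j][i] for j in range(n)] for i in range(n)]     # antisymmetrized

# Antisymmetry forces every 'coincident' value to zero ...
assert all(psi[i][i] == 0 for i in range(n))

# ... so a delta(x_i - x_j) term, which only samples the diagonal,
# contributes nothing at all.
delta_term = sum(psi[i][i] for i in range(n))
assert delta_term == 0
```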

 

If the actual function [imath]\vec{\Psi}_2[/imath] were known (i.e., a way of obtaining our expectations for set #2 is known), the above integrals could be explicitly done and we would obtain an equation of the form:

[math] \left\{\sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1. [/math]

 

The function f must be a linear weighted sum of alpha and beta operators plus one single term which does not contain such an operator. That single term arises from the final integral of the time derivative of [imath]\vec{\Psi}_2[/imath] on the right side of the original representation of the result of integration:

[math]\int \vec{\Psi}_2^\dagger\cdot\frac{\partial}{\partial t}\vec{\Psi}_2dV_2.[/math]

 

The above is an example of the kind of equation the indices on our valid ontological elements must obey; however, it is still in the form of a many-body equation and is of little use to us if we cannot solve it. In the interest of learning the kinds of constraints the equation implies, let us take the above procedure one step further and search for the form of the equation a single index must obey (remember the fact that we added invalid ontological elements until the index on any given element could be recovered if we had all n-1 other indices). We may immediately write [imath]P_1[/imath](set #1) = [imath]P_0(\vec{x}_1,t)P_r[/imath](remainder of set #1 given [imath]\vec{x}_1[/imath],t). Note that [imath]\vec{x}_1[/imath] can refer to any index of interest, as order is of no significance. Once again, we can deduce that there exist algorithms capable of producing [imath]P_0[/imath] and [imath]P_r[/imath]; I will call these functions [imath]\vec{\Psi}_0[/imath] and [imath]\vec{\Psi}_r[/imath] respectively. It follows that [imath]\vec{\Psi}_1[/imath] may be written as follows:

[math]\vec{\Psi}_1(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n, t)= \vec{\Psi}_0(\vec{x}_1,t)\vec{\Psi}_r(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_n, t).[/math]

 

If I make this substitution in the earlier equation for [imath]\vec{\Psi}_1[/imath], I will obtain the following relationship:

[math]\left\{\sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right\}\vec{\Psi}_0\vec{\Psi}_r = K\frac{\partial}{\partial t}(\vec{\Psi}_0\vec{\Psi}_r). [/math]

 

Once again I point out that [imath]\vec{\Psi}_r[/imath] constitutes the context for [imath]\vec{\Psi}_0(\vec{x}_1,t)[/imath]. Once again, I will take the position that, if we know the flaw-free explanation represented by [imath]\vec{\Psi}[/imath], we know our expectations for the set of indices two through n, set “r”: i.e., we know [imath]\vec{\Psi}_r[/imath] (the context). As before, if we now left-multiply the above equation by [imath]\vec{\Psi}_r^\dagger[/imath] (forming the inner or dot product with the algebraically modified [imath]\vec{\Psi}_r[/imath]) and integrate over the entire set of arguments referred to as set “r” (the remainder after [imath]\vec{x}_1[/imath] has been specified), we will obtain the following result:

[math]\vec{\alpha}_1\cdot \vec{\nabla}_1\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0\ + K\left\{\int \vec{\Psi}_r^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_r dV_r \right\}\vec{\Psi}_0. [/math]

 

Notice once again that [imath]\int \vec{\Psi}_r^\dagger \cdot\vec{\Psi}_r dV_r [/imath] equals unity by definition of normalization. Notice also that the term [imath]\vec{\alpha}_1\cdot \vec{\nabla}_1[/imath] appears both standing alone and inside the integral over the indices represented by the set “r”; this occurs because [imath]\vec{\Psi}_r[/imath] is a function of [imath]\vec{x}_1[/imath] and the chain rule applies to the differential operation on the product function [imath]\vec{\Psi}_0\vec{\Psi}_r[/imath].
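The chain-rule remark is just the ordinary product rule for derivatives, which a quick numerical check makes explicit. The functions below are arbitrary stand-ins for the two factors, chosen only for illustration:

```python
import math

# Product rule: d/dx (f g) = f' g + f g'. This is why the derivative
# operator acts both on the first factor alone and on the second
# factor inside the integral.
f = lambda x: math.sin(x)        # stand-in for psi_0(x_1)
g = lambda x: math.exp(-x ** 2)  # stand-in for psi_r (also depends on x_1)

def d(h, x, eps=1e-6):
    """Central-difference derivative of h at x."""
    return (h(x + eps) - h(x - eps)) / (2 * eps)

x = 0.7
product = lambda x: f(x) * g(x)

# The two-term split agrees with differentiating the product directly.
assert abs(d(product, x) - (d(f, x) * g(x) + f(x) * d(g, x))) < 1e-6
```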

 

Now, this resultant may be a linear differential equation in one variable, but it is not exactly in a form one would call “transparent”. In the interest of seeing the actual form of possible solutions, allow me to discuss an approximate solution discovered by setting three very specific constraints to be approximately valid. The first of these three is that the data point of interest, [imath]\vec{x}_1[/imath], is insignificant to the rest of the universe: i.e., [imath]P_r[/imath] is, for practical purposes, not much affected by any change in the actual form of [imath]\vec{\Psi}_0[/imath]: i.e., feedback from the rest of the universe due to changes in [imath]\vec{\Psi}_0[/imath] can be neglected. The second constraint will be that the probability distribution describing the rest of the universe is stationary in time: that would be that [imath]P_r[/imath] is, for practical purposes, not a function of t. If that is the case, the only form of the time dependence of [imath]\vec{\Psi}_r[/imath] which satisfies temporal shift symmetry is [imath]e^{iS_rt}[/imath].
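That last step can be seen directly: a pure phase factor e^{iSt} has unit magnitude, so it carries time dependence in the amplitude while leaving the probability stationary. A sketch with arbitrary numbers of my own choosing:

```python
import cmath

S = 2.3               # arbitrary illustrative constant
psi0 = 0.6 - 0.8j     # arbitrary fixed amplitude

def P(t):
    """Probability when the only time dependence is the phase e^{iSt}."""
    return abs(cmath.exp(1j * S * t) * psi0) ** 2

# |e^{iSt}| = 1, so the probability is independent of t,
# even though the amplitude itself rotates in the complex plane.
assert abs(P(0.0) - P(1.7)) < 1e-12
```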

 

At this point, we must carefully analyze the development of the function f created when we integrated over set #2 in our earlier example. As mentioned at the time, f was a linear weighted sum of alpha and beta operators except for one strange term introduced by the time derivative of [imath]\vec{\Psi}_2[/imath]. Please note that, if [imath]P_r[/imath] is insensitive to [imath]\vec{\Psi}_0[/imath] and stationary in time then so is [imath]P_2[/imath]. This follows directly from the fact that [imath]P_2[/imath] is the probability distribution of the “invalid” ontological elements required to constrain the “valid” ontological elements to what is to be explained. There is certainly no required time dependence if the set to be explained has no time dependence, nor can there be any dependence upon [imath]\vec{\Psi}_0[/imath] if the set “r” can be seen as uninfluenced by [imath]\vec{\Psi}_0[/imath]. This leads to the conclusion that

[math]K\left\{\int \vec{\Psi}_2^\dagger \frac{\partial}{\partial t}\vec{\Psi}_2dV_2\right\}\vec{\Psi}_1=iKS_2\vec{\Psi}_1[/math]

 

and that the function “f” may be written [imath]f=f_0 -iKS_2[/imath] where [imath]f_0[/imath] is entirely made up of a linear weighted sum of alpha and beta operators. So long as the above constraints are approximately valid, our differential equation for [imath]\vec{\Psi}_0(\vec{x}_1,t)[/imath] may be written in the following form:

[math]\vec{\alpha}_1\cdot \vec{\nabla}_1\vec{\Psi}_0 + \left\{\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r\right\}\vec{\Psi}_0 = K\frac{\partial}{\partial t}\vec{\Psi}_0 + iK\left(S_2+S_r\right)\vec{\Psi}_0. [/math]
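The collapse of the time-derivative integral into the constant [imath]iKS_2[/imath] is easy to check symbolically. Below is a minimal sketch in Python (with sympy), assuming a hypothetical normalized Gaussian spatial profile for [imath]\vec{\Psi}_2[/imath]; the only feature that actually matters is the [imath]e^{iS_2t}[/imath] time dependence.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
K, S2 = sp.symbols('K S_2', real=True)

# hypothetical normalized spatial profile (any normalized profile would do)
profile = (1 / sp.pi) ** sp.Rational(1, 4) * sp.exp(-x**2 / 2)
# stationary time dependence required by temporal shift symmetry
psi2 = profile * sp.exp(sp.I * S2 * t)

# K times the integral of Psi_2^dagger (d/dt Psi_2) over the spatial variable
result = K * sp.integrate(sp.conjugate(psi2) * sp.diff(psi2, t), (x, -sp.oo, sp.oo))
print(sp.simplify(result))  # I*K*S_2
```

Normalization is what makes the spatial profile drop out, leaving only the constant [imath]iKS_2[/imath].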

 

For the simple convenience of solving this differential equation, this result clearly suggests that one redefine [imath]\vec{\Psi}_0[/imath] via the definition [imath]\vec{\Psi}_0 = e^{-iK(S_2+S_r)t}\vec{\Phi}[/imath]. If one further defines the integral within the curly braces to be [imath]g(\vec{x}_1)[/imath], [imath]\vec{x}_1[/imath] being the only variable not integrated over, the equation we need to solve can be written in an extremely concise form:

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}\vec{\Phi} = K\frac{\partial}{\partial t}\vec{\Phi}, [/math]

 

which implies the following operational identity:

[math]\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x}) = K\frac{\partial}{\partial t}. [/math]

 

That is, as long as these operators are operating on the appropriate [imath]\vec{\Phi}[/imath], they must yield identical results. If we now multiply the original equation by the respective sides of this identity, recognizing that products of the alpha and beta operators yield either one half (for all the direct terms) or zero (for all the cross terms), and defining the resultant of [imath]g(\vec{x})g(\vec{x})[/imath] to be [imath]\frac{1}{2}G(\vec{x})[/imath] (note that all alpha and beta operators have vanished), we can write the differential equation to be solved as

[math] \nabla^2\vec{\Phi}(\vec{x},t) + G(\vec{x})\vec{\Phi}(\vec{x},t)= 2K^2\frac{\partial^2}{\partial t^2}\vec{\Phi}(\vec{x},t).[/math]
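The claim that products of the alpha and beta operators yield one half for direct terms and zero for cross terms can be illustrated with any matrices obeying the same algebra. The sketch below (Python with numpy) uses scaled Pauli matrices purely as stand-ins; the specific matrices are an assumption for illustration, not part of the deduction.

```python
import numpy as np

# scaled Pauli matrices: each squares to (1/2)*I and distinct ones anticommute,
# mimicking the algebra of the alpha and beta operators
ax = np.array([[0, 1], [1, 0]], dtype=complex) / np.sqrt(2)
ay = np.array([[0, -1j], [1j, 0]], dtype=complex) / np.sqrt(2)

a, b = 3.0, -1.7  # arbitrary real weights in the linear sum
M = a * ax + b * ay

# squaring the weighted sum: cross terms cancel, direct terms each give one half
print(np.allclose(M @ M, 0.5 * (a**2 + b**2) * np.eye(2)))  # True
```

This is exactly why [imath]g(\vec{x})g(\vec{x})[/imath] reduces to [imath]\frac{1}{2}G(\vec{x})[/imath] with no operators left over.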

 

At this point we must turn to analysis of the impact of our [imath]\tau[/imath] axis, a pure creation of our own imagination and not a characteristic of the actual data defining the collection of referenced elements we need to explain. Since we are interested in the implied probability distribution of x, we must (in the final analysis) integrate over the probability distribution of tau. Since tau is a complete fabrication of our imagination, the final [imath]P(x,\tau,t)[/imath] certainly cannot depend upon tau. It follows directly from this observation that the dependence of [imath]\vec{\Phi}[/imath] on tau must (at worst) be of the form [imath]e^{iq\tau}[/imath], and thus that the differential equation can be written:

[math] \left\{\frac{\partial^2}{\partial x^2} - q^2 + G(x)\right\}\vec{\Phi}(x,t)= 2K^2\frac{\partial^2}{\partial t^2}\vec{\Phi}(x,t).[/math]

 

Notice that, if the term [imath]q^2[/imath] is moved to the right side of the equal sign, we may factor that side and obtain,

[math] \left\{\frac{\partial^2}{\partial x^2} + G(x)\right\}\vec{\Phi}(x,t)=\left\{\sqrt{2}K\frac{\partial}{\partial t}- iq\right\}\left\{\sqrt{2}K\frac{\partial}{\partial t}+iq\right\}\vec{\Phi}(x,t).[/math]
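That this factoring is legitimate can be seen by expanding the product (q is a constant, so the two factors commute):

[math]\left\{\sqrt{2}K\frac{\partial}{\partial t}-iq\right\}\left\{\sqrt{2}K\frac{\partial}{\partial t}+iq\right\}=2K^2\frac{\partial^2}{\partial t^2}+q^2,[/math]

which is precisely the right side once [imath]q^2[/imath] has been moved across the equal sign.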

 

At this point, I will invoke a third approximation. I will concern myself only with cases where [imath]K\sqrt{2}\frac{\partial}{\partial t}\vec{\Phi} \approx -iq\vec{\Phi}[/imath] to a high degree of accuracy. In this case, the first factor on the right may be replaced by -2iq and, after division by 2q, we have

[math]\left\{\frac{1}{2q}\frac{\partial^2}{\partial x^2}+\frac{1}{2q}G(x)\right\}\vec{\Phi}(x,t)= -i\left\{\sqrt{2}K \frac{\partial}{\partial t} + iq \right\}\vec{\Phi}(x,t).[/math]

 

Once again, the form of the equation suggests we redefine [imath]\vec{\Phi}[/imath] via an exponential adjustment [imath]\vec{\Phi}(x,t)=\vec{\phi}(x,t)e^{\frac{-iqt}{K\sqrt{2}}}[/imath], thus simplifying the differential equation by removing the final iq term. To anyone familiar with modern physics, the equation should be beginning to look very familiar. In fact, if we multiply through by [imath]-\hbar c[/imath] (which clearly has utterly no impact on the solution as it multiplies every term) and make the following definitions directly related to constants already defined,

[math]m=\frac{q\hbar}{c}[/math] , [math]c=\frac{1}{K\sqrt{2}}[/math] and [math]V(x)= -\frac{\hbar c}{2q}G(x)[/math]

 

it turns out that the equation of interest is exactly one of the most fundamental equations of modern physics, without the introduction of a single free parameter (no parameter appears that was not defined in the derivation itself):

[math]\left\{-\left(\frac{\hbar^2}{2m}\right)\frac{\partial^2}{\partial x^2}+ V(x)\right\}\vec{\phi}(x,t)=i\hbar\frac{\partial}{\partial t}\vec{\phi}(x,t)[/math]

 

This is, in fact, exactly Schroedinger's equation in one dimension.
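As a sanity check on the final form, the sketch below (Python with sympy) verifies that the standard free-particle plane wave, a textbook solution with [imath]V(x)=0[/imath], satisfies the equation; the dispersion relation [imath]\omega=\hbar k^2/2m[/imath] is the usual textbook input, not something derived above.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
hbar, m, k = sp.symbols('hbar m k', positive=True)

# free-particle plane wave with the standard dispersion relation (V = 0)
omega = hbar * k**2 / (2 * m)
phi = sp.exp(sp.I * (k * x - omega * t))

lhs = -(hbar**2 / (2 * m)) * sp.diff(phi, x, 2)  # kinetic term; V(x) = 0 here
rhs = sp.I * hbar * sp.diff(phi, t)              # energy-operator side
print(sp.simplify(lhs - rhs))  # 0
```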

 

This is a truly astounding conclusion. The fact that the probability of seeing a particular number in a stream of totally undefined numbers can be deduced to obey Schroedinger's equation, no matter what the rule behind those numbers might be, is totally counterintuitive. It is extremely important that we check the meaning of the three constraints I placed on the problem in terms of the conclusion reached.

 

The first two are quite obvious. Recapping, they consisted of demanding that the data point under consideration had negligible impact on the rest of the universe and that the pattern representing the rest of the universe was approximately constant in time. These are both common approximations made when one goes to apply Schroedinger's equation: that is, we should not be surprised that these approximations made life convenient. What is important is that Schroedinger's equation is still applicable to physical situations where these constraints are considerably relaxed. In other words, the constraints are not required by Schroedinger's equation itself.

 

The serious question then is: what happens to my derivation when those constraints are relaxed? If one examines that derivation carefully, one will discover that the only result of these constraints was to remove the time dependent term from the linear weighted sum expressed by g(x). If this term is left in, G(x) will be complicated in three ways: first, the general representation must allow for time dependence; second, the representation must allow for terms proportional to [imath]\frac{\partial}{\partial x}[/imath] and, finally, the resultant V(x) will be a linear sum of the alpha and beta operators.

 

The time dependence creates no real problems: V(x) merely becomes V(x,t). The terms proportional to [imath]\frac{\partial}{\partial x}[/imath] correspond to velocity dependent terms in V and, finally, retention of the alpha and beta operators essentially forces our deductive result to be a set of equations, each with its own V(x,t). All of these results are entirely consistent with Schroedinger's equation; they simply require interactions not commonly seen at the introductory level. Inclusion of these complications would only have served to obscure the fact that what was deduced was, in fact, Schroedinger's equation.

 

That brings us down to the final constraint, [imath]K\sqrt{2}\frac{\partial}{\partial t}\vec{\Phi}\approx -iq\vec{\Phi}[/imath]. If we multiply this relationship through by [imath] i\hbar[/imath] and divide by [imath]K\sqrt{2}[/imath] the definitions given for m and c above imply the constraint can be written

[math]i\hbar\frac{\partial}{\partial t}\vec{\Phi}\approx q\hbar c \vec{\Phi}= \left( \frac{q\hbar}{c}\right) c^2\vec{\Phi} = mc^2\vec{\Phi}.[/math]

 

The term [imath]mc^2[/imath] should be familiar to everyone and the left hand side, [imath]i\hbar\frac{\partial}{\partial t}[/imath], should be recognized as the energy operator from the standard Schroedinger representation of quantum mechanics. Putting these two facts together, it is clear that the redefinition of [imath]\vec{\Phi}[/imath] to [imath]\vec{\phi}[/imath] in the above deduction was completely analogous to adjusting the zero energy point to non-relativistic energies. This step is certainly necessary as Schroedinger's equation is well known to be a non-relativistic approximation: i.e., Schroedinger's equation is known to be false if this approximation is not valid.

 

A very strange thing has happened: that the above approximation is necessary is not surprising; that it arose the way it did is rather astonishing as we have arrived at the expression [imath]E=mc^2[/imath] without even mentioning the concept of relativity. This certainly implies that at least some aspects of relativity seem to be embedded in the paradigm I am presenting. That will turn out to be exactly correct and will become overtly evident a few posts from here.

 

Meanwhile, the fact that the Schroedinger equation is an approximate solution to my equation leads me to put forth a few more definitions. Note to Buffy: there is no presumption of reality in these definitions; they are no more than definitions of abstract relationships embedded in the mathematical constraint of interest to us. That is, these definitions are entirely in terms of the mathematical representation and are thus defined for any collection of indices which constitute references to the elements the function [imath]\vec{\Psi}[/imath] was defined to explain.

 

First, I will define ”the Energy Operator” as [imath]i\hbar\frac{\partial}{\partial t}[/imath] (and thus, the conserved quantity required by the fact of shift symmetry in the t index becomes “energy”: i.e., energy is conserved by definition). A second definition totally consistent with what has already been presented is to define the expectation value of “energy” to be given by

[math]E=i\hbar\int\vec{\Psi}^\dagger\cdot\frac{\partial}{\partial t}\vec{\Psi}dV.[/math]
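As a quick illustration of this definition, the sketch below (Python with sympy) applies it to a stationary state built on a hypothetical normalized Gaussian profile; the integral simply returns the energy parameter of the phase factor.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
hbar, E0 = sp.symbols('hbar E_0', positive=True)

# stationary state: normalized Gaussian profile times the phase e^{-i E_0 t / hbar}
psi = (1 / sp.pi) ** sp.Rational(1, 4) * sp.exp(-x**2 / 2) * sp.exp(-sp.I * E0 * t / hbar)

# expectation value of "energy" exactly as defined in the text
E = sp.I * hbar * sp.integrate(sp.conjugate(psi) * sp.diff(psi, t), (x, -sp.oo, sp.oo))
print(sp.simplify(E))  # E_0
```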

 

I am putting this forward as a definition of the expectation value of energy for the sole reason that the concept is then applicable to the various functions I have proceeded through in deducing the Schroedinger equation above. What is important here is that the energy so defined is not conserved in the approximations used above (when the individual reference indices of ontological elements are examined) but rather that, when the entire collection of indices referring to these elements is represented by the appropriate function, total energy so defined will be conserved.

 

In addition, the comparison with Schroedinger's equation also suggests the definition of another mathematical operator which can, via exactly the same analogy, be called "the Momentum Operator" as [imath]-i\hbar\frac{\partial}{\partial x}[/imath] (and thus, the conserved quantity required by the fact of shift symmetry in the “x” index becomes “momentum”: i.e., the total momentum of the entire collection of references to our ontological elements will be conserved via the constraint [imath]\sum\frac{\partial}{\partial x_i}\vec{\Psi}=0[/imath]). Once again, a second definition totally consistent with what has already been presented is to define the expectation value of “momentum” to be given by

[math]P=-i\hbar\int\vec{\Psi}^\dagger\cdot\frac{\partial}{\partial x}\vec{\Psi}dV.[/math]
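The same kind of check works for this definition. The sketch below (Python with sympy) uses a hypothetical normalized Gaussian packet carrying mean wave number k; the defined integral returns [imath]\hbar k[/imath], as one would hope.

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar, k = sp.symbols('hbar k', positive=True)

# hypothetical normalized Gaussian packet with mean wave number k
psi = (1 / sp.pi) ** sp.Rational(1, 4) * sp.exp(-x**2 / 2) * sp.exp(sp.I * k * x)

# expectation value of "momentum" exactly as defined in the text
P = -sp.I * hbar * sp.integrate(sp.conjugate(psi) * sp.diff(psi, x), (x, -sp.oo, sp.oo))
print(sp.simplify(P))  # hbar*k
```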

 

Once again, this says nothing about the conservation of an individual index's “momentum”. The momentum of an individual index is a function of the actual [imath]\vec{\phi}[/imath] describing the expectation of the element referenced by that index. Nevertheless, it does imply that the total momentum of all the reference indices will be conserved.

 

Finally, I would like to introduce a third operator justified by exactly the same analysis provided above. This third operator is completely fictional as it arises from shift symmetry in the fictional axis tau. I will call this operator "the Mass Operator" and define it as [imath]-i\frac{\hbar}{c}\frac{\partial}{\partial \tau}[/imath]. Likewise, this leads to a second definition: the expectation value of “mass” is to be given by

 

[math]m=-i\frac{\hbar}{c}\int\vec{\Psi}^\dagger\cdot\frac{\partial}{\partial \tau}\vec{\Psi}dV.[/math]
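Note that this definition is consistent with the tau dependence [imath]e^{iq\tau}[/imath] deduced earlier: for such a function the integral collapses by normalization,

[math]m=-i\frac{\hbar}{c}\int\vec{\Psi}^\dagger\cdot\frac{\partial}{\partial \tau}\vec{\Psi}dV=-i\frac{\hbar}{c}\left(iq\right)\int\vec{\Psi}^\dagger\cdot\vec{\Psi}dV=\frac{q\hbar}{c},[/math]

which is exactly the [imath]m=\frac{q\hbar}{c}[/imath] identified in the deduction of Schroedinger's equation.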

 

Once again, I have managed to define a term (a mathematical operator) applicable to each and every reference index to every element in the entire collection. The relationship between reference indices implied here is a little more involved than energy and momentum. The fact that tau is a totally fictional axis requires not only shift symmetry (which yields conservation of mass when summed over the entire collection) but also yields conservation of mass on the reference index level, as nothing can actually be a function of tau in the final analysis. That is, not only do we have shift symmetry (which yields total mass as a conserved quantity) but we also have the fact that no detail of the final result can possibly be a function of tau. This leads to the conclusion that the “mass” of individual references to valid ontological elements cannot be a function of tau.

 

I'll see what kinds of objections that presentation leads to before I will go on. As a comment to Buffy, this is still a completely abstract paradigm and there is utterly no implied relationship to reality. All I have done is show that there always exists a paradigm designed to yield expectations from a set of numbers which can see those numbers as elements approximately obeying Schroedinger's equation: i.e., time, position, mass, momentum and energy are all terms which can be defined for any collection of numerical indices to be analyzed. Once upon a time (back in the mid eighties) an economics professor asked me how what I was doing related to economics, and I composed a paper for him showing exactly how all the above concepts could be mapped directly into economic theory. Not only that, but most all the economists already knew most of it; they already use terms like “energy” and “momentum” in their own discussions of trends and what kinds of changes one should expect. These are quite well defined universal concepts applicable to any numerical analysis.

 

Have fun -- Dick

Link to comment
Share on other sites

So, Doctordick, are you saying above that you have discovered a mathematical way to "derive" the Schroedinger equation ? It was my understanding that QM holds the Schroedinger's equation to be a "fundamental postulate", with no derivation from other principles required--same as Newton's second law, f = ma, is a fundamental postulate of classical mechanics and derived from nothing more fundamental. But if I read you correctly, you claim the opposite, e.g., you claim that your "fundamental equation":

 

[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = K\frac{\partial}{\partial t}\vec{\Psi}.[/math]

 

leads to derivation of both the Schroedinger equation and Newton's second law (plus many other equations we consider laws of nature)--that is, you argue (in the abstract of course) how QM and classical are but two aspects of a more fundamental dialectic union that is the logical solution of your fundamental equation--would this be correct ?

Link to comment
Share on other sites

So, Doctordick, are you saying above that you have discovered a mathematical way to "derive" the Schroedinger equation ?
All I have said is that Schroedinger's equation is an approximate solution to my "fundamental equation" and, as such, has led me to define the concepts position, mass, momentum and energy (plus time which I defined earlier) as well defined characteristics of any collection of information valuable to the analysis of a collection of numerical indices no matter what the laws behind those indices may be.

 

You examine my deduction and decide if I have "derived" Schroedinger's equation.

 

Does that make sense to you? -- Dick

Link to comment
Share on other sites

All I have said is that Schroedinger's equation is an approximate solution to my "fundamental equation" ....You examine my deduction and decide if I have "derived" Schroedinger's equation....Does that make sense to you?
Yes, very clear, thank you. Since you ask, your deduction clearly has "derived" the Schroedinger's equation, since, as we see here [http://mathworld.wolfram.com/Derivation.html] by definition: {A derivation is a sequence of steps, logical or computational, from one result to another. The word derivation comes from the word "derive."}.

 

Now, since you claim the Schroedinger's equation is an "approximate solution" to your "fundamental equation", and not the "true solution"...your fundamental equation must then (by your definition) be constrained by a "local truncation error". By "local truncation error" I mean nothing more than the mathematical difference that results from approximate solution and true solution in the use of a mathematical deduction using calculus. Thus we must conclude that the "true" essence of reality can never be a solution to your fundamental equation, only the "approximate" essence...would this be correct ?

Link to comment
Share on other sites

Now, since you claim the Schroedinger's equation is an "approximate solution" to your "fundamental equation", and not the "true solution"...your fundamental equation must then (by your definition) be constrained by a "local truncation error".
We are currently talking about a specific approximate solution. Notice I use this for the purpose of setting up some definitions of valuable universally applicable concepts: i.e., my definitions are designed to be universal and are not limited to the correctness of Schroedinger's equation.
Thus we must conclude that the "true" essence of reality can never be a solution to your fundamental equation, only the "approximate" essence...would this be correct ?
I think you are kind of jumping the gun here. You would have to define exactly what you mean by the phrase “the 'true' essence of reality”. Suppose we let that go until I have finished showing what I have discovered of the solutions to my equation.

 

Meanwhile, since you have accepted the fact that Schroedinger's equation is an approximate solution to my fundamental equation, I think I will go ahead and post a rather important problem in the deduction of that relationship. I have used that fact to define a number of concepts as valuable to any numerical analysis; however, there is a very important issue here which needs to be pointed out. It is well known that almost the entirety of classical mechanics can be derived from Schroedinger's equation; whole books have been written on the subject. But I have deduced Schroedinger's equation in one dimension which is not really a very powerful starting point at all.

 

Many concepts of classical mechanics can be deduced but the classical representation achieved in my recent post is entirely a one dimensional thing. Rotation, dynamic scattering and many other aspects of classical mechanics are a consequence of the three dimensional nature of the classical picture and certainly cannot be even defined in a one dimensional picture. However, there is a very important aspect of the analysis I have put forth which provides us with a way of bringing in additional dimensionality.

 

Note that I have said utterly nothing about the laws of behavior of the collection of reference indices I have defined. My equation is required by symmetry principles alone and has absolutely nothing to do with reality (a point Buffy has made a number of times already). That is one reason I have often referred to my paradigm as a data compression mechanism rather than a true “explanation” (it is, in reality, nothing more than a mathematical structure capable of representing any ”what is”, is “what is” tabular explanation; a simple constraint on what is acceptable and what isn't). Against this, one must comprehend that dimensions, in any representation of these reference indices, are really little more than a statement of their independence. In this paradigm, the only dependence is on context anyway (which is mostly references to invalid ontological elements which are a pure figment of our imagination) so the issue of “independence” is rather moot.

 

So, suppose we collect those reference indices in pairs. We can represent a particular pair from the collection represented by the index “[imath]t_i[/imath]” as [imath](x,y)_i[/imath]. The total number of indices might not be even but that is no real problem; we have already created a slew of “invalid” reference indices and throwing in one extra index to make the collection even is of utterly no consequence. Now, instead of keeping track of these references via a point on the x axis, we merely use a point in an (x,y) plane. Once again the problem of losing information arises as, if two “index pairs” happen to be identical, they will plot to the same point. Once again invention of a tau axis to provide separation alleviates the problem and shift symmetry applies independently to all three axes. This leads to exactly the same sort of differential constraints which appeared in my original analysis (which I tried to clarify in a response to Buffy):

[math]\sum_i \frac{\partial}{\partial x_i}\vec{\psi}(x_1,y_1,\tau_1,x_2,y_2,\tau_2, \cdots , x_n,y_n \tau_n,t) = iK_x\vec{\psi},[/math]

 

[math]\sum_i \frac{\partial}{\partial y_i}\vec{\psi}(x_1,y_1,\tau_1,x_2,y_2,\tau_2, \cdots , x_n,y_n \tau_n,t) = iK_y\vec{\psi},[/math]

 

[math]\sum_i \frac{\partial}{\partial \tau_i}\vec{\psi}(x_1,y_1,\tau_1,x_2,y_2,\tau_2, \cdots , x_n,y_n \tau_n,t) = iK_\tau\vec{\psi},[/math]

 

and

 

[math]\frac{\partial}{\partial t}\vec{\psi}(x_1,y_1,\tau_1,x_2,y_2,\tau_2, \cdots , x_n,y_n \tau_n,t) = iK_t \vec{\psi}.[/math]
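These constraints can be verified on the simplest functions satisfying them. The sketch below (Python with sympy, using a hypothetical two-index plane-wave form with the tau and t factors suppressed for brevity) shows that each sum of derivatives just picks out [imath]i[/imath] times the sum of the corresponding wave numbers.

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', real=True)
k1, k2, l1, l2 = sp.symbols('k1 k2 l1 l2', real=True)

# hypothetical two-index plane-wave form (tau and t dependence omitted for brevity)
psi = sp.exp(sp.I * (k1 * x1 + k2 * x2 + l1 * y1 + l2 * y2))

# each shift-symmetry constraint picks out i times the summed wave numbers
Kx = sp.simplify((sp.diff(psi, x1) + sp.diff(psi, x2)) / (sp.I * psi))
Ky = sp.simplify((sp.diff(psi, y1) + sp.diff(psi, y2)) / (sp.I * psi))
print(Kx, Ky)  # k1 + k2, l1 + l2
```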

 

Again, through extended additions of “invalid” ontological elements, the proof that there always exists a collection of such “invalid” ontological elements such that the entire ”what is”, is “what is” table is specified by the “rule” F=0 still stands; however, the implied constraint will now be written:

[math]\sum_{i \neq j }\delta(x_i - x_j)\delta(y_i-y_j)\delta(\tau_i - \tau_j)\vec{\psi} = 0.[/math]

 

Once again, through the use of those anticommuting operators I defined, the five constraints given above can be enforced by

[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\psi} = K\frac{\partial}{\partial t}\vec{\psi} = iKm\vec{\psi}.[/math]

 

where [imath]\vec{x}_i[/imath] is a vector in the x,y,tau space pointing to the point represented by [imath](x,y,\tau)_i[/imath] within the set defined by t and [imath]\vec{\nabla}_i[/imath] is a vector in the x,y,tau space whose components are defined by the expression:

[math]\vec{\nabla}_i = \frac{\partial}{\partial x_i}\hat{x}+\frac{\partial}{\partial y_i}\hat{y}+\frac{\partial}{\partial \tau_i}\hat{\tau}[/math].

 

(Note that, in spite of these changes, the appearance of the fundamental equation is unchanged; it is the dimensionality of the paradigm which has changed.) We must also define a y component for that [imath]\vec{\alpha}_i[/imath]: i.e., the alpha and beta operators must now be defined by:

 

[imath][\alpha_{ix} , \alpha_{jx}] \equiv \alpha_{ix} \alpha_{jx} + \alpha_{jx}\alpha_{ix} = \delta_{ij}[/imath]

 

[imath][\alpha_{iy} , \alpha_{jy}] = \delta_{ij}[/imath]

 

[imath][\alpha_{i\tau} , \alpha_{j\tau}] = \delta_{ij}[/imath]

 

[imath][\beta_{ij} , \beta_{kl}] = \delta_{ik}\delta_{jl}[/imath]

 

and

 

[imath][\alpha_{ix} , \alpha_{iy}]=[\alpha_{ix} , \alpha_{i\tau}]=[\alpha_{iy} , \alpha_{i\tau}]=[\alpha_{ix}, \beta_{kl}]=[\alpha_{iy}, \beta_{kl}]=[\alpha_{i\tau}, \beta_{kl}] = 0[/imath]

 

where

 

[imath]\delta_{ij} = \left\{\begin{array}{ c c }0, & \text{ if } i \neq j \\1, & \text{ if } i=j\end{array} \right.[/imath]
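For a single index (i = j), a concrete representation of these rules can be checked numerically. The sketch below (Python with numpy) uses scaled Pauli matrices as one possible choice; the specific matrices are illustrative only, since it is the defining relations, not any particular representation, that the argument uses.

```python
import numpy as np

# scaled Pauli matrices: one possible single-index representation in which
# each operator squares to (1/2)*I and distinct operators anticommute
s = 1 / np.sqrt(2)
ax = s * np.array([[0, 1], [1, 0]], dtype=complex)
ay = s * np.array([[0, -1j], [1j, 0]], dtype=complex)
atau = s * np.array([[1, 0], [0, -1]], dtype=complex)

anti = lambda A, B: A @ B + B @ A  # the bracket [A, B] as defined above

print(np.allclose(anti(ax, ax), np.eye(2)))          # delta_ii = 1
print(np.allclose(anti(ax, ay), np.zeros((2, 2))))   # distinct operators give 0
print(np.allclose(anti(ay, atau), np.zeros((2, 2))))
```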

 

Exactly the same arguments used to recover the original constraints with the x,tau representation will now recover the five constraints deduced for the x,y,tau representation above. The fundamental equation looks exactly the same but now contains the representation of an additional axis. Likewise, exactly the same arguments which showed Schroedinger's equation in one dimension was an approximate solution to that fundamental equation will now (without any change) show that Schroedinger's equation in two dimensions is an approximate solution to our new representation.

 

Personally, that first equation, sans the interaction term, reminds me a lot of “string theory”: i.e., vibrations on a one dimensional object. In the same vein, we now have, in a sense, vibrations in a plane surface (perhaps someone could come up with a “sheet theory”?). At any rate, this new development yields a more complex view of our analysis problem; it allows the possibility of defining rotation, for example, but it certainly does not bring with it the whole of classical analysis. In order to do that, we need to be working with a three dimensional paradigm. That is no problem at all; we can merely collect those indices in triplets and denote the information references by means of a point in an x,y,z,tau space (tau is still necessary because of exactly the same reasons it was necessary before: without it, information would be lost in the representation). Absolutely every argument I have given here goes through exactly as before and the fundamental equation is unchanged except for the additional axis. Likewise exactly the same arguments I gave for Schroedinger's equation being an approximate solution to that equation lead to the fact that Schroedinger's equation in three dimensions is an approximate solution to the fundamental equation representing this three dimensional paradigm.

 

Since most all of classical mechanics can be deduced from Schroedinger's equation, it follows that this three dimensional paradigm for analyzing any collection of undefined data is a very powerful way of keeping track of one's past and thus of most probable futures: three dimensional classical mechanics must be an excellent approximation of what is to be expected no matter what the real rules might be. I suspect very strongly that this is exactly why we all see the universe as a collection of three dimensional objects in a three dimensional space. It is quite clear that being able to make valuable life and death predictions based on such a simple model of any ”what is”, is “what is” structure is a powerful benefit to survival and millions of years of evolution have ingrained the classical mechanical solution into us. In fact, most all of the animals on earth seem to have an excellent comprehension of what kind of results can be expected from a three dimensional classical mechanical perspective; their very lives depend upon it. It is clearly a very powerful illusion to possess and, as I have shown, it is something which can be known.

 

At least we can be confident that, no matter how long we live and how much data we have to go by, it will always be true for our past. That seems like something worth having even if it is nothing but a valuable illusion; its value is unmeasurable.:cocktail::woohoo::cocktail:

 

I have more if this raises any interest -- Dick:smilingsun:

Link to comment
Share on other sites

I have more if this raises any interest -- Dick:smilingsun:

Yes it certainly does. I just read the last posts in a bit of a hurry and didn't have time to digest much of it, and when I saw all that math I just went holeey s*** :shrug: Have to get around to that at better time.

 

Anyhow, yeah, as you know, I certainly share your view regarding us identifying our surroundings in terms of 3-dimensional objects in 3-dimensional space because it's useful for survival.

 

I originally approached these issues from the perspective of AI (or just intelligence in general), and it was that open-ended nature of our "perceptions" that I figured first, and implications to our best physical models later (I mean figuring out the semantical nature of them). It is kind of interesting that for the most part, seems you approached the issue from the opposite end, i.e. being familiar with the physical models first.

 

I've also had my fair share of "so says you" replies when trying to explain this to people (the AI perspective), so I would certainly love to see people picking up that math and figuring out what it's all about.

 

Anyway, I hope to be able to put some more time into figuring out all that math in the near future...

 

-Anssi

Link to comment
Share on other sites

If the actual function [imath]\vec{\Psi}_2[/imath] were known (i.e., a way of obtaining our expectations for set #2 is known), the above integrals could be explicitly done and we would obtain an equation of the form:

 

[math]\left\{\sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1.[/math]

 

The function f must be a linear weighted sum of alpha and beta operators plus one single term which does not contain such an operator. That single term arises from the final integral of the time derivative of [imath]\vec{\Psi}_2[/imath] on the right side of the original representation of the result of integration:

Then the function f has the form

 

[math]f = K \left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 \right\} + \sum_{i \neq j}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\delta(\tau_i -\tau_j)[/math]

 

I'm having some problems getting the latex to work right for this equation; if you don't understand what it's supposed to say, I'll have to try to get it to work right.

 

Now, this resultant may be a linear differential equation in one variable but it is not exactly in a form one would call “transparent”. In the interest of seeing the actual form of possible solutions allow me to discuss an approximate solution discovered by setting three very specific constraints to be approximately valid. The first of these three is that the data point of interest, [imath]\vec{x}_1[/imath], is insignificant to the rest of the universe: i.e., [imath]P_r[/imath] is, for practical purposes, not much affected by any change in the actual form of [imath]\vec{\Psi}_0[/imath]: i.e., feedback from the rest of the universe due to changes in [imath]\vec{\Psi}_0[/imath] can be neglected. The second constraint will be that the probability distribution describing the rest of the universe is stationary in time: that would be that [imath]P_r[/imath] is, for practical purposes, not a function of t. If that is the case, the only form of the time dependence of [imath]\vec{\Psi}_r[/imath] which satisfies temporal shift symmetry is [imath]e^{iS_rt}[/imath].

Then we know that the t dependence of [imath]\vec{\Psi}_r[/imath] under the constraint that you are suggesting is [imath]e^{iS_rt}[/imath], because you are saying that the probability does not depend on the t (or time) axis, and when the norm of the function containing that term is taken it becomes equal to 1. Now, is the [imath]iS_rt[/imath] term just any constant term or function, or does it matter what it is?

 

At this point, we must carefully analyze the development of the function f created when we integrated over set #2 in our earlier example. As mentioned at the time, f was a linear weighted sum of alpha and beta operators except for one strange term introduced by the time derivative of [imath]\vec{\Psi}_2[/imath]. Please note that, if [imath]P_r[/imath] is insensitive to [imath]\vec{\Psi}_0[/imath] and stationary in time then so is [imath]P_2[/imath]. This follows directly from the fact that [imath]P_2[/imath] is the probability distribution of the “invalid” ontological elements required to constrain the “valid” ontological elements to what is to be explained. There is certainly no required time dependence if the set to be explained has no time dependence, nor can there be any dependence upon [imath]\vec{\Psi}_0[/imath] if the set “r” can be seen as uninfluenced by [imath]\vec{\Psi}_0[/imath]. This leads to the conclusion that

 

[math]K\left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2\right\}\vec{\Psi}_1=iKS_2\vec{\Psi}_1[/math]

 

and that the function “f” may be written [imath]f=f_0 -iKS_2[/imath] where [imath]f_0[/imath] is entirely made up of a linear weighted sum of alpha and beta operators. So long as the above constraints are approximately valid, our differential equation for [imath]\vec{\Psi}_0(\vec{x}_1,t)[/imath] may be written in the following form.
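As a quick sanity check of that reduction, one can verify symbolically that, under the stationary assumption [imath]\vec{\Psi}_2 = e^{iS_2t}\vec{\psi}_2[/imath] with a t-independent, normalized [imath]\vec{\psi}_2[/imath], the phase factors cancel in the integrand and the integral term reduces to the constant [imath]iS_2[/imath]. This is only an illustrative sketch (the single stand-in variable v for the volume [imath]V_2[/imath] is an assumption of the example):

```python
import sympy as sp

t, S2 = sp.symbols('t S_2', real=True)
v = sp.symbols('v', real=True)          # stand-in for the variables of V_2
psi2 = sp.Function('psi_2')(v)          # t-independent part (assumed normalized)

# Stationary form of Psi_2: a pure phase times a t-independent function.
Psi2 = sp.exp(sp.I * S2 * t) * psi2

# Integrand of the "strange term": Psi_2-dagger times dPsi_2/dt.
integrand = sp.conjugate(Psi2) * sp.diff(Psi2, t)

# The phase factors cancel, leaving i*S_2 times |psi_2|^2; integrating the
# normalized |psi_2|^2 over V_2 then just yields the constant i*S_2.
assert sp.simplify(integrand - sp.I * S2 * psi2 * sp.conjugate(psi2)) == 0
```

Since the integral of the normalized density over [imath]V_2[/imath] is one, the whole curly-brace term contributes only the constant multiplier [imath]iS_2[/imath].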

Is this again due to the only possible time dependence of [imath]\vec{\Psi}_2[/imath] being of the form [imath]e^{iS_2t}[/imath], and so it results in nothing more than multiplying the result by [imath]iS_2[/imath]?

Also, this is nothing more than an approximation, so when we have finished solving for [imath]\vec{\Psi}[/imath] this may in fact not be the case and the right side may have a more complex form.

 

For the simple convenience of solving this differential equation, this result clearly suggests that one redefine [imath]\vec{\Psi}_0[/imath] via the definition [imath]\vec{\Psi}_0 = e^{-iK(S_2+S_r)t}\vec{\Phi}[/imath]. If one further defines the integral within the curly braces to be [imath]g(\vec{x}_1)[/imath], [imath]\vec{x}_1[/imath] being the only variable not integrated over, the equation we need to solve can be written in an extremely concise form:

 

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}\vec{\Phi} = K\frac{\partial}{\partial t}\vec{\Phi},[/math]

 

which implies the following operational identity:

 

[math]\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x}) = K\frac{\partial}{\partial t}.[/math]

 

That is, as long as these operators are operating on the appropriate [imath]\vec{\Phi}[/imath] they must yield identical results. If we now multiply the original equation by the respective sides of this identity, recognizing that the multiplication of the alpha and beta operators yields either one half (for all the direct terms) or zero (for all the cross terms) and defining the resultant of [imath]g(\vec{x})g(\vec{x})[/imath] to be [imath]\frac{1}{2}G(\vec{x})[/imath] (note that all alpha and beta operators have vanished), we can write the differential equation to be solved as

 

[math]\nabla^2\vec{\Phi}(\vec{x},t) + G(\vec{x})\vec{\Phi}(\vec{x},t)= 2K^2\frac{\partial^2}{\partial t^2}\vec{\Phi}(\vec{x},t).[/math]
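The bookkeeping behind “one half for all the direct terms, zero for all the cross terms” can be illustrated with a toy matrix sketch. The choice of Pauli matrices scaled by [imath]1/\sqrt{2}[/imath] is purely an illustrative assumption (the actual alpha and beta operators are defined abstractly); any anticommuting set squaring to one half would do:

```python
import numpy as np

# Pauli matrices scaled by 1/sqrt(2): a toy stand-in for the alpha operators.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
alphas = [s / np.sqrt(2) for s in (sx, sy, sz)]

I = np.eye(2)
for i, ai in enumerate(alphas):
    for j, aj in enumerate(alphas):
        anti = ai @ aj + aj @ ai          # symmetrized product
        if i == j:
            # direct terms: a_i^2 = 1/2, so the symmetrized product is 1
            assert np.allclose(anti, I)
        else:
            # cross terms vanish identically
            assert np.allclose(anti, 0 * I)
```

With these properties, squaring the operator identity leaves only [imath]\frac{1}{2}\nabla^2[/imath] from the alpha terms and [imath]\frac{1}{2}G(\vec{x})[/imath] from the g terms, which after multiplication by two gives the equation above.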

I’m not quite sure how you did this. It looks like multiplying the result by two was part of the step, and I can see that on the right side the operation, while quite like multiplication, does seem to have a minor difference in that the partial derivative operates on the one that is already present, making it the second partial with respect to t. But I’m not quite sure what is happening on the left side. I can see that the first term is a result of the partials operating on each other, as on the right side, that the cross terms of the partials containing alphas sum to zero, and that you defined the second term; but why is there no term of the form

 

[math] 4\vec{\alpha}\vec{\nabla}G(\vec{x})\vec{\Phi}(\vec{x},t)[/math]

 

 

 

[math]

\left\{\frac{\partial^2}{\partial x^2} -+ G(x)\right\}\vec{\Phi}(x,t)=\left\{\sqrt{2}K\frac{\partial}{\partial t}- iq\right\}\left\{\sqrt{2}K\frac{\partial}{\partial t}+iq\right\}\vec{\Phi}(x,t).

[/math]
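Setting aside the stray sign noted further down, the factoring of the time-derivative side can be checked symbolically: the cross terms of the two factors cancel, and the product reproduces [imath]2K^2\frac{\partial^2}{\partial t^2} + q^2[/imath]. A sketch, treating [imath]\vec{\Phi}[/imath] as a generic differentiable function of t:

```python
import sympy as sp

t, K, q = sp.symbols('t K q', real=True)
Phi = sp.Function('Phi')(t)

def D(f):
    # The operator sqrt(2)*K*d/dt appearing in both factors.
    return sp.sqrt(2) * K * sp.diff(f, t)

# Apply the two factors in sequence: (D - iq)(D + iq) Phi.
inner = D(Phi) + sp.I * q * Phi
factored = D(inner) - sp.I * q * inner

# Direct expansion: the iq cross terms cancel, leaving 2K^2 Phi'' + q^2 Phi.
direct = 2 * K**2 * sp.diff(Phi, t, 2) + q**2 * Phi
assert sp.simplify(factored - direct) == 0
```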

At this point, I will invoke a third approximation. I will concern myself only with cases where to a high degree of accuracy. In this case, the first term on the right may be replaced by -2iq and, after division by 2q, we have

[math]

\left\{\frac{1}{2q}\frac{\partial^2}{\partial x^2}+\frac{1}{2q}g(x)\right\}\vec{\Phi}(x,t)= -i\left\{\sqrt{2}K \frac{\partial}{\partial t} + iq \right\}\vec{\Phi}(x,t).

[/math]

I can only conclude that on the first line the -+ term is supposed to be just + and on the second line the g(x) is supposed to be an upper case G(x), both of which seem to be just minor errors in the latex.

Would this also be equivalent to saying that the function [imath]\vec{\phi}[/imath] also has no time dependence?

Meanwhile, the fact that the Schroedinger equation is an approximate solution to my equation leads me to put forth a few more definitions. Note to Buffy: there is no presumption of reality in these definitions; they are no more than definitions of abstract relationships embedded in the mathematical constraint of interest to us. That is, these definitions are entirely in terms of the mathematical representation and are thus defined for any collection of indices which constitute references to the elements the function vec{Psi} was defined to explain.

It looks to me like the Schroedinger equation is still a partial differential equation, and so it still is not even a solution. Even if we have a solution to the Schroedinger equation, it only gives us the function [imath]\vec{\Phi}[/imath]; so will we have to reverse the substitutions that you did to obtain the corresponding function [imath]\vec{\Psi}[/imath]?

 

Now, is it an approximate solution in that any solution to it won’t satisfy the fundamental equation, or in that it will only give rise to a particular family of solutions to the fundamental equation?

Just how did you come to these definitions?

It also seems that we could have, by a slightly different set of substitutions and integrations, arrived at the same equation for the invalid elements.

 

Exactly the same arguments used to recover the original constraints with the x,tau representation will now recover the five constraints deduced for the x,y,tau representation above. The fundamental equation looks exactly the same but now contains the representation of an additional axis. Likewise, exactly the same arguments which showed Schroedinger's equation in one dimension was an approximate solution to that fundamental equation will now (without any change) show that Schroedinger's equation in two dimensions is an approximate solution to our new representation.

This seems to lead me to the question: can this be generalized to an N dimensional Schroedinger equation? Although I have no idea how the Schroedinger equation is used, let alone what we would use such a generalization for.

There is probably more that I could bring up, but I think that this covers most of the things that I’m wondering about right now.


Sorry about the delay in my response to Bombadil. In his post, he brings up an issue first broached by Qfwfq last October.

However, I'm not so sure K needs to be constant (x-independent), it seems to me that a real-valued K(x) wouldn't break the symmetry as [imath]\psi(x)[/imath] would have an x-independent modulus anyway (indeed, I get the second derivative of P being zero too). :confused:
I came to the conclusion that this issue needs to be settled once and for all. In an attempt to get an understanding of the difficulty everyone seems to have comprehending what I am doing, I went back through the entire thread looking at what I said in the past and how people reacted to it. As a consequence, I came to the conclusion that everyone is missing one very significant point. My work was not at all ever concerned with creating explanations of anything. The opening issue (using numerical labels to refer to the underlying ontological elements on which the explanations are based, together with the idea of expressing our expectations as a probability of seeing some specific set) allowed me to express a one to one correspondence between any explanation and a mathematical function. The known past IS a tabular representation of specific points on that function: i.e., the ”what is”, is “what is” tabular explanation.

 

I expressed that function as the norm of a vector in an abstract space for the simple reason that absolutely any mathematical function can be so represented: i.e., [imath]\vec{\Psi}[/imath] can represent any possible transformation from one set of numbers to another. So the solution (the explanation) lies in a specific function [imath]\vec{\Psi}[/imath]. I am not concerned with that problem at all. My only interest is with the constraints imposed upon that solution by the fact that the definitions of those reference labels used to express that solution are defined by the solution: i.e., there exists no information outside the things those reference labels refer to. So what I am presenting is an additional set of constraints imposed on the solution not by reality but simply by the symmetries brought on by the freedom of that labeling system. By expressing the explanation as a mathematical function, I am able to write down the explicit constraints flowing out of the symmetries required by the freedom in that labeling system.

 

This post is very much directed to Qfwfq, Buffy and Erasmus00 as they seem to have dropped out of the discussion. I do not know if they have simply lost interest in the thread or have decided that I can not be argued with. Neither possibility appeals to me as I very much respect their reasoning powers and would like to have their respect as a rational person.

 

As I said, I was moved to write this when Bombadil brought up the issue originally complained about by Qfwfq. That was my assertion that K in my fundamental equation is a constant. Essentially Bombadil is once again asking exactly that very same question.

 

Let's go back to the idea that shift symmetry requires [imath]\sum\frac{\partial}{\partial x_i}P=0[/imath] which I presume you will all accept (except perhaps for Buffy who has never responded to my last post to her; I would really like to know what your reaction was to that argument). Having defined P to be the squared norm of an abstract vector function [imath]\vec{\Psi}[/imath] we need to express this constraint on that function. I don't think you had any problem with the idea that [imath]\sum\frac{\partial}{\partial x_i}\vec{\Psi}=0[/imath] would satisfy that constraint. The problem arose when I proposed replacing that expression with [imath]\sum\frac{\partial}{\partial x_i}\vec{\Psi}=-iK\vec{\Psi}[/imath] (the first i is a tag on x whereas the second i is the imaginary number [imath]\sqrt{-1}[/imath]). I wanted K to be limited to a constant. You, on the other hand, wanted to open the issue up to more complex functionality and I baulked. Perhaps you can comprehend my complaint if you can understand the following argument.

 

Try looking at my position from this perspective: I chose the form of [imath]\vec{\Psi}[/imath] because it placed utterly no limitation on the method of obtaining P. Within that set of “all methods” of obtaining P the expression

[math] e^{-iK(x_1+x_2+\cdots+x_n)}\vec{\Psi}[/math]

 

yields exactly the same expectations as [imath]\vec{\Psi}[/imath]. You held that the solutions would be opened up by adding in the possibility that K could be a function of the [imath](x_1,x_2,\cdots,x_n)[/imath]. What you seem to have failed to appreciate was the fact that my move had not added any solutions to the mix. When P is calculated as the norm of an abstract n dimensional vector, the possibility of a plane in that space representing a mere shift in direction of that vector with respect to rotation in that plane always exists. All I did was abstract that plane out of that vector representation and express its consequences in an explicit manner. Why did I do that? First, because it was very easy to represent such a possibility in my paradigm, second, the representation yielded a very convenient fundamental equation and third, the existence of such a phase shift in no way altered the effective probabilities [imath]\vec{\Psi}[/imath] was created to express.
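The claim that such a phase factor changes no expectations is easy to confirm numerically. In this sketch an arbitrary complex vector stands in for [imath]\vec{\Psi}[/imath] evaluated at one argument set; the specific numbers are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy abstract vector Psi evaluated at one argument set (x_1,...,x_n);
# any complex vector will do -- the argument is independent of its form.
n = 5
x = rng.normal(size=n)
Psi = rng.normal(size=3) + 1j * rng.normal(size=3)

K = 2.7                                   # any real constant
phase = np.exp(-1j * K * x.sum())         # the unit-modulus phase factor

P_original = np.vdot(Psi, Psi).real       # squared norm of Psi
P_shifted = np.vdot(phase * Psi, phase * Psi).real

# The phase factor cannot change the squared norm, so P is untouched.
assert np.isclose(P_original, P_shifted)

# The same holds for a real-valued K(x) (the variant under discussion):
# the modulus of a unit phase is 1 regardless of how its argument varies.
phase_fn = np.exp(-1j * np.sin(x).sum())
assert np.isclose(np.vdot(phase_fn * Psi, phase_fn * Psi).real, P_original)
```

The second check is exactly why opening K up to x dependence adds no new probabilities: any unit-modulus multiplier drops out of the squared norm.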

 

You wanted to add more complex functions. My question is, why? What purpose would such an addition fulfill? We have already included “all possibilities” in the very structure of [imath]\vec{\Psi}[/imath]. I did what I did for my convenience, not because it had to be done. Just as an aside, [imath]E_8[/imath] “probably reflects the real world somehow” and it could certainly be embedded in an abstract space used to represent [imath]\vec{\Psi}[/imath]. If that could be abstracted out as some kind of complex 248-dimensional rotation yielding no change in the expectations expressed by [imath]\vec{\Psi}[/imath], then it might very well yield an additional valuable alteration to my equation. If I were fifty years younger, I think I would find examination of such a project interesting but, for the moment, it is not in my plans.

 

But the issue here is, “keep it simple!” Erasmus00, you were apparently very disturbed by constraining my examination to a first order linear differential equation.

However, if one can rigorously show that NO linear equation can provide those probabilities (i.e. show that there exist non-linear equations that cannot map to linear equations), does that imply your equation is wrong?
You also need to comprehend the fact that any procedure for generating your expectations can be seen as a way of getting from the description of the case of interest to your expectations, and that can be seen as the definition of [imath]\vec{\Psi}[/imath]. All I have done is written down an equation on [imath]\vec{\Psi}[/imath] required by the simple fact that the “language?” used to express your description is a totally undefined structure. This is no more than a very simple additional constraint required by a logical analysis of the source of that language. As such, it must display some very interesting symmetries, some of which are represented by my fundamental equation. You should be sufficiently familiar with first order linear differential equations to understand that the equation puts no real constraints on the form of [imath]\vec{\Psi}[/imath] (that form is set by the boundary conditions: what we know); it merely tells us how those solutions must change in order to be consistent with the symmetries. It is little more than an additional logical constraint on rational expectations such that new information will not change the structure of your solution. Scientists do that all the time; this is no more than an analytical expression of the idea.

 

Finally, to Buffy, I would really like to hear your current position on what I have been saying. If you have decided that I am a nut not worth reading, I would like to know about that also and I will take no offense.

 

So, at this point, I think I am ready to respond to Bombadil.

Then the function f has the form

 

f = K \left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1 + \sum_{i \neq j #1}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\delta(\vec{\tau }_i -\vec{\tau }_j)\Psi}_1

 

I'm having some problems getting the LaTeX to work right for this equation; if you don't understand what it's supposed to say, I'll have to try to get it to work right.

After close examination of your latex, I found two apparent errors: first, I believe the # sign requires a back slash as it apparently has some meaning to the latex software. Secondly, you need a “\vec{“ opening on the last Psi. (Not to mention the need for a “math” and “/math” tag to run the latex software.) If those changes are made, what results is

[math]f = K \left\{ \int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2\right\}\vec{\Psi}_1 + \sum_{i \neq j\ \#1}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\delta(\vec{\tau}_i -\vec{\tau}_j)\vec{\Psi}_1[/math]

 

which is not a correct representation of f. I do not know if that is simply a problem with your latex or with your understanding. Please examine the equation immediately preceding my assertion that f would be a weighted sum over alpha and beta operators plus one term without such an operator and you should be able to work a representation of that sum out. If you cannot, let me know and I will write it out in detail. There is really no need to work it out in detail as the only fact of interest at this point is that every term (except one) has either a single alpha or a beta operator multiplying it. After we do the integrals, we have to have a weighted sum of those operators plus one term lacking such an operator.

Then we know that the t dependence of [imath]\vec{\Psi}_r[/imath] under the constraint that you are suggesting is [imath]e^{iS_rt}[/imath] because you are saying that the probability does not depend on the t (or time) axis, and when the norm of the function containing that term is taken it becomes equal to 1. Now, is the [imath]iS_rt[/imath] term just any constant term or function, or does it matter what it is?
Remember, one of the original constraints was [imath]\frac{\partial}{\partial t}\vec{\Psi}=-iK_t\vec{\Psi}[/imath] which has a trivial solution [imath]\vec{\Psi}=e^{-iK_tt}\vec{\psi}[/imath] where [imath]\vec{\psi}[/imath] has no dependence upon t at all. I wouldn't say the norm is one since the norm is dependent upon the other arguments. What is important here is that shift symmetry in t requires a conserved quantity “[imath]K_t[/imath]”. Exactly the same argument goes through for [imath]\vec{\Psi_r}[/imath]. If the probability distribution of the “rest of the universe” is not a function of time, then one has [imath]\frac{\partial}{\partial t}P_r=0[/imath], essentially exactly the same consequence as previously engendered by shift symmetry, which once again leads us to a conserved quantity related to that differential. In this case, I represented the conserved quantity by “[imath]S_r[/imath]”. Likewise the same arguments go through for the invalid elements represented by [imath]\vec{\Psi_2}[/imath]. I get the definite impression that you understand this; I just wrote it out again to make sure. If you have no arguments with what I just said, we won't need to worry about it.
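That trivial solution, and the resulting time independence of the probability, can be confirmed symbolically. A sketch, assuming only that [imath]\vec{\psi}[/imath] carries no t dependence at all:

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
K = sp.symbols('K_t', real=True)          # the conserved quantity
psi = sp.Function('psi')(x)               # no t dependence at all

# The trivial solution to dPsi/dt = -i K_t Psi:
Psi = sp.exp(-sp.I * K * t) * psi

# It satisfies the constraint identically...
assert sp.simplify(sp.diff(Psi, t) + sp.I * K * Psi) == 0

# ...and the probability density it generates carries no t dependence,
# since the unit-modulus phase cancels against its conjugate.
P = sp.simplify(Psi * sp.conjugate(Psi))
assert sp.diff(P, t) == 0
```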
Also, this is nothing more than an approximation, so when we have finished solving for [imath]\vec{\Psi}[/imath] this may in fact not be the case and the right side may have a more complex form.
Absolutely correct; but we are not going to worry about it because I have already said we are only going after an approximation. My purpose has nothing to do with finding solutions but rather with expanding the vocabulary which can be used with my paradigm.
I’m not quite sure how you did this, ...
That cross term should certainly be there; however, when I set up my first approximation, I essentially said this term has a negligible influence on the result.
The first of these three is that the data point of interest, [imath]\vec{x}_1[/imath], is insignificant to the rest of the universe: i.e., [imath]P_r[/imath] is, for practical purposes, not much affected by any change in the actual form of [imath]\vec{\Psi}_0[/imath]: i.e., feedback from the rest of the universe due to changes in [imath]\vec{\Psi}_0[/imath] can be neglected.
What you are looking at is the partial of [imath]G(\vec{x})[/imath] with respect to [imath]\vec{x}[/imath]. Now that partial is essentially a measure of the change in the form of G as a function of changes in [imath]\vec{x}[/imath]: i.e., it is essentially a kind of feedback effect which I approximate as negligible so I can drop the term. The whole purpose of this particular exercise is to establish a definition of momentum (together with mass and energy). Having done that, the term which we have omitted is quite clearly analogous to inserting momentum dependence into the potential used in the Schroedinger equation, but I could not have said such a thing without defining “momentum”.
The serious question then is, what happens to my derivation when those constraints are relaxed? If one examines that derivation carefully, one will discover that the only result of these constraints was to remove the time dependent term from the linear weighted sum expressed by g(x). If this term is left in, g(x) will be complicated in three ways: first, the general representation must allow for time dependence; second, the representation must allow for terms proportional to [imath]\frac{\partial}{\partial x}[/imath] and, finally, the resultant V(x) will be a linear sum of the alpha and beta operators.
As I said, momentum dependent potentials are not seen except in quite advanced applications of Schroedinger's equation. Just take my word for it as the only issue here is that any solution of Schroedinger's equation is an approximate solution to my fundamental equation.
I can only conclude that on the first line the -+ term is supposed to be just + and on the second line the g(x) is supposed to be an upper case G(x), both of which seem to be just minor errors in the latex.
You are absolutely correct. I erased the q and forgot the sign. When I rewrote the relationship divided by 2q, I went back to lower case. I carried the same error down to the definition of the potential. I apologize for being so sloppy and I will edit the post to fix these errors.

 

There is an additional comment which can be made here. Notice that mass is essentially momentum in the tau direction (that fictional axis I inserted in order to avoid loss of information in the representation). The Heisenberg uncertainty principle, that the uncertainty in x times the uncertainty in momentum in the x direction is given by [imath]\frac{\hbar}{2}[/imath] (which comes from wave analysis: Schroedinger's equation is a “wave equation” by the way), informs us that the uncertainty in m can be damned near zero. So the world view in this paradigm is of a four dimensional Euclidean universe where everything of interest is momentum quantized in the tau direction. That very issue removes tau from observation (it is projected out by the uncertainty in tau). This brings to mind the fact that almost all experiments done by scientists are performed in laboratories constructed of “mass quantized entities” and analyzed with instruments which are also made of “mass quantized entities”.:roll: Just pointing out that my paradigm isn't really all that implausible. :jab:
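The [imath]\frac{\hbar}{2}[/imath] bound quoted there is saturated by a Gaussian wave packet, which is the standard wave-analysis result being invoked. A numerical sketch (the width sigma is an arbitrary illustrative choice, in units where [imath]\hbar = 1[/imath]):

```python
import numpy as np

hbar = 1.0          # work in units where hbar = 1
sigma = 0.8         # width of the Gaussian packet (arbitrary choice)
x = np.linspace(-40, 40, 200001)
dx = x[1] - x[0]

# Normalized Gaussian: the minimum-uncertainty state of wave analysis.
psi = (1.0 / (2 * np.pi * sigma**2) ** 0.25) * np.exp(-x**2 / (4 * sigma**2))

P = np.abs(psi) ** 2
mean_x = np.sum(x * P) * dx
dx_unc = np.sqrt(np.sum((x - mean_x) ** 2 * P) * dx)

# Momentum uncertainty via the derivative form <p^2> = hbar^2 int |psi'|^2 dx
dpsi = np.gradient(psi, dx)
dp_unc = hbar * np.sqrt(np.sum(dpsi ** 2) * dx)

# The product saturates the Heisenberg bound hbar/2.
assert np.isclose(dx_unc * dp_unc, hbar / 2, rtol=1e-3)
```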

 

There is another interesting observation to be made here. In this paradigm, negative mass is simply momentum in the negative direction in tau. (Note that, if q in my derivation is taken to be negative, we just make p=-q and divide by 2p, essentially removing the other factor from the pair on the right.) The point being that we will again obtain [imath]E=mc^2[/imath]: i.e., we do not obtain a negative mass. Essentially, energy is the magnitude of the four dimensional momentum. In my paradigm, it is no more possible to convert mass directly to energy than it is to convert momentum directly to energy; both transitions are severely constrained by kinematics and there is no need to alibi away the failure of massive objects to spontaneously decay into energy. (Just some interesting observations!)

Would this also be equivalent to saying that the function [imath]\vec{\phi}[/imath] also has no time dependence?
Not really, as in the Schroedinger representation a phase term is still there (the energy would be zero otherwise). Furthermore, we have not removed the entire contribution of K (i.e., [imath]K \neq S_r+S_2[/imath]); there is still an energy term associated with [imath]\vec{\phi}[/imath] and that energy would need to be represented by a phase term.
It looks to me like the Schroedinger equation is still a partial differential equation, and so it still is not even a solution. Even if we have a solution to the Schroedinger equation, it only gives us the function [imath]\vec{\Phi}[/imath]; so will we have to reverse the substitutions that you did to obtain the corresponding function [imath]\vec{\Psi}[/imath]?
The actual solution is very dependent upon the context, as are the solutions to Schroedinger's equation. If we know that context (we know our expectations for the rest of the universe) our expectations for the elemental reference “x” are fundamentally given by Schroedinger's equation; there is very little to reverse here so long as we are satisfied with the approximations that need to be used (mainly that the energy is very close to [imath]mc^2[/imath]). Note that m here is what is ordinarily called rest mass (since mass is essentially momentum in the tau direction, the energy is not [imath]mc^2[/imath] unless the momentum perpendicular to tau, what physicists ordinarily call momentum, is a negligible component: i.e., this is a non-relativistic situation). In fact, for a free element (free meaning the context implies v(x) is negligible) the actual energy associated with the index x (i.e., without that energy level shift made to achieve Schroedinger's equation) would be given by [imath]E=c\sqrt{P_x^2+P_y^2+P_z^2+m^2c^2}[/imath]; another rather familiar relativistic expression. ;)
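The connection between that relativistic expression and the "energy very close to [imath]mc^2[/imath]" approximation can be made explicit by expanding for small momentum. A symbolic sketch, writing p for the magnitude [imath]\sqrt{P_x^2+P_y^2+P_z^2}[/imath]:

```python
import sympy as sp

p, m, c = sp.symbols('p m c', positive=True)

# The energy associated with the index x, per the expression above,
# with p standing for the magnitude of the ordinary momentum.
E = c * sp.sqrt(p**2 + m**2 * c**2)

# Expanding for small p recovers the rest energy plus the familiar
# Newtonian kinetic term -- the non-relativistic regime invoked above.
nonrel = sp.series(E, p, 0, 4).removeO()
assert sp.simplify(nonrel - (m * c**2 + p**2 / (2 * m))) == 0
```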
Now is it an approximate solution in that any solution to it won’t satisfy the fundamental equation or that it will only give rise to a particular family of solutions to the fundamental equation?
It is exactly the correct solution to my equation if the approximations I have put forth are negligible to our interests; remember we are generating our expectations, we are not predicting the future.
Just how did you come to these definitions?
They are exactly the representation used in Schroedinger's equation by hypothesis (physicists defined momentum, mass and energy long before Schroedinger's equation existed). I defined them the way I did (notice that, prior to this, my paradigm did not contain any definitions beyond x, tau, and t) because I have the freedom to define any mathematical relationships I wish, so long as I make it clear as to what I mean by the term.
It also seems that we could have, by a slightly different set of substitutions and integrations, arrived at the same equation for the invalid elements. The residue resides in the integrals used to develop G(x). Once the identity with energy is obtained, it becomes clear that it is possible for the solution to exchange energy
That is not difficult to do at all and it has to be true from the constraint that, whatever the rules of behavior are, the valid and invalid elements must obey exactly the same rules: i.e., the equation cannot provide a mechanism for differentiating between valid and invalid ontological elements.
This seems to lead me to the question: can this be generalized to an N dimensional Schroedinger equation? Although I have no idea how the Schroedinger equation is used, let alone what we would use such a generalization for.
I will simply let that slide until we get further down the road as the only real issue I intended to bring forth here is that Schroedinger's equation is an approximation to my fundamental equation and use this fact to provide support for my definitions of momentum, mass and energy as reasonably in line with conventional meanings of these terms.

 

As I said, there are whole books which show how Newtonian mechanics can be deduced from Schroedinger's equation. This suggests a very profound notion: that is the fact that any internally consistent explanation of anything will be ruled by Newtonian physics so long as the necessary approximations are valid. It is likewise very reasonable that we would all have essentially the same background mental image of reality: it is the only internally self consistent background image possible.

 

Have fun -- Dick


Try looking at my position from this perspective: I chose the form of [imath]\vec{\Psi}[/imath] because it placed utterly no limitation on the method of obtaining P. Within that set of “all methods” of obtaining P the expression

 

[math]e^{-iK(x_1+x_2+\cdots+x_n)}\vec{\Psi}[/math]

 

yields exactly the same expectations as [imath]\vec{\Psi}[/imath]. You held that the solutions would be opened up by adding in the possibility that K could be a function of the [imath](x_1,x_2,\cdots,x_n)[/imath]. What you seem to have failed to appreciate was the fact that my move had not added any solutions to the mix. When P is calculated as the norm of an abstract n dimensional vector, the possibility of a plane in that space representing a mere shift in direction of that vector with respect to rotation in that plane always exists. All I did was abstract that plane out of that vector representation and express its consequences in an explicit manner. Why did I do that? First, because it was very easy to represent such a possibility in my paradigm, second, the representation yielded a very convenient fundamental equation and third, the existence of such a phase shift in no way altered the effective probabilities [imath]\vec{\Psi}[/imath] was created to express.

From the start of the idea of adding the element [math]e^{-iK(x_1+x_2+\cdots+x_n)}[/math] I don’t think that the complaint has been with whether it will change the probability P, but that the term would change the partial derivatives with respect to x and tau and modify the value of [imath]\vec{\Psi}[/imath]. What at least I had missed is that the value of [imath]\vec{\Psi}[/imath] has no real significance; only the constraints on [imath]\vec{\Psi}[/imath], and as a result the resulting constraints on P, are what is of interest. After some closer examination of the constraints, it is clear that modifying the K in [math]e^{-iK(x_1+x_2+\cdots+x_n)}[/math] to a function rather than a constant can add no additional element that can have any effect on how P behaves (it also can’t remove any element from P).

 

I’m putting this on not because I’ve still got a problem with what you did, but because I think I have some idea of how we can say K is a constant. Perhaps putting it this way might help some people understand just what you did and why you did it this way, as well as giving you some more ideas as to what some of the problems other people have had with what you have done.

After close examination of your latex, I found two apparent errors: first, I believe the # sign requires a back slash as it apparently has some meaning to the latex software. Secondly, you need a “\vec{“ opening on the last Psi. (Not to mention the need for a “math” and “/math” tag to run the latex software.) If those changes are made, what results is

 

[math]f = K \left\{ \int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2\right\}\vec{\Psi}_1 + \sum_{i \neq j\ \#1}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\delta(\vec{\tau}_i -\vec{\tau}_j)\vec{\Psi}_1[/math]

 

which is not a correct representation of f. I do not know if that is simply a problem with your latex or with your understanding. Please examine the equation immediately preceding my assertion that f would be a weighted sum over alpha and beta operators plus one term without such an operator and you should be able to work a representation of that sum out. If you cannot, let me know and I will write it out in detail. There is really no need to work it out in detail as the only fact of interest at this point is that every term (except one) has either a single alpha or a beta operator multiplying it. After we do the integrals, we have to have a weighted sum of those operators plus one term lacking such an operator.

I had copied the code from a couple of different places, and I must have missed the \vec and didn’t realize that a back slash needed to be used with the # symbol; as for the math tags, every time I put them on it just displayed a syntax error.

 

After closer examination I have come up with

 

[math]f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)=\left\{\frac{1}{n}\int_{}^{}k\Psi_2\frac{\partial}{\partial t}\Psi dv+\sum_{j=\#2}^{}\beta_{ij} \delta (\vec x_i-\vec x_j)+\int_{}^{}\vec\Psi \vec\alpha \vec\nabla\vec\Psi dv\right\}[/math]

 

as the form f should take. This still doesn’t seem quite right to me: for one thing, it seems the beta elements should all vanish; also, the first term has a 1/n factor due to it being inside of the sum in your equation from 1 to n (I can only conclude that this is all elements in set #1), so it could be removed by moving it to the outside of the sigma. This is what I’m coming up with for the function f that should be substituted into the equation

 

[math]

\left\{\sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1

[/math]

 

which is the reason for leaving out the term [math]\vec{\Psi}_1[/math] at the end of the equation.

There is an additional comment which can be made here. Notice that mass is essentially momentum in the tau direction (that fictional axis I inserted in order to avoid loss of information in the representation). The Heisenberg uncertainty principle, that the uncertainty in x times the uncertainty in momentum in the x direction is given by [imath]\frac{\hbar}{2}[/imath] (which comes from wave analysis: Schroedinger's equation is a "wave equation", by the way), informs us that the uncertainty in m can be damned near zero. So the world view in this paradigm is of a four dimensional Euclidean universe where everything of interest is momentum quantized in the tau direction. That very issue removes tau from observation (it is projected out by the uncertainty in tau). This brings to mind the fact that almost all experiments done by scientists are performed in laboratories constructed of "mass quantized entities" and analyzed with instruments which are also made of "mass quantized entities". I am just pointing out that my paradigm isn't really all that implausible.
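As a side note from the editor, the [imath]\frac{\hbar}{2}[/imath] figure quoted above is easy to check numerically. Here is a minimal Python sketch (my own illustration, not part of the original argument; units chosen so [imath]\hbar = 1[/imath] and all grid parameters are arbitrary) showing that a Gaussian wave packet saturates the Heisenberg bound: its position uncertainty times its momentum uncertainty comes out to [imath]\hbar/2[/imath].

```python
import numpy as np

# A Gaussian wave packet saturates the Heisenberg bound: dx * dp = hbar/2.
# Units chosen so hbar = 1; sigma and the grid are arbitrary choices.
hbar = 1.0
sigma = 1.0
x = np.linspace(-40, 40, 4096)
dx = x[1] - x[0]
psi = (2*np.pi*sigma**2)**-0.25 * np.exp(-x**2 / (4*sigma**2))

# Position uncertainty from the probability density |psi(x)|^2
px = np.abs(psi)**2 * dx
mean_x = np.sum(x * px)
unc_x = np.sqrt(np.sum((x - mean_x)**2 * px))

# Momentum uncertainty from the Fourier transform of psi
p = 2*np.pi*np.fft.fftfreq(x.size, d=dx) * hbar
pp = np.abs(np.fft.fft(psi))**2
pp /= pp.sum()
mean_p = np.sum(p * pp)
unc_p = np.sqrt(np.sum((p - mean_p)**2 * pp))

print(unc_x * unc_p)  # ~ 0.5, i.e. hbar/2
```

Any non-Gaussian packet gives a strictly larger product; the Gaussian is the minimum-uncertainty state, which is why the bound is quoted as exactly [imath]\frac{\hbar}{2}[/imath].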

I suspect that an explanation of how the term [imath]\frac{\hbar}{2}[/imath] comes about is, for the time being, unimportant and unnecessary, but I'm not sure just what the term [imath]\hbar[/imath] is. The way that you are using it, it seems that it is just a constant, but does it have a certain value, or should it be considered just any constant?

 

Now, will the Heisenberg uncertainty principle only be valid with the Schroedinger equation? That is, is the uncertainty in x times the uncertainty in momentum in the x direction only given by [imath]\frac{\hbar}{2}[/imath] as long as we are using the Schroedinger equation as a solution? If so, is there a generalized form of it that can be used with the fundamental equation?

 

Also, the Schroedinger equation is a wave equation, and it will satisfy the fundamental equation with the constraints you have put on it; so, are all possible solutions to the fundamental equation, without those constraints, also going to be wave equations?

The actual solution is very dependent upon the context, as are the solutions to Schroedinger's equation. If we know that context (we know our expectations for the rest of the universe), our expectations for the elemental reference "x" are fundamentally given by Schroedinger's equation; there is very little to reverse here so long as we are satisfied with the approximations that need to be used (mainly that the energy is very close to [imath]mc^2[/imath]). Note that m here is what is ordinarily called rest mass (since mass is essentially momentum in the tau direction, the energy is not [imath]mc^2[/imath] unless the momentum perpendicular to tau (what physicists ordinarily call momentum) is a negligible component: i.e., this is a non-relativistic situation). In fact, for a free element (free meaning the context implies V(x) is negligible), the actual energy associated with the index x (i.e., without that energy level shift made to achieve Schroedinger's equation) would be given by [imath]E=c\sqrt{P_x^2+P_y^2+P_z^2+m^2c^2}[/imath]; another rather familiar relativistic expression.

Then will all the solutions to the fundamental equation also be dependent on the context (it seems that they must be)?

 

The term looks somewhat familiar; maybe it is the expression for total energy, although this may not be the case. I'm not sure what the P terms are (I think that they are partial derivatives), and I'm not very familiar with relativity, so this is likely why I'm not sure what it is; also, I'm not sure of just how you got it.
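An editorial aside: the expression being asked about is the standard relativistic energy-momentum relation, and expanding the square root for momentum small compared to [imath]mc[/imath] (a routine Taylor expansion, not something stated in the post) shows why "energy very close to [imath]mc^2[/imath]" is exactly the non-relativistic condition:

[math]E=c\sqrt{P^2+m^2c^2}=mc^2\sqrt{1+\frac{P^2}{m^2c^2}}\approx mc^2+\frac{P^2}{2m}, \qquad P^2 = P_x^2+P_y^2+P_z^2.[/math]

In that reading the P terms are momentum components (not partial derivatives), and the correction to [imath]mc^2[/imath] is just the Newtonian kinetic energy.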


  • 3 weeks later...

Hi Bombadil, sorry about being so slow in my response. For a number of reasons, I wasn't exactly sure how I should respond. Since I have had a number of household projects underway and not a lot of time to think about my response, I have been sort of waiting for Anssi to get free enough to continue our conversation. Meanwhile, I have come up with a little free time and I had promised that, if you couldn't come up with the correct form of f, I would show you the correct procedure. I have laid it out below.

[math]f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)=\left\{\frac{1}{n}\int_{}^{}k\Psi_2\frac{\partial}{\partial t}\Psi dv+\sum_{j=\#2}^{}\beta_{ij} \delta (\vec x_i-\vec x_j)+\int_{}^{}\vec\Psi \vec\alpha \vec\nabla\vec\Psi dv\right\}[/math]

 

is not even close to the correct representation of f, as you have not included all the integrations required. I have a suspicion that your understanding of algebra is not up to the task of following my work. See if you can follow the steps below. You need to start from the expression

[math]\left\{\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\Psi}_1 + \left\{2 \sum_{i=\#1 j=\#2}\int \vec{\Psi}_2^\dagger \cdot \beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 \right. +[/math]

[math] \left.\int \vec{\Psi}_2^\dagger \cdot \left[\sum_{\#2} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right]\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1+K \left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1[/math]

 

Then take a look at expression following the sentence, “If the actual function [imath]\vec{\Psi}_2[/imath] were known (i.e., a way of obtaining our expectations for set #2 is known), the above integrals could be explicitly done and we would obtain an equation of the form:”

[math] \left\{\sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1. [/math]

 

The first expression was written in the form of three elements enclosed by curly brackets so that you would see it as such a collection. The first curly bracket contains

[math]\left\{\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#1)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\right\}\vec{\Psi}_1[/math]

 

which contains no arguments at all from set #2. It follows that, so far as that term is concerned, left multiplication by [imath]\vec{\Psi}_2^\dagger[/imath] and integrating over all arguments from set #2 has yielded exactly one (the sum total of all possibilities has a probability of one by definition). Secondly, since [imath]\vec{\Psi}_1[/imath] is asymmetric with regard to exchange of any arguments, the Dirac delta function shown there must exactly vanish. Thus that first curly bracket comes down to

[math] \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_1.[/math]

 

This is exactly the first term of the declared result of the above integration. Furthermore, the last term of the declared result is clearly the first term of what is enclosed in the last curly bracket (where, once more, the integration over set #2 is exactly one), specifically:

[math]K\frac{\partial}{\partial t}\vec{\Psi}_1. [/math]

 

It follows trivially that f must be the collection of remaining terms (what is enclosed in the middle curly brackets plus that other time derivative in the third curly bracket); namely,

[math]f=\left\{2 \sum_{i=\#1 j=\#2}\int \vec{\Psi}_2^\dagger \cdot \beta_{ij}\delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 + \int \vec{\Psi}_2^\dagger \cdot \left[\sum_{\#2} \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j (\#2)}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right]\vec{\Psi}_2 dV_2 \right\}[/math]

[math]-K \left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 \right\}[/math]

 

or, rearranging terms,

[math]f= \left\{\sum_{i=\#1 j=\#2}2\beta_{ij}\int \vec{\Psi}_2^\dagger\cdot \delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2\right\} +\left\{\sum_{\#2} \vec{\alpha}_i \cdot \int \vec{\Psi}_2^\dagger \cdot \vec{\nabla}_i \vec{\Psi}_2 dV_2\right\} +[/math]

[math]\left\{\sum_{i \neq j (\#2)}\beta_{ij}\int\vec{\Psi}_2^\dagger \cdot \delta(\vec{x}_i -\vec{x}_j) \vec{\Psi}_2 dV_2\right\}-K \int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 [/math]

 

Simple observation discloses that, if [imath]\vec{\Psi}_2[/imath] is known, the integrals can be done and the only arguments left in the expression are from set #1 (all other arguments have been integrated out) and that f is a weighted sum of alpha and beta operators (weighted by the various integrals) plus one term without such an operator.
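The antisymmetry argument used in the derivation above (a function asymmetric under exchange of two arguments must vanish when those arguments are equal, which is what kills the equal-argument delta terms) is easy to sanity-check numerically. Here is a minimal Python sketch using a made-up antisymmetric two-argument function; the Gaussian envelope is arbitrary and chosen only for illustration.

```python
import numpy as np

# A function antisymmetric under exchange of its two arguments:
# psi(x1, x2) = -psi(x2, x1). The envelope is arbitrary; only the
# (x1 - x2) factor matters for the exchange antisymmetry.
def psi(x1, x2):
    return np.exp(-(x1**2 + x2**2)) * (x1 - x2)

a, b = 0.3, 0.9
print(psi(a, b), psi(b, a))  # equal magnitude, opposite sign
print(psi(a, a))             # exactly 0: f(a,a) = -f(a,a) forces f(a,a) = 0
```

This is precisely the point made earlier in the thread: zero is the only number equal to its own negative, so no knowledge of Fermi statistics is needed to see why those delta terms drop out.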

 

The rest of your post seems to indicate a rather common misunderstanding of my intentions here. Reading your post reminds me of an old joke. The joke involves three characters: an engineer, a physicist and a mathematician. The question is, what is their behavior while walking down a hall at the university and noticing a waste basket full of paper which is on fire in a classroom? The engineer gets a bucket of water and pours it into the waste basket, thus putting the fire out. The physicist examines the fire, measuring the height, temperature and color of the flames. He checks how the fire lights up the room, how it influences the temperature of the room, and how it feels at various distances away from it. When he cannot think of anything else to examine, he gets a bucket of water and pours it into the basket, putting the fire out. The mathematician immediately reduces the fire to a problem he has already solved and forgets about it, continuing to walk on down the hall.

 

Of course the central issue of the joke is the fact that a professional engineer has little if any interest in why things are the way they are but only concerns himself with the quickest and best way to achieve the result he desires. The physicist wants to believe he understands the thing. Without understanding, what purpose does his knowledge serve? Finally, as Buffy has said, mathematics has absolutely nothing to do with reality.

 

What I am putting forth here are arguments concerning the fundamental underpinnings of science itself. Common science always makes the assumption that its current expectations for the future are substantially correct and examines its past (what it knows or thinks it knows) for events which contradict those expectations. Experimentalists don't really concern themselves with explaining these contradictions; they only search them out. It is supposedly the theorist who is charged with discovering an explanation which will remove the contradictions. If you look at the historical record, you will discover that very few people have been able to conceive of theories which yielded decent explanations, and they have usually all been hailed as geniuses.

 

The thing is that most people who try to come up with such solutions are seen as nuts. There is an old adage, “there is a fine line between genius and insanity”. That is a direct consequence of what I call the “by guess and by golly” approach to solving the problem. Good guesses are hard to come by and, even today, there are still quite a large number of problems with those wonderful guesses which those recognized geniuses have put forward. Professional physicists do not like to talk about such things because doubt of their beliefs is damaging to their authority; however, serious discussion of almost all such questions exists. Essentially these problems can be divided into two very specific sets: those where the physics theories seem to give the wrong answers and those which revolve around the assumptions upon which the theories rest.

 

If you read back over my general presentation, you should perceive that I have no concern at all with "wrong answers". I have specifically stated that I am only concerned with rules which the "flaw-free" explanations must obey: i.e., if they are flaw-free, they cannot give any "wrong answers" with regard to any known information. Everyone seems to jump to the conclusion that I am putting forth a theory here, whereas it should be clear to all of them that my attack is not designed to produce any theories of any kind; I am only concerned with the problem of expressing the logical constraints imposed by self consistency itself.

 

Absolutely the first issue which arises if one thinks about such things is the problem of induction. Look around on this very forum and you will find a large number of posts concerning the fact that induction can never be substantiated as logically sound. What I have asked is, “exactly what can one say, without bringing induction to the table”. The answer is quite simple and I think I have demonstrated the logic of that answer: any explanation of anything can be seen as a mathematical function [imath]\vec{\Psi}[/imath] which must obey the equation

[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = K\frac{\partial}{\partial t}\vec{\Psi}.[/math]

 

Now that is a nice deduction but relatively worthless without two very important factors. First, one needs to be able to solve the equation and second, the solutions must have some significance: i.e., one needs to establish that “concept-reality” connection which Buffy is so worried about. In order to make such a connection, one must relate solutions of that equation to actual facts of reality (and come up with a good reason for that relation). You simply cannot do that without knowing some solutions and anyone familiar with modern mathematics knows full well that finding a general solution to such an equation is essentially impossible. That expression is a partial differential equation constraining what amounts to, when all invalid ontological elements are included, an infinite number of variables. A general solution to such a differential equation is simply beyond the capability of modern mathematics.

 

There are a lot of differential equations in modern physics where general solutions can not be found. For example, Schroedinger's equation for the universe:

[math]\left\{\sum_{i=1}^n\left(-\frac{\hbar^2}{2m_i}\right)\vec{\nabla}_i^2+\sum_{i \neq j}^n V_{ij}(\vec{x}_i-\vec{x}_j)\right\}\Phi=i\hbar\frac{\partial}{\partial t}\Phi[/math]

 

can not be solved for [imath]\Phi[/imath] even if we happened to know the exact proper form of [imath]V_{ij}(\vec{x}_i-\vec{x}_j)[/imath]. In fact, we can't find a closed form general solution even for the case n=3. As a matter of fact, the theorem of "conservation of momentum" is the critical factor in allowing us to find a general solution to a two body problem. Without conservation of momentum, the only problem for which a general solution can be found, given the exact nature of that potential V, is the solution for one body. The rest of the "solutions" are obtained by a technique commonly referred to as perturbation theory. Some will point out that group theory can give exact results for many body circumstances; however, these are not bona fide "general" solutions but rather apply only to constrained cases.

 

What I am getting at is the fact that the life blood of physics (and the source of its apparent ability to explain almost all underlying phenomena) is approximation. I have a Ph.D. in theoretical physics and, in my experience, theoreticians have complete faith in the correctness of their underlying theories. Their real concern is, "given those theories, how do we calculate answers?" Notice that Richard Feynman got a Nobel prize for coming up with a notation for keeping track of terms of a perturbation series. I don't mean this as an insult, as he was one of the most brilliant theoretical physicists of his day. I bring it up only to point out just how important finding solutions is. To my knowledge, no modern physicists are thinking about "new" theories; they are all thinking in terms of perturbations on current theories. That is exactly what "dark matter" and "dark energy" are all about: they are no more than justifications for specific perturbations on current theories.
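Since perturbation series carry so much of the weight described above, a toy example may help: the first-order perturbative energy, [imath]E \approx E^{(0)} + \langle\psi^{(0)}|V|\psi^{(0)}\rangle[/imath], compared against brute-force diagonalization for a particle in a box with a small added potential. This is an editorial sketch in Python (units with [imath]\hbar = m = 1[/imath]; the potential and all numbers are arbitrary), not anything from the post itself.

```python
import numpy as np

# First-order perturbation theory vs. exact diagonalization for a particle
# in an infinite well with a small added potential. Units: hbar = m = 1.
N, L = 1000, 1.0
x = np.linspace(0, L, N + 2)[1:-1]          # interior grid points
dx = x[1] - x[0]

# Kinetic energy -(1/2) d^2/dx^2 by central finite differences
T = (np.eye(N) - 0.5*np.eye(N, k=1) - 0.5*np.eye(N, k=-1)) / dx**2
V = 0.1 * np.sin(np.pi * x / L)             # small perturbation
E_exact = np.linalg.eigvalsh(T + np.diag(V))[0]

# Unperturbed box ground state and its first-order energy shift <0|V|0>
psi0 = np.sqrt(2/L) * np.sin(np.pi * x / L)
E0 = np.pi**2 / 2                            # exact box ground-state energy
shift = np.sum(V * psi0**2) * dx
print(E_exact, E0 + shift)                   # agree to order V^2
```

The two numbers agree to second order in the perturbation strength; shrinking the 0.1 coefficient makes the agreement correspondingly better, which is the whole game of perturbative calculation.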

 

What I am getting at is the fact that I do not have any intentions of teaching physics and/or mathematics here. I will explain the exact logic behind any individual step in my deductions as that is a relatively short process; however, explaining physics and/or mathematics in general is a lifetime process. There are many physicists out there who have excellent comprehension of things like the Schroedinger Equation, Heisenberg uncertainty and how the factor [imath]\hbar[/imath] comes to be an important quantity.

 

Back in my first year of graduate school, I read “Mr Tompkins in Wonderland “ by George Gamow. It was written to explain the important physical constants by imagining alternate worlds where these constants are supposed to have very different values from those they have in our world. It is an excellent discourse on these various constants; however, Gamow has made some serious errors. In particular, his discourse on “Mr. Tompkins goes to Quantum Land” where [imath]\hbar[/imath] has a very large value is clearly just plain wrong (Mr. Tompkins goes on a tiger hunt and it talks about his problems which arise from Heisenberg uncertainty). At the time, I tried to work out what the world would really look like if [imath]\hbar[/imath] were large and I failed miserably. I could not find any starting point where some measurement would be totally and absolutely independent of [imath]\hbar[/imath]. That number so pervades physics that I eventually came to the conclusion that it was circularly defined. I talked to my advisor about it and he told me that my problem was that I didn't understand physics; there was absolutely no way it was circularly defined. If it were, physicists would be well aware of the fact.

 

In my derivation of Schroedinger's equation it is most certainly circularly defined as, in that derivation, it is a simple constant which is multiplied through my fundamental equation (its actual value is of utterly no consequence at all). If you use the standard value for [imath]\hbar[/imath] in that representation then the result is exactly the standard Schroedinger equation.

 

Logically, let's look at it and see what happens if one uses a different value. Suppose one uses ten times the standard value. From my definitions of m, c and V (presuming we have an established solution of my fundamental equation), m and V will be exactly ten times as large and c will have the same value as before. This yields [imath]E=mc^2[/imath] exactly 10 times as large. The resultant Schroedinger equation is exactly the same as before except that each term has been multiplied by ten. The ten is no more than a scale factor on energy; the physical problem being solved is identical. What I am getting at is the simple fact that modern physics is based on separate variables independently defined (through induction) which are actually related through my fundamental equation: i.e., these physical constants are a consequence of multiple effective definitions (they are essentially circularly or overly defined).
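The scale-factor claim above can be verified numerically. The following is an editorial sketch (with an arbitrary toy potential, not taken from the post): build a finite-difference Hamiltonian [imath]-\frac{\hbar^2}{2m}\nabla^2 + V[/imath], then multiply [imath]\hbar[/imath], m and V each by ten; the eigenstates are unchanged and every energy is scaled by ten, i.e. the same physical problem.

```python
import numpy as np

# Finite-difference Hamiltonian for a particle on [0, L] with potential
# v0*sin(pi x / L). Scaling hbar, m and V together by 10 multiplies H by 10:
# identical eigenstates, every energy scaled by the same factor.
def spectrum(hbar, m, v0, N=400, L=1.0):
    x = np.linspace(0, L, N + 2)[1:-1]
    dx = x[1] - x[0]
    t = hbar**2 / (2 * m * dx**2)            # kinetic hopping coefficient
    H = (2*t*np.eye(N) - t*np.eye(N, k=1) - t*np.eye(N, k=-1)
         + np.diag(v0 * np.sin(np.pi * x / L)))
    return np.linalg.eigh(H)

E1, psi1 = spectrum(1.0, 1.0, 5.0)
E2, psi2 = spectrum(10.0, 10.0, 50.0)        # hbar, m, V each ten times larger
print(np.allclose(E2, 10 * E1))              # pure scale factor on energy
print(np.allclose(np.abs(psi2), np.abs(psi1)))  # same physical states
```

The absolute values are compared because an eigensolver may flip the overall sign of an eigenvector; physically the states are identical, which is the point being made about the rescaled constants.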

 

Essentially, what I am saying is that, by showing Schroedinger's equation is an approximation to my equation, all of the physics derivable from Schroedinger's equation is derivable (as an approximation) from my equation. Since all of Newtonian mechanics is derivable from Schroedinger's equation, all of Newtonian mechanics is likewise derivable from my equation. Another way of looking at this is to realize that the semi validity of Newtonian mechanics tells us absolutely nothing about reality; it is no more than a consequence of requiring our explanation of reality to be internally self consistent. This itself is a profound philosophical statement.

 

Have fun -- Dick


Hi Bombadil, sorry about being so slow in my response. For a number of reasons, I wasn't exactly sure how I should respond. Since I have had a number of household projects underway and not a lot of time to think about my response, I have been sort of waiting for Anssi to get free enough to continue our conversation.

 

And it shouldn't take long now. Technically, the next week is still supposed to be busy for me (another extension for our schedules), but it seems like I'm starting to have some more free time on my hands already; actually have had some time to get back to the posts where I left.

 

I've read all the posts and I have to say that everything you are saying makes a lot of sense to me, but I definitely need to get myself up to speed with all the math.

 

One finding that is somewhat surprising to me is - what you've mentioned a couple of times - that it seems newtonian mechanics is a "consequence of requiring our explanation of reality to be internally self consistent". It seems like a logical finding, but I need to understand the math better to see it for myself.

 

Why it's surprising is that I would have just assumed that an important factor in coming up with a "newtonian worldview" is also our ability to define (/identify) ontological elements freely; i.e. that a very specific type of classification AND self-consistency yields newtonian behaviour to those defined elements (and that it just happens to be among the simplest views prediction-wise... in some sense that I have not understood yet).

 

What I'm asking is, isn't it possible to be self-consistent with more complicated worldviews as well? I.e., that it is not self-consistency alone that yields newtonian mechanics?

 

I have to look at the math more to see these things in detail for myself though...

 

Back in action soon,

-Anssi

