This is the final crux of my proof of the fact that any flaw-free explanation can be represented by a mathematical function which is required to satisfy my “fundamental equation”:

[math]

\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi} = K\frac{\partial}{\partial t}\vec{\Psi}.

[/math]

Understanding this post depends upon the reader possessing a competent understanding of the two previous posts, “Laying out the representation to be solved” and “Conservation Of Inherent Ignorance!”

I will summarize those two posts here; however, that summary will omit some subtle issues covered in the original posts. The post “Laying out the representation to be solved” essentially lays out a specific defined notation which is capable of representing any conceivable circumstances.

Step I of my presentation is the simple fact that any circumstance can be represented via a notation consisting of a collection of numerical indices expressed by [math](x_1,x_2,\cdots,x_n,\cdots,t)[/math]. The central question here is, does there exist any communicable circumstance conceivable which cannot be represented by a computer file? (As an aside, if it is not communicable, there can be no reason to discuss it.) Any computer file can certainly be written as a collection of such packets of numerical references.

Step II of that post is little more than pointing out that one's expectations (engendered by understanding an explanation) can be seen as a collection of probabilities of truth specified for each and every conceivable circumstance: i.e., if you understand an explanation, your **expectations** of truth for any specified circumstance can be represented by a number bounded by zero and one (the definition of a probability). Thus it is that one's understanding of any explanation can be represented by a mathematical function: i.e., it is no more than the conversion of one set of numbers into another (circumstances into probabilities).

[math]

P(x_1,x_2,\cdots,x_n,\cdots,t)=\vec{\Psi}^\dagger(x_1,x_2,\cdots,x_n,\cdots,t)

\cdot \vec{\Psi}(x_1,x_2,\cdots,x_n,\cdots,t)

[/math]

where the form, [math]\vec{\Psi}^\dagger \cdot \vec{\Psi}[/math], is little more than a way of handling the requirement that [math]0\leq P\leq 1[/math] by definition. Thus it follows that every conceivable explanation can be mapped into a function of the form [math]\vec{\Psi}[/math].
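Though no code appears in these posts, the bound can be illustrated with a small numerical sketch (the circumstances and amplitude vectors below are hypothetical stand-ins, not any particular explanation): any complex vector yields a non-negative [math]\vec{\Psi}^\dagger \cdot \vec{\Psi}[/math], and normalizing over the full collection of circumstances keeps each probability at or below one.

```python
# A minimal sketch: expectations for a handful of circumstances as
# complex vectors Psi, with P = Psi_dagger . Psi.  Non-negativity is
# automatic; P <= 1 follows once the vectors are normalized over all
# circumstances.
import math

def p_of(psi):
    """Psi_dagger . Psi for a vector psi (a list of complex numbers)."""
    return sum((c.conjugate() * c).real for c in psi)

# hypothetical, arbitrary expectation vectors for three circumstances
raw = {
    (1, 2, 0): [0.3 + 0.4j, 0.1j],
    (1, 3, 0): [1.0 + 0.0j],
    (2, 2, 1): [0.5, 0.5 - 0.2j],
}

total = sum(p_of(v) for v in raw.values())
norm = math.sqrt(total)
psi = {c: [comp / norm for comp in v] for c, v in raw.items()}

probs = {c: p_of(v) for c, v in psi.items()}
assert all(0.0 <= p <= 1.0 for p in probs.values())
assert abs(sum(probs.values()) - 1.0) < 1e-12
```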

The post “Conservation Of Inherent Ignorance” essentially takes note of the fact that the arguments, [math](x_1,x_2,\cdots,x_n,\cdots,t)[/math], of that function are no more than numerical labels for the significant elements underlying that explanation and are thus absolutely and totally arbitrary: i.e., the definitions lie in the explanation, not in the actual numerical labels used to represent them. This leads to the validity of what is called “shift symmetry” in the representation of the arguments of [math]\vec{\Psi}[/math]. That fact yields a constraint on the acceptable form of the functional representation: i.e., it requires that [math]\vec{\Psi}[/math] obey the following equations:

[math]

\sum^n_{i=1} \frac{\partial}{\partial x_i}\vec{\Psi}(x_1,x_2,\cdots,x_n,t)=ik\vec{\Psi}(x_1,x_2,\cdots,x_n,t)

[/math]

and

[math]

\frac{\partial}{\partial t}\vec{\Psi}(x_1,x_2,\cdots,x_n,t)=iq\vec{\Psi}(x_1,x_2,\cdots,x_n,t).

[/math]

**The final required constraint: representing "Rules" mathematically**

There exists one additional constraint upon the function [math]\vec{\Psi}[/math] which should be seen as very important. That constraint has to do with the fact that every explanation (save one) expresses “rules” which must be obeyed. If we are going to bring that set of rules into a mathematical form without presuming what those rules are, we have a rather difficult task to accomplish. I will now present an approach which will accomplish that result.

**Step I:** An examination of the “what is” is “what is” explanation.

The “what is” is “what is” explanation is essentially an explanation which expresses no rules: i.e., in essence the implied expectation is that “what happens” is “what happens” and no understanding is possible. The idea is that the information upon which it is based, “what is” (what I have defined to be “the past”), has utterly no bearing upon what should be expected. Nevertheless, the case is interesting as it still expresses knowledge of the past, “what is”. Thus it is perhaps the simplest situation to analyze regarding the problem of expressing that past in a clear mathematical form. We may not have any solid expectations, but we still apparently have, in our mind's eye, a defined past: i.e., the fundamental elements referred to by the “i” index are defined, as are the circumstances defined by the “t” index.

That past can be seen as a collection of circumstances, [math](x_1,x_2,\cdots,x_n)[/math] indexed by “t”. Since it is the past (what is presumed to be known: i.e., “what is”), the probability of each of those circumstances is clearly “one” and all other possibilities have a probability of “zero” (they are not part of “what is”). That is the extent of our knowledge of the situation.

Since the number of elements with probability “one” is finite, we can certainly list them in a file along with their specified probability. This is quite analogous to what, in the days prior to computers, was referred to as a tabular representation of a function (note that, in this mental picture, both “x” and “i” indices have been assigned). Clearly, the function represented by that table is identical to the function supposedly representing the explanation, no matter what that explanation might be. Since this representation has nothing to do with the explanation itself (other than the fact that what is known is defined and represented), some interesting questions arise directly from the representation itself.

My first question here concerns the issue of recovering the “t” index. If the “t” index were to be omitted, could we establish such an index from the table of circumstance? It should be recognized here that the actual value of that index is immaterial. Regarding the “what is” is “what is” explanation, the past is what the past is and the order you put the circumstances in has utterly no bearing on the issue. Thus the only issue of importance here is that every supposed “circumstance” must have a different attached index.

If every circumstance in our table is different from every other, there is no problem; each must have a different “t” index. A problem occurs when we have two identical circumstances. That situation is a little more complex than it appears to be on first examination. In constructing the table, actual values are assigned to the x and i numerical labels (what they are supposed to be referring to is, at the moment, immaterial) by whoever it is that is representing their explanation; however, the symmetries discussed in the “Conservation of Ignorance” post must still be applicable as the assignment of those labels is arbitrary.

Since the “t” index separates every circumstance into an explicitly different case, the shift symmetry can be used to set one “x” index to be the same in every explicit circumstance and scale symmetry can be used to set a second to be the same. In essence, it is not the actual values of the assigned x indices but rather the internal patterns which are significant. Thus, as stated above, a problem arises when two circumstances are represented by identical patterns.

That situation can be removed via the introduction of “hypothetical elements”: i.e., elements not actually part of the information standing behind the explanation but rather, elements presumed to exist by the explanation. (Note that their existence is implied by the existence of identical circumstances themselves; otherwise the identical circumstance would create no problems.) It should be clear that it is always possible to add hypothetical elements sufficient to make every explicit circumstance in the table different.

A rather interesting characteristic of the table as constructed reveals itself. From the original table together with the added “hypothetical elements”, a new table can be constructed where the “t” index is omitted and is instead represented by the function, [math]t(x_1,x_2,\cdots,x_{n+k})[/math], which yields the value of the “t” index associated with the represented circumstance. Thus we can construct a new table where the value of the “t” index can be seen as embedded in the underlying circumstances themselves. Since the index “t” is now (via the addition of hypothetical elements) embedded in the new table, this new table of circumstances (sans the “t” index) is, in a sense, equivalent to the original table. The “t” index has been replaced by those hypothetical elements required to make every circumstance unique.
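As a sketch of that uniquification step (the circumstances and label values below are entirely hypothetical), duplicate rows can be made distinct by appending a hypothetical element, after which the “t” index is recoverable as an ordinary function of the augmented circumstance:

```python
# A sketch of the procedure described above: duplicate circumstances get
# an extra "hypothetical element" so that every row becomes unique,
# after which "t" is an ordinary function of the circumstance itself.
past = [(1, 2), (3, 4), (1, 2), (3, 4), (5, 6)]  # two duplicated circumstances

seen = {}
table = []
for t, circ in enumerate(past):
    count = seen.get(circ, 0)
    seen[circ] = count + 1
    # append a hypothetical element whose label differs for each repeat
    table.append((circ + (100 + count,), t))

augmented = [row for row, _ in table]
assert len(set(augmented)) == len(augmented)  # every row is now unique

# "t" is now recoverable as a function of the augmented circumstance
t_of = {row: t for row, t in table}
assert t_of[(1, 2, 101)] == 2
```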

Exactly this same procedure can be used to produce a table expressing a function which yields the value of some removed “x” index. For example, if we remove the [math]x_1[/math] index from all circumstances in the new table and set the function represented by that table to be exactly [math]x_1[/math], we then have a table representing the function

[math]

x_1=g(x_2,x_3,\cdots,x_{n+q}).

[/math]

As in the first case, we must ensure that every pattern [math](x_2,x_3,\cdots,x_{n+q})[/math] is unique. That result can be accomplished by adding “hypothetical elements” to the collections representing the circumstances of interest. The real thing of significance here is that we know the function [math]x_1=g(x_2,x_3,\cdots,x_{n+q})[/math] exists. If that function exists, there exists another function of great interest. Define [math]F(x_1,x_2,x_3,\cdots,x_{n+q})[/math] to be

[math]

F(x_1,x_2,x_3,\cdots,x_{n+q})=x_1-g(x_2,x_3,\cdots,x_{n+q}).

[/math]

That function clearly vanishes for every valid entry to the associated table of circumstances (which, by the way, includes all the required hypothetical elements).
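A minimal sketch of that construction, using a hypothetical three-argument table: because every pattern of remaining arguments is unique, g is a well-defined lookup, and F vanishes on every table entry.

```python
# A sketch of F(x1, ..., x_{n+q}) = x1 - g(x2, ..., x_{n+q}) built from a
# finite table of hypothetical data.  Every pattern of remaining
# arguments is unique, so g is well defined; F vanishes on every entry.
rows = [(7, 1, 2, 100), (9, 1, 2, 101), (4, 3, 5, 100)]  # (x1, x2, x3, hyp)

g = {row[1:]: row[0] for row in rows}
assert len(g) == len(rows)  # the remaining-argument patterns are all unique

def F(row):
    return row[0] - g[row[1:]]

assert all(F(row) == 0 for row in rows)  # F vanishes on every valid entry
```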

Note that the table representing that function still has exactly the same number of entries as did the original table which represented the information upon which our explanation is based, so it is still a finite table. However, since the collection of all possible circumstances (the collection for which our explanation was to yield our expectations) is infinite, the function representing our explanation is still essentially wide open: i.e., in order to obtain expectations for circumstances not represented in the table, we must perform some kind of interpolation based upon the constructed table.

What this means is that [math]\vec{\Psi}(x_1,x_2,x_3,\cdots,x_n,t)[/math] is still a totally open function, except that the probability cannot be inconsistent with any case represented by the table upon which our explanation is based (otherwise the explanation would be flawed) and must also yield exactly the same expectations as the represented explanation for every circumstance not known (including consistency with each “t” index given the absence of all circumstances greater than or equal to that “t” index). On the other hand, we now know that there must exist a function [math]F(x_1,x_2,x_3,\cdots,x_{n+q})[/math] which vanishes for every valid circumstance. Again, all we have is a finite table of that function, and the actual function itself must be obtained via interpolation.

The “what is” is “what is” explanation clearly fulfills all the specified requirements: i.e., it is a valid flaw free explanation of anything. The function “F” must vanish for all valid circumstances and, since the explanation presumes absolutely anything is deemed possible, “F” must also vanish for all other circumstances, not just the known circumstances. Since there are an infinite number of possibilities and all are equally possible, the probability of any given circumstance must be zero. And, since any circumstance is possible, the result of any experiment (observation of a new “t” circumstance greater than the previously known “t”) is totally consistent with the predicted expectations no matter what happens.

Clearly [math]F\equiv 0[/math] represents the function F required by the “what is” is “what is” explanation. The only real problem with that explanation is that it is not a particularly valuable explanation of anything.

**Step II:** Extension to more valuable expectations.

It should be clear here that what we actually desire is a function “F” which vanishes for every possible valid circumstance and is non-zero for every invalid circumstance. Any reasoning person should comprehend that there exists no way to guarantee that any function can be known to satisfy such a proposition, as doing so would require one to be “all-knowing”, and that would require an infinite amount of information.

Any attempt to discover the correct algorithm (that would be the [math]F(x_1,x_2,x_3,\cdots,x_n)[/math] which vanishes for all possible valid circumstances and only for those) is doomed to failure. For example, consider the fact that a mathematical fit can be made to any finite collection of known data plus any random additional data. It follows directly from that possibility that there must exist an infinite number of functions that fit the known data exactly. This means that we can expect no more than undefendable approximations to truth so long as the data available is finite.
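A concrete instance of that point: the two polynomials below (chosen purely for illustration) agree exactly on a finite data set yet disagree at any new point, so finite data can never single out one function.

```python
# A sketch: two different functions that fit a finite data set exactly
# but yield different values elsewhere.
data = [(0, 0), (1, 1), (2, 2)]

def f1(x):
    return x

def f2(x):
    # adds a term that vanishes at every known data point
    return x + x * (x - 1) * (x - 2)

assert all(f1(x) == y and f2(x) == y for x, y in data)
assert f1(3) != f2(3)  # yet they give different "expectations" at a new point
```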

Meanwhile, there is a side issue which needs to be brought up somewhere, and this is perhaps the best place. When people start reading about my notation, [math](x_1,x_2,x_3,\cdots,x_n)[/math], they invariably presume that it can be seen as a set of points on the x axis. That presumption is inherently false, as each of the indices [math]x_i[/math] is actually a numerical label and not a measurement of any kind. On the other hand, given a specific assigned set of such numerical labels, it is mentally convenient to think of it as a set of points on the x axis. When it comes to the actual facts, however, such a mental mapping cannot be valid.

Doctordick, on 05 Aug 2010 - 2:24 PM, said:

I know that almost everyone who reads this is going to jump to the conclusion that the collection of arguments represented by the specific circumstances [math](x_1,x_2,\cdots ,x_n)[/math] can be seen as a set of points on the x axis. That assumption is patently false as I will show in a following post which I will title "A Universal Representation of Rules".

The problem is that in any case where [math]x_i=x_j[/math], mapping the information onto the x axis loses information as the existence of multiple elements vanishes from the data. On the x axis, [math]x_i[/math] and [math]x_j[/math] will map to the same point: i.e., a collection of points on the x axis can not represent such a circumstance. I bring this difficulty up here because we have, above, just discussed a means of overcoming this problem.

A visual picture of the data would be nice, if we could create such a thing without making any presumptions concerning the explanation. In essence, we need a way of visually displaying multiple points with identical x values. There is a very simple way to display such a thing. It can be done by adding “hypothetical data”, or, in this case a hypothetical [math]\tau[/math] axis perpendicular to the x axis: i.e., allowing every [math]x_i[/math] point to be represented by the point [math](x_i,\tau_i)[/math] in an x, tau space.
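A sketch of that construction (the tau label values below are arbitrary fabrications): repeated x labels, which would collapse to a single point on the x axis, become distinct points [math](x_i,\tau_i)[/math] in the x, tau plane.

```python
# A sketch of the hypothetical tau axis: each repeated x label gets a
# different tau label, so no information is lost to coincident points.
xs = [2, 2, 5, 2]

tau_count = {}
points = []
for x in xs:
    tau = tau_count.get(x, 0)   # arbitrary tau label for this repeat of x
    tau_count[x] = tau + 1
    points.append((x, tau))

assert points == [(2, 0), (2, 1), (5, 0), (2, 2)]
assert len(set(points)) == len(xs)  # every element is a distinct point
```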

It should be realized that, having added hypothetical variables (both here and in the discussion above), a very serious question arises. We are as free to assign numerical [math]\tau_i[/math] labels as we were to assign the numerical [math]x_i[/math] labels. The problem arises when we consider the mathematical means to be used to calculate the probability of specific circumstances implied by our explanation. The hypothetical elements discussed above may or may not exist, and the mechanism to handle them is quite straightforward. If they actually exist, values will eventually appear. Until that time, in calculating probabilities of specific circumstances, we must integrate [math]\vec{\Psi}^\dagger\cdot\vec{\Psi}[/math] over all possibilities regarding these hypothetical elements. In contrast to that, the underlying [math]\tau_i[/math] data have been completely fabricated, and it should be clear that no actual value can ever be known. This clearly requires that the probability calculations always be integrated over all possibilities regarding these variables. Other than that requirement, the representation is really no different from the earlier “hypothetical elements”.
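The integration requirement can be sketched numerically (the amplitudes below are hypothetical): since the [math]\tau_i[/math] labels are fabricated, the probability of an x circumstance is obtained by summing [math]\vec{\Psi}^\dagger\cdot\vec{\Psi}[/math] over every tau possibility.

```python
# A sketch of integrating over the fabricated tau possibilities: the
# probability of an x value is the sum of Psi_dagger . Psi over all tau.
psi = {  # (x, tau) -> a hypothetical amplitude, normalized below
    (1, 0): 0.6, (1, 1): 0.2j,
    (2, 0): 0.5, (2, 1): 0.1 - 0.1j,
}
total = sum((a.conjugate() * a).real for a in psi.values())
psi = {k: a / total ** 0.5 for k, a in psi.items()}

def prob_x(x):
    """P(x): Psi_dagger . Psi summed over every tau possibility."""
    return sum((a.conjugate() * a).real for (xx, _), a in psi.items() if xx == x)

assert abs(prob_x(1) + prob_x(2) - 1.0) < 1e-12  # tau has been integrated out
```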

Returning to the discussion prior to the addition of the tau axis, it is interesting that we can assert the following:

Quote:

The algorithm we are searching for may vary from time to time but it must depend on the data received to date and the method of determining it must be independent of time. I hold that the above is the only valid statement of any problem confronting scientific analysis.

**Any attempt to bestow structure on any solution of any problem beyond that contained in the above statement is to presume facts neither evident nor defendable!**

The “rules” standing behind our explanation can be mathematically expressed by

[math]

F(\vec{x}_1,\vec{x}_2,\vec{x}_3,\cdots,\vec{x}_n)=0

[/math]

where [math]\vec{x}_i \equiv x_i\hat{x}+\tau_i\hat{\tau}[/math] in the hypothetical x, tau space. All we really know is that such a function must exist. So long as our table of data is finite, there will exist an infinite number of functions which will fit the bill exactly.

However, there does exist a subtle possibility here. In finding a function which fits the finite table we have constructed, there does exist the possibility that a proposed function is indeed the correct function. We cannot prove that function is correct but, since it fits all the known data, neither can we prove it is wrong. What is interesting about this possibility is that we can examine some of the consequences to be expected and the difficulties to be handled as the above table representation expands towards infinity.

There is one very important issue which arises in such an examination. That is the fact of the increasing number of circumstances in the table. Certainly, so long as the number of elements defining a circumstance and the number of circumstances themselves are finite, the procedure defined above can be accomplished; however, no matter how many we have, we must always admit the possibility of one more circumstance. That is the very definition of infinite. If we do indeed have the correct function, the relationships used cannot be destroyed by the continuity implied by that infinite result. This places some subtle constraints on F.

In generating our “what is” is “what is” table for the function F, we added hypothetical elements. We added the hypothetical tau axis in order to allow representation of identical positions on the x axis. One of the consequences of that step was that, in calculating the probabilities of our expectations, we had to integrate over all [math]\tau_i[/math] values. That essentially says that the tau axis plays no role in the development of F: i.e., it is not an aspect of adding hypothetical elements necessary to make every circumstance in the table of known circumstances different. So there is no issue regarding the extension of the continuity of the tau variables.

The infinite limit in the x case is not so trivial. Extending F to the limit of infinite data would cause the x variables to be continuous, and that continuity brings a bit of a problem into the procedure of adding hypothetical elements. The single most significant step in generating that table of F was adding hypothetical elements such that all circumstances represented in the table were different. When the number of elements in that table is extended to infinity, we run directly into Zeno's paradox. We cannot list an infinite number of cases; thus, in the limit, we cannot know that every x argument in every listed circumstance is different from every other x argument in that circumstance. The argument for hypothetical elements being able to differentiate between circumstances fails.

Once again, the problem has a simple solution: all we need do is require the function [math]\vec{\Psi}[/math] to be antisymmetric with respect to exchange of any pair of elements. Mathematically, that means that for any i,j pair,

[math]

\vec{\Psi}(\vec{x}_i,\vec{x}_j)=-\vec{\Psi}(\vec{x}_j,\vec{x}_i).

[/math]

Note that, in the above, only the arguments [math]\vec{x}_i[/math] and [math]\vec{x}_j[/math] are shown; all the rest are presumed to be the same as before and are therefore not shown.

Notice that [math]\vec{\Psi}=0[/math] whenever [math]\vec{x}_i=\vec{x}_j[/math], as zero is the only number equal to its negative. This type of antisymmetry is exactly what stands behind what is called the Pauli exclusion principle.

What it guarantees is that no two elements in this x, tau space can be in the same place for a specific t index (remember the x indices are mere labels and when they are the same what they represent must be identical). Another way to express the same thing is to assert that all hypothetical elements used to generate F must obey Fermi-Dirac statistics. This will eliminate the problem with continuity of x and the existence of F.
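A small sketch of that exchange behaviour (the function f below is an arbitrary placeholder, not any particular [math]\vec{\Psi}[/math]): antisymmetrizing any two-argument function forces it to vanish whenever the two labels coincide.

```python
# A sketch of exchange antisymmetry: psi(i, j) = -psi(j, i) forces
# psi to vanish whenever the two element labels are identical.
def f(xi, xj):           # an arbitrary, hypothetical two-argument function
    return 3 * xi + xj * xj

def psi(xi, xj):         # antisymmetrized version
    return f(xi, xj) - f(xj, xi)

assert psi(1.0, 4.0) == -psi(4.0, 1.0)
assert psi(2.5, 2.5) == 0.0  # identical labels are automatically excluded
```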

There is a subtle thing going on here. The existence of F is a consequence of our ability to add hypothetical elements which will make every entry to the “what is” is “what is” table unique. The possibility of also adding hypothetical elements which lend nothing to that end also exists. The subtle consequence is that these elements may have nothing to do with establishing the existence of F but none the less influence the form of F. We once again come to the conclusion that there are most probably an infinite number of functions F which fit the given information exactly but yield different probabilities for the new (or unknown) data: i.e., there exist many different explanations even in the continuous infinite limit.

As a side note (at this point), since it was the antisymmetry under exchange which generated the required vanishing of identical positions in x, tau space, exchange symmetry (rather than antisymmetry) must be the characteristic of those additional elements which serve only to yield different probabilities. In essence, an infinite number of exchange-symmetric elements may be added to the mix in order to adjust the calculated probabilities to the probabilities implied by the explanation. As opposed to the earlier elements, which caused F to fit the underlying data, these additional elements must obey Bose-Einstein statistics.

As we still have an infinite number of possibilities which fully fulfill the requirements of a flaw-free explanation, it is valuable to examine possibilities which can be eliminated through the symmetry requirements discussed in the “Conservation of Ignorance” post. First, the same shift symmetry which exists in x must also exist in the hypothetical tau axis. That fact leads to the constraint on [math]\vec{\Psi}[/math] that

[math]

\sum^n_{i=1} \frac{\partial}{\partial \tau_i}\vec{\Psi}(\tau_1,\tau_2,\cdots,\tau_n,t)=im\vec{\Psi}(\tau_1,\tau_2,\cdots,\tau_n,t)

[/math]

where the arguments [math]x_i[/math] still exist but have not been explicitly written down. By defining [math]\vec{\nabla}_i=\hat{x}\frac{\partial}{\partial x_i}+\hat{\tau}\frac{\partial}{\partial \tau_i}[/math] and [math]\vec{k}=k\hat{x}+m\hat{\tau}[/math] the required conservation constraint implied by x and tau shift symmetry can be written in a two dimensional form

[math]

\sum^n_{i=1} \vec{\nabla}_i\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t)=i\vec{k}\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t).

[/math]

There is also another very subtle consequence of shift symmetry which concerns the form of the arguments of F. The existence of shift symmetry in both the x and tau dimensions (since we are now viewing the circumstances as a collection of points in the x, tau space) means that the origin must be a free parameter: i.e., changing the presumed origin in that space yields no consequences in the evaluation of F. This means that the information contained in the set of arguments [math](\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n)[/math] is identical to the information contained in the set of arguments consisting of the entire collection of differences between [math]\vec{x}_i[/math] and [math]\vec{x}_j[/math].

If we have all [math]\vec{x}_i[/math] arguments for a particular circumstance, the construction of all [math]\vec{x}_i -\vec{x}_j[/math] for that same circumstance is a trivial problem. Likewise, if we have all [math]\vec{x}_i -\vec{x}_j[/math] arguments for a particular circumstance, the construction of all [math]\vec{x}_i[/math] is rather easily achieved so long as the position of the origin is a free parameter. It may not be as trivial a problem as the reverse, but anyone with a decent understanding of algebra should find the process quite straightforward.

That fact implies that we can rewrite our table of known F=0 points (known as a function of [math]\vec{x}_i[/math] arguments) as a new table of known F=0 points as a function of [math]\vec{x}_i -\vec{x}_j[/math] arguments. Adding scale symmetry to the representation (remember, these numbers are nothing more than numerical labels) there is another very important consequence of these symmetries.

We now have F being expressed in the x, tau space in terms of the differences [math]\vec{x}_i -\vec{x}_j[/math]. Let me again bring up the possibility of guessing the correct function F. If we have indeed guessed the correct function, then the predicted expectations for unknown circumstances will be correct all the way out to that infinite collection of circumstances. This fact can be seen from a slightly different perspective: **only the correct function** will continue to be correct throughout the entire process. This implies another required constraint.

The correct function must vanish for every specified point (i.e., the points allowed by the rule being represented by F) in that two dimensional space. The integration over all tau dependence has to do with the calculation of expectations, and not with the rule F is to represent. Thus ignoring how that representation was achieved, seen merely as a function defined over that x tau space, rotation in the plane of that space cannot change the function (all we really have is a set of points which are being used to define that function).

But rotation will convert tau displacement into x displacement. Since tau displacement is an entirely hypothetical component, F simply can not depend upon the actual tau displacement and by the same token neither can F depend upon actual x displacement. Since we have converted F into a function of distances between points, this essentially says that F can not depend upon the actual magnitude of these separations. This should be quite reasonable as, since we are talking about mere numerical labels, multiplication of all labels by some fixed constant cannot change what is being represented.

Either F simply vanishes and we have no rules (and the “what is” is “what is” explanation is the only valid explanation) or rules actually exist. If rules do indeed exist, F can not vanish for all circumstances: i.e., there must exist some circumstances which are impossible and [math]F\neq 0[/math] must be true for those circumstances. The only integrable function which does not depend upon the magnitude of its argument and still has a non zero value for some argument is the Dirac delta function [math]\delta(x)[/math], commonly defined as follows:

[math]

\int_a^b\delta(x-c)dx=1

[/math]

only if the range of integration includes c, and is zero if the range of integration does not. The value of the Dirac delta function is clearly zero everywhere except when the argument is zero, in which case it must be infinite. It is usually defined as the limit of an integrable function whose graph has a fixed area (unity) as the width of the nonzero region goes to zero.
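That limit construction can be sketched directly (the rectangle shape is one conventional choice among many): a rectangle of width w and height 1/w always integrates to one, while its value at any fixed nonzero x falls to zero as w shrinks.

```python
# A sketch of the delta function as the limit of a unit-area rectangle.
def delta_approx(x, w):
    """Rectangle of width w and height 1/w centered at zero."""
    return (1.0 / w) if abs(x) < w / 2 else 0.0

def integrate(fn, a, b, n=100000):
    """Simple midpoint-rule numerical integration."""
    h = (b - a) / n
    return sum(fn(a + (i + 0.5) * h) for i in range(n)) * h

# the area stays at unity no matter how narrow the rectangle becomes
for w in (1.0, 0.1, 0.01):
    assert abs(integrate(lambda x: delta_approx(x, w), -1, 1) - 1.0) < 1e-6

assert delta_approx(0.3, 0.01) == 0.0  # vanishes away from the spike
```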

Since [math]\delta(x)[/math] only has value for x=0, a power series expansion of F around a distribution satisfying F=0 implies that F may be written

[math]

F=\sum_{i\neq j} \delta(\vec{x}_i-\vec{x}_j) = 0.

[/math]
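Replacing the delta function with a narrow unit-area spike (a numerical stand-in only, not the true distribution) shows how such an F discriminates: the sum over pairs is zero when every point in the hypothetical x, tau plane is distinct and blows up when two coincide.

```python
# A sketch of F = sum over i != j of delta(x_i - x_j), with the Dirac
# delta replaced by a narrow unit-area spike in the x-tau plane.
def spike(dx, dtau, w=1e-3):
    inside = abs(dx) < w / 2 and abs(dtau) < w / 2
    return (1.0 / w ** 2) if inside else 0.0

def F(points):
    return sum(spike(xi - xj, ti - tj)
               for a, (xi, ti) in enumerate(points)
               for b, (xj, tj) in enumerate(points) if a != b)

assert F([(0.0, 0.0), (1.0, 0.0), (1.0, 2.0)]) == 0.0  # all points distinct
assert F([(0.0, 0.0), (0.0, 0.0)]) > 0.0               # a forbidden coincidence
```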

Thus it is that we come to the conclusion that any appropriate collection of rules can be expressed in terms of those hypothetical elements which can exist and that interactions at a distance in our hypothesized space can not exist. As an aside, it is interesting to note that Newton, in his introduction to his theory of gravity, made the comment that it was obvious that interactions at a distance were impossible. I have always wondered exactly what he had in mind when he said that. I take it to mean that, although field theories make some excellent predictions, they cannot be valid in the final analysis and are only an approximation to the correct result.

**The Final Conclusion**

At this point I have uncovered three specific mathematical constraints implied by the symmetries embedded in the representation of an explanation requiring expectations given by [math]P(x_1,x_2,\cdots,x_n,t)[/math] where

[math]

\int P(x_1,x_2,\cdots,x_n,t)dV_x= \int \vec{\Psi}^\dagger(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t) \cdot\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t)dV_{x\tau}

[/math]

where [math]dV_x[/math] and [math]dV_{x\tau}[/math] represent the abstract differential volume to be integrated over (both hypothetical elements and possible ranges of presumed valid elements). It is [math]\vec{\Psi}[/math] which represents the explanation.

From the analysis I have presented, the three required constraints are as follows.

[math]

\sum^n_{i=1}\vec{\nabla}_i\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t) = i\vec{k}\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t)

[/math]

[math]

\frac{\partial}{\partial t}\vec{\Psi}(x_1,x_2,\cdots,x_n,t)=iq\vec{\Psi}(x_1,x_2,\cdots,x_n,t).

[/math]

and the constraint required by there being rules governing which circumstances are possible: i.e., the requirement that there exist a function F which will discriminate between circumstances which can and cannot occur.

[math]

F=\sum_{i\neq j} \delta(\vec{x}_i-\vec{x}_j) = 0.

[/math]

These three mathematical constraints can be cast into a single mathematical constraining relationship via a rather simple mathematical trick. If one defines the following mathematical operators (both the definition of “[a,b]” and the specific alpha and beta operators):

[math]

[\alpha_{ix},\alpha_{jx}]\equiv \alpha_{ix}\alpha_{jx}+\alpha_{jx}\alpha_{ix}=\delta_{ij}

[/math]

[math]

[\alpha_{i\tau},\alpha_{j\tau}]=\delta_{ij}

[/math]

[math]

[\beta_{ij},\beta_{kl}]=\delta_{ik}\delta_{jl}

[/math]

[math]

[\alpha_{ix},\beta_{kl}]=[\alpha_{i\tau},\beta_{kl}]=0

[/math]

where [math]\delta_{ij}[/math] equals one if [math]i=j[/math] and zero if [math]i\neq j[/math]. This requires these mathematical operators to anti-commute with one another and requires their squares to be one half. These mathematical constructs form what is called a Clifford algebra, closely related to the Lie algebras named after Sophus Lie (pronounced “lee”). At the moment, we are only concerned with the anti-commutation property, as it allows us to mathematically wrap all four of the above constraints into a single equation for [math]\vec{\Psi}[/math].
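The algebra can be illustrated with a pair of concrete matrices (a sketch only: Pauli matrices scaled by [math]1/\sqrt{2}[/math]; a faithful representation of the full set of operators would require larger, tensor-product matrices): distinct operators anti-commute to zero and each square is one half.

```python
# A sketch of two operators satisfying the stated relations, built from
# Pauli matrices divided by sqrt(2): the anticommutator of distinct
# operators is zero and each operator squares to 1/2.
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

s = 1 / math.sqrt(2)
alpha_1 = [[0, s], [s, 0]]             # sigma_x / sqrt(2)
alpha_2 = [[0, -1j * s], [1j * s, 0]]  # sigma_y / sqrt(2)

# [alpha_1, alpha_2] = alpha_1 alpha_2 + alpha_2 alpha_1 = 0
anti = mat_add(mat_mul(alpha_1, alpha_2), mat_mul(alpha_2, alpha_1))
assert all(abs(anti[i][j]) < 1e-12 for i in range(2) for j in range(2))

# each square equals one half (times the identity)
sq = mat_mul(alpha_1, alpha_1)
assert abs(sq[0][0] - 0.5) < 1e-12 and abs(sq[1][1] - 0.5) < 1e-12
assert abs(sq[0][1]) < 1e-12 and abs(sq[1][0]) < 1e-12
```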

All we need do is require the constraint on both alpha and beta operators that their sums over all elements of every circumstance be zero; explicitly,

[math]

\left\{\sum_i \vec{\alpha}_i \right\}\vec{\Psi}= \left\{\sum_{i\neq j}\beta_{ij}\right\}\vec{\Psi}= 0

[/math]

where [math]\vec{\alpha}_i = \hat{x}\alpha_{ix}+\hat{\tau}\alpha_{i\tau}[/math]. (Note that this vector construct lies in the x, tau space, not in the abstract space of [math]\vec{\Psi}[/math].) We then add the simple constraint that we are working with [math]\vec{\Psi}[/math] expressed in the specific x, tau space where the sum of the “momentum” of all the elements in every circumstance is zero. (Note that this is actually no constraint on the problem: once we have a solution [math]\vec{\Psi}[/math] expressed in that space, a simple Fourier transform can be used to produce the solution in any other frame of reference.)

[quote]It is a trivial matter to convert a solution of [math]\sum_i\frac{\partial}{\partial x_i}\Psi_0 =0[/math] into a solution of [math]\sum_i\frac{\partial}{\partial x_i}\Psi_1 =iK_x\Psi_1[/math]. Simple substitution will confirm that if [math]\Psi_0[/math] is a solution of the first equation, then [math]\Psi_1=e^{iK_x\sum_j x_j/n}\,\Psi_0[/math] (where [math]i=\sqrt{-1}[/math]) is a solution of the second. Exactly the same relationship holds for the equation on tau.[/quote]
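The quoted substitution is easy to verify symbolically. The following sketch, not part of the original post, checks it with sympy for the sample case n = 2, using an arbitrary function of (x1 − x2) as the starting solution (my choice of example):

```python
import sympy as sp

x1, x2, Kx = sp.symbols('x1 x2 K_x', real=True)
n = 2  # two elements, for concreteness

# Any function of (x1 - x2) solves (d/dx1 + d/dx2) Psi0 = 0.
Psi0 = sp.sin(x1 - x2)
assert sp.simplify(sp.diff(Psi0, x1) + sp.diff(Psi0, x2)) == 0

# The proposed substitution: Psi1 = exp(i K_x (x1 + x2)/n) * Psi0
Psi1 = sp.exp(sp.I * Kx * (x1 + x2) / n) * Psi0

# Check that Psi1 solves (d/dx1 + d/dx2) Psi1 = i K_x Psi1.
residual = sp.diff(Psi1, x1) + sp.diff(Psi1, x2) - sp.I * Kx * Psi1
assert sp.simplify(residual) == 0
print("substitution verified for n = 2")
```

The exponential factor contributes i K_x/n from each of the n coordinates, which sums to exactly i K_x; the same bookkeeping works for any n and equally for the tau equation.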

The equation of interest is the following:

[math]

\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i+\sum_{i\neq j}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}= K\frac{\partial}{\partial t}\vec{\Psi}=iKq\vec{\Psi}

[/math]

Note that [math]\delta(\vec{x}_i -\vec{x}_j)\equiv \delta(x_i -x_j)\delta(\tau_i -\tau_j)[/math].

It is almost trivial to prove that the above equation satisfies the constraints laid out above. First, the right-hand equality, divided by K, is exactly the constraint

[math]

\frac{\partial}{\partial t}\vec{\Psi}(x_1,x_2,\cdots,x_n,t)=iq\vec{\Psi}(x_1,x_2,\cdots,x_n,t).

[/math]

on each component of [math]\vec{\Psi}[/math] in the abstract vector space of interest. I will now explicitly show the algebra necessary for the remainder of the proof.

First, multiply the equation of interest from the left by [math]\alpha_{kx}[/math]. Whatever k is chosen, the operator [math]\alpha_{kx}[/math] appears in the original equation exactly once: in the term where i=k. By definition, [math]\alpha_{kx}[/math] anti-commutes with every other alpha and beta operator in the equation; for the one term where i=k, the defining relation gives [math]\alpha_{kx}\alpha_{ix}=1-\alpha_{ix}\alpha_{kx}[/math]. Thus, as [math]\alpha_{kx}[/math] is commuted to the far right (so as to operate directly on [math]\vec{\Psi}[/math]), every term on the left-hand side simply changes sign, and one additional term is generated: the i=k term is duplicated without any alpha operator. The result of the multiplication is

[math]

-\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i+\sum_{i\neq j}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\alpha_{kx}\vec{\Psi}+\frac{\partial}{\partial x_k}\vec{\Psi}= K\frac{\partial}{\partial t}\alpha_{kx}\vec{\Psi}=iKq\alpha_{kx}\vec{\Psi}

[/math]

If one then sums the resulting equation over k, every term vanishes (because the sum of the alpha operators over all elements, acting on [math]\vec{\Psi}[/math], is zero) except the one term, [math]\frac{\partial}{\partial x_k}\vec{\Psi}[/math], which carries no alpha or beta operator. The final result of that sum over k becomes,

[math]

\sum_k \frac{\partial}{\partial x_k}\vec{\Psi}=0.

[/math]

Exactly the same thing happens when we multiply the original equation by [math]\alpha_{k\tau}[/math] and again sum over k. These two operations taken together yield exactly the constraint

[math]

\sum^n_{i=1}\vec{\nabla}_i\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t) = i\vec{k}\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t)

[/math]

when [math]\vec{k}=0[/math]: i.e., when the total momentum in the x, tau space vanishes.

Left multiplication of the original equation by the [math]\beta_{kl}[/math] operator, followed by a sum over k and l (where [math]k \neq l[/math]), results in exactly the final constraint.
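The beta step uses the same algebraic identity as the alpha step, with the pair index (k,l) in place of k. The sketch below (again not part of the original post) checks it numerically for three elements, with scalar coefficients d standing in for the delta functions and a Jordan-Wigner matrix realization of my own choosing for the beta operators.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_chain(mats):
    out = np.eye(1, dtype=complex)
    for m in mats:
        out = np.kron(out, m)
    return out

def anticommuting_set(count):
    """`count` operators with a_i a_j + a_j a_i = delta_ij (Jordan-Wigner)."""
    modes = (count + 1) // 2
    ops = []
    for k in range(modes):
        pre, post = [Z] * k, [I2] * (modes - k - 1)
        ops.append(kron_chain(pre + [X] + post) / np.sqrt(2))
        ops.append(kron_chain(pre + [Y] + post) / np.sqrt(2))
    return ops[:count]

# Three elements -> six ordered pairs (i, j) with i != j.
pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
ops = anticommuting_set(len(pairs))
beta = {p: op for p, op in zip(pairs, ops)}  # [beta_ij, beta_kl] = delta_ik delta_jl

rng = np.random.default_rng(1)
d = {p: rng.normal() for p in pairs}         # scalar stand-ins for delta(x_i - x_j)
dim = ops[0].shape[0]

B = sum(d[p] * beta[p] for p in pairs)       # the sum beta_ij delta(x_i - x_j)

# Left multiplication by beta_kl flips every term's sign and isolates
# one bare coefficient:  beta_kl B = d_kl I - B beta_kl.
for p in pairs:
    assert np.allclose(beta[p] @ B, d[p] * np.eye(dim) - B @ beta[p])
print("beta identity verified")
```

Summing over all pairs (k,l) then leaves only the bare delta-function terms, with every beta-carrying term proportional to the vanishing sum over betas, just as in the alpha case.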

That is, we may state unequivocally: any algorithm capable of yielding the correct probability for observing any given pattern of data, in any conceivable problem to be explained, must obey the relation deduced above, which constitutes my fundamental equation:

[math]

\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i+\sum_{i\neq j}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}= K\frac{\partial}{\partial t}\vec{\Psi}=iKq\vec{\Psi}

[/math]

This constraint follows from the definition of "an explanation" and nothing else. If anyone finds fault with that deduction, please let me know.

Have fun – Dick

PS This is not actually the end of the road. There are a number of additional, quite interesting conclusions which can be proved. One is a rather explicit reason for viewing the universe as a three dimensional spatial entity.

**Edited by Doctordick, 02 June 2014 - 03:13 AM.**