Science Forums

"a Universal Representation Of Rules"


Doctordick


Hi Rade,

 

Perhaps the reality of the degenerate triangle becomes unveiled in the 4D representation used by Doctordick, when the tau dimension is added to the three x,y,z spatial dimensions? Seems to me this should hold true, but perhaps I err, because I cannot visualize the outcome of perception of a degenerate triangle in 4D. If this is true, then deep reality (the triangle as triangle) may be sensible to humans only as a 4D thing, yet such representation may be completely outside knowledge of the thing in itself.

 

The perception of a degenerate triangle in 4D is not really applicable unless you wish to represent a 5th dimension in 4D (which already is supposed to be space time). The spatial dimensions x, y and z all have the same scale of distance so adding anything else that isn't quite a pure scale of time means that it cannot be used to represent points along a universal continuum in any consistent spatial sense.

 

Rendering computer graphic surfaces in real time is one application of the representation of 3D + time in 2D over time using similar concepts. Surely you'd have to expect that representing 4D in 3D would be perceived somewhere between that of 4D in 4D and 4D in 2D over time, especially if you are talking about representations of spatial information models.

 

If you didn't know what the perception of 4D in 4D was, you would probably fail the Turing Test.


The spatial dimensions x, y and z all have the same scale of distance so adding anything else that isn't quite a pure scale of time means that it cannot be used to represent points along a universal continuum in any consistent spatial sense.
Thank you, but here is perhaps my confusion with the fundamental equation of Doctordick, because he does 'add' a tau dimension to the three space dimensions (x,y,z) and links these with another dimension he calls time, t. So, if I understand Doctordick correctly, his approach is a 5D {x,y,z,tau + t} approach that can be used to explain 4D reality, even if it cannot be perceived. So, it really is not about knowing what the perception of 4D in 4D may be, but what the perception of 4D reality in 5D mathematics may be. Well, perhaps I have the approach of Doctordick wrong. Maybe you understand how he adds the tau dimension to the x, y, z spatial dimensions, and whether this approach is mathematically valid?

Step III: Some subtle additional constraints on the form of [math]F(\vec{x}_1,\vec{x}_2,\vec{x}_3,\cdots,\vec{x}_n)[/math].

 

As we still have an infinite number of possibilities which fully fulfill the requirements of a flaw-free explanation, it is valuable to examine possibilities which can be eliminated through the symmetry requirements discussed in the “Conservation of Ignorance” post. First, the same shift symmetry which exists in x must also exist in the hypothetical tau axis. That fact leads to the constraint on [math]\vec{\Psi}[/math] that

 

[math]

\sum^n_{i=1} \frac{\partial}{\partial \tau_i}\Psi(\tau_1,\tau_2,\cdots,\tau_n,t)=im\Psi(\tau_1,\tau_2,\cdots,\tau_n,t)

[/math].

 

where the arguments [math]x_i[/math] still exist but have not been explicitly written down. By defining [math]\vec{\nabla}_i=\hat{x}\frac{\partial}{\partial x_i}+\hat{\tau}\frac{\partial}{\partial \tau_i}[/math] and [math]\vec{k}=k\hat{x}+m\hat{\tau}[/math] the required conservation constraint implied by x and tau shift symmetry can be written in a two dimensional form

 

[math]

\sum^n_{i=1} \vec{\nabla}_i\Psi(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t)=i\vec{k}\Psi(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t)

[/math].
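As a sanity check on this constraint, one can verify symbolically that a plane-wave form satisfies the tau shift-symmetry equation. This is my own sketch, not part of the original post; the trial function with n = 2 elements is an assumption chosen purely for illustration.

```python
# Sketch: verify that Psi = exp(i m (tau1 + tau2)/2) satisfies
#   sum_i d/d(tau_i) Psi = i m Psi
# for n = 2. The specific trial function is my own assumption.
import sympy as sp

tau1, tau2, m = sp.symbols('tau1 tau2 m', real=True)
Psi = sp.exp(sp.I * m * (tau1 + tau2) / 2)

lhs = sp.diff(Psi, tau1) + sp.diff(Psi, tau2)
rhs = sp.I * m * Psi
assert sp.simplify(lhs - rhs) == 0  # the constraint holds identically
```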

 

Should all those [imath]\Psi[/imath]'s have the vector arrow on them?

 

I'm not sure what [imath]i\left (k\hat{x} + m\hat{\tau} \right ) \vec{\Psi}[/imath] implies. I.e. what does it mean, to multiply the [imath]\vec{\Psi}[/imath] by a vector from the vector space [imath]\hat{x} + \hat{\tau}[/imath]?

 

Or even if it was deliberately just [imath]\Psi[/imath] I'm still not sure what it means.

 

-Anssi


Should all those [math]\Psi[/math]'s have the vector arrow on them?

Yes they should. I have inserted the required arrows.

 

One thing you should be aware of. This vector representation in an abstract space is (in reality) no more than a simplified way of writing a whole slew of fundamentally identical equations. The expression [math]\vec{\Psi}[/math] can be written as a sum over functions in that abstract vector space.

[math]

\vec{\Psi}=\sum^{dim}_{r=1}\psi_r\hat{q}_r

[/math]

 

Note that the first time I wrote down that relationship, I used “k” as the index; here I have used “r”. One of the problems in this whole thing is that the number of indices expands rapidly if one tries to write things down in detail; however, the mathematical relationships are essentially the same, so representing what is going on really does not require writing everything down in detail (that is one of the reasons I left off the arrow symbolizing the abstract vector nature of Psi, which I have now fixed). Each [math]\hat{q}_r[/math] constitutes a unit vector in a direction parallel to the rth axis in that abstract space. Thus what is really being represented in the expressions you quote is some unknown number (dim) of totally independent equations of the form:

[math]

\sum^n_{i=1} \frac{\partial}{\partial \tau_i}\psi_r(\tau_1,\tau_2,\cdots,\tau_n,t)=im_r\psi_r(\tau_1,\tau_2,\cdots,\tau_n,t)

[/math].

 

Or, in terms of the combined x tau space (which is, in a sense, a different space, since I am going to consider them as separate issues quite analogous to what I did when I pulled out those dimensional pairs to be cast in the role of complex numbers), I am pulling out pairs of equations to be represented in a two dimensional (x, tau) space, which allows me to write down sets of equations (where the number of sets r has now been reduced by a factor of 2) such as

 

[math]

\sum^n_{i=1} \vec{\nabla}_i\psi_r(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t)=i\vec{k}\psi_r(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t)

[/math].
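The way the abstract vector collapses into independent component equations can be illustrated with a tiny numeric sketch (the dimension and component values below are made up, purely my own illustration):

```python
import numpy as np

# Sketch: Psi as a sum of scalar components psi_r times abstract unit
# vectors q_r (here the standard basis of a 3-dimensional abstract space).
dim = 3
q = np.eye(dim)                    # q[r] is the unit vector along the r-th axis
psi = np.array([0.2, -1.0, 0.5])   # made-up component values at some point

Psi = sum(psi[r] * q[r] for r in range(dim))
assert np.allclose(Psi, psi)       # the components simply stack into the vector
```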

 

I'm not sure what [math]i\left(k\hat{x} + m\hat{\tau} \right) \vec{\Psi}[/math] implies. I.e. what does it mean, to multiply the [math]\vec{\Psi}[/math] by a vector from the vector space [math]\hat{x} + \hat{\tau}[/math]?

It means exactly the same thing as multiplying anything else by a vector from the vector space [math]\hat{x} + \hat{\tau}[/math]. All it means is that each of that slew of equations is multiplied by that vector. Note that the only real difference is that [math]\vec{\nabla}_i[/math] operates in that same x tau space.

 

Or even if it was deliberately just [math]\Psi[/math] I'm still not sure what it means.

For the most part, it can be taken just that way. As I said in my opening post on the representation, I only chose the function [math]\vec{\Psi}[/math] to be a vector in an abstract space in order to pull in all possible mathematical relations. All you really need to know is the nature of vector representation in order to pull down the implied mathematical relationships. The relationships I am pulling down here are no more than correlations useful to a general representation of rules.

 

One thing to keep in mind is the fact that probability (what the explanation provides in the final analysis) is a scalar function. The inner or “dot” product of vector operations reduces the thing to a scalar. The only reason for the vector representation is to allow all possible internal correlations. Once all such applicable aspects are handled, [math]\vec{\Psi}[/math] will essentially need to be only a scalar also; so, in the final analysis, the vector nature of [math]\vec{\Psi}[/math] will vanish. Remember, we are trying to ensure that all possibilities are handled; in that respect, until we have proved all possibilities are incorporated, the vector nature of the function should be retained.
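The scalar-probability point can be sketched in a couple of lines (my own illustration, with made-up component values): the inner product of the abstract vector with itself is a plain scalar.

```python
import numpy as np

# Made-up complex components of the abstract vector Psi at some point.
Psi = np.array([0.3 + 0.4j, -0.1 + 0.2j])

# The inner ("dot") product collapses the vector to a scalar:
# the sum of |psi_r|^2 over the abstract components.
P = float(np.vdot(Psi, Psi).real)
assert abs(P - 0.3) < 1e-12  # 0.25 + 0.05
```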

 

If it is still not clear let me know. I will try to do better. This was the kind of thing I was hoping Qfwfq would help with, but it certainly seems to be a waste of time to expect any help from that quarter.

 

Have fun -- Dick

Edited by Doctordick

One thing you should be aware of. This vector representation in an abstract space is (in reality) no more than a simplified way of writing a whole slew of fundamentally identical equations. The expression [math]\vec{\Psi}[/math] can be written as a sum over functions in that abstract vector space.

[math]

\vec{\Psi}=\sum^{dim}_{r=1}\psi_r\hat{q}_r

[/math]

 

Note that the first time I wrote down that relationship, I used “k” as the index; here I have used “r”. One of the problems in this whole thing is that the number of indices expands rapidly if one tries to write things down in detail; however, the mathematical relationships are essentially the same, so representing what is going on really does not require writing everything down in detail (that is one of the reasons I left off the arrow symbolizing the abstract vector nature of Psi, which I have now fixed). Each [math]\hat{q}_r[/math] constitutes a unit vector in a direction parallel to the rth axis in that abstract space. Thus what is really being represented in the expressions you quote is some unknown number (dim) of totally independent equations of the form:

[math]

\sum^n_{i=1} \frac{\partial}{\partial \tau_i}\psi_r(\tau_1,\tau_2,\cdots,\tau_n,t)=im_r\psi_r(\tau_1,\tau_2,\cdots,\tau_n,t)

[/math].

 

Right, and at this point each component of the vector, i.e. each [imath]\psi_r\hat{q}_r[/imath], is a complex value.

 

Or, in terms of the combined x tau space (which is, in a sense, a different space, since I am going to consider them as separate issues quite analogous to what I did when I pulled out those dimensional pairs to be cast in the role of complex numbers), I am pulling out pairs of equations to be represented in a two dimensional (x, tau) space, which allows me to write down sets of equations (where the number of sets r has now been reduced by a factor of 2) such as

 

[math]

\sum^n_{i=1} \vec{\nabla}_i\psi_r(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t)=i\vec{k}\psi_r(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t)

[/math].

 

Right...

 

It means exactly the same thing as multiplying anything else by a vector from the vector space [math]\hat{x} + \hat{\tau}[/math]. All it means is that each of that slew of equations is multiplied by that vector. Note that the only real difference is that [math]\vec{\nabla}_i[/math] operates in that same x tau space.

 

Okay, so, I guess how you wrote the equation is essentially equivalent to simply saying:

 

[math]
\frac{\partial}{\partial \tau_i} \psi_{\tau x} + \frac{\partial}{\partial x_i} \psi_{\tau x} = im\psi_{\tau x} + ik\psi_{\tau x}
[/math]

 

(by [imath]\psi_{\tau x}[/imath] I mean it's the combined function of [imath]\tau_i[/imath] and [imath]x_i[/imath])

 

Which is just a couple of algebraic steps and those unit vectors away from what you wrote.

 

And adding the unit vectors [imath]\hat{x}[/imath] and [imath]\hat{\tau}[/imath] just means we choose to express these results as a 2D vector.

 

One thing to keep in mind is the fact that probability (what the explanation provides in the final analysis) is a scalar function. The inner or “dot” product of vector operations reduces the thing to a scalar. The only reason for the vector representation is to allow all possible internal correlations. Once all such applicable aspects are handled, [math]\vec{\Psi}[/math] will essentially need to be only a scalar also; so, in the final analysis, the vector nature of [math]\vec{\Psi}[/math] will vanish. Remember, we are trying to ensure that all possibilities are handled; in that respect, until we have proved all possibilities are incorporated, the vector nature of the function should be retained.

 

If it is still not clear let me know.

 

It seems quite clear to me now.

 

-Anssi


Hi Anssi, you seem to be understanding my presentation; however, there are some glaring errors due entirely to your lack of familiarity with vector representations.
 

 


Right, and at this point each component of the vector, i.e. each [math]\psi_r\hat{q}_r[/math], is a complex value.

Yes, each [math]\psi_r\hat{q}_r[/math] is indeed a complex value but most would not express it the way you did. The complex nature being discussed is not shared with [math]\hat{q}_r[/math]. That factor represents a unit vector pointing in the abstract direction defined as the [math]q_r[/math] axis in that abstract representation. Placing that specific factor in the expression has no more consequences regarding complexity than does any other ordinary multiplication. In other words, one would ordinarily specify that it is the function [math]\psi_r[/math] which is complex.

Not a big issue and I would not bring it up except for the fact that it does tend to indicate your lack of familiarity with vector representations. Little things you will pick up on with time.

Now in the second case, your error is a little more significant.
 

 

Okay, so, I guess how you wrote the equation is essentially equivalent to simply saying:

[math]
\frac{\partial}{\partial \tau_i} \psi_{\tau x}
+
\frac{\partial}{\partial x_i}
\psi_{\tau x}
=
im\psi_{\tau x}
+
ik\psi_{\tau x}
[/math]

No, what you wrote down is incorrect. The problem is that [math]\vec{\nabla}_i[/math] is a vector operator defined as


[math]
\vec{\nabla}_i=\frac{\partial}{\partial x_i}\hat{x}+\frac{\partial}{\partial \tau_i}\hat{\tau}
[/math]


What you wrote down omits the vector component of that definition. What you should have written down is


[math]
\left\{\frac{\partial}{\partial \tau_i}\hat{\tau} + \frac{\partial}{\partial x_i}\hat{x}\right\}\psi_{\tau x}
=i\{m\hat{\tau}+k\hat{x}\}\psi_{\tau x}
[/math]


and the arguments of [math]\psi[/math] would be presumed to be points in the x tau space so the [math]\tau x [/math] subscript would not actually be necessary.
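Since [math]\hat{x}[/math] and [math]\hat{\tau}[/math] are orthogonal, the vector equation above is just two scalar equations, one per component. Here is a symbolic check of that separation, using an assumed plane-wave trial function of my own choosing (not the author's):

```python
import sympy as sp

x, tau, k, m = sp.symbols('x tau k m', real=True)
psi = sp.exp(sp.I * (k * x + m * tau))   # assumed trial solution

# x-hat component:   d/dx psi   = i k psi
assert sp.simplify(sp.diff(psi, x) - sp.I * k * psi) == 0
# tau-hat component: d/dtau psi = i m psi
assert sp.simplify(sp.diff(psi, tau) - sp.I * m * psi) == 0
```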
 

 

And adding the unit vectors [math]\hat{x}[/math] and [math]\hat{\tau}[/math] just means we choose to express these results as a 2D vector.

The argument required that x and tau be orthogonal to one another so we are in a two dimensional representation by default and those unit vectors are necessary to keep the partial derivatives orthogonal.
 

 

It seems quite clear to me now.

I hope it is just a tad clearer after my complaints.

Have fun -- Dick

Edited by Doctordick

The argument required that x and tau be orthogonal to one another so we are in a two dimensional representation by default and those unit vectors are necessary to keep the partial derivatives orthogonal.

 

Ah! That is the bit I had not understood.

 

Back to the OP;

 

The infinite limit in the x case is not so trivial. Extending F to the limit of infinite data would cause the x variables to be continuous, and that continuity brings a bit of a problem into the procedure of adding hypothetical elements. Once again, the problem has a simple solution: all we need do is require the function [math]\vec{\Psi}[/math] to be antisymmetric with respect to exchange of any pair of elements. Mathematically, that means that for any i,j pair,

[math]

\vec{\Psi}(\vec{x}_i,\vec{x}_j)=-\vec{\Psi}(\vec{x}_j,\vec{x}_i).

[/math]

 

Note that, in the above, only the arguments [math]x_i[/math] and [math]x_j[/math] are shown; all the rest are presumed the same as before and therefore not necessarily shown.

 

Notice that [math]\vec{\Psi}=0[/math] whenever [math]x_i=x_j[/math], as zero is the only number equal to its negative. This type of antisymmetry is exactly what stands behind what is called Fermi-Dirac statistics. What it guarantees is that no two elements in this x, tau space can be in the same place for a specific t index (remember the x indices are mere labels, and when they are the same, what they represent must be identical). Another way to express the same thing is to assert that all hypothetical elements used to generate F must obey Fermi-Dirac statistics. This will eliminate the problem with the continuity of x and the existence of F.
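The exchange antisymmetry being invoked can be sketched in a few lines (a toy illustration of mine, not the author's construction): antisymmetrizing any two-argument function forces it to vanish whenever the two arguments coincide.

```python
# Sketch: build an exchange-antisymmetric function from an arbitrary one.
def antisymmetrize(f):
    return lambda xi, xj: f(xi, xj) - f(xj, xi)

g = antisymmetrize(lambda a, b: a * b**2)   # arbitrary made-up function

assert g(3.0, 3.0) == 0.0            # same "place" forces the value to zero
assert g(1.0, 2.0) == -g(2.0, 1.0)   # exchanging the pair flips the sign
```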

 

This part of the OP should be clarified a bit. You should spell out more explicitly what kind of problem the continuity of x creates for the existence of F. Right now it just says there will be a problem and then goes directly to the solution of a problem that the reader has not necessarily recognized. I can't remember very well what this was about myself either, and I can't pick it up from the text.

 

There is also another very subtle consequence of shift symmetry which concerns the form of the arguments of F. The existence of shift symmetry in both the x and tau dimensions (since we are now viewing the circumstances as a collection of points in the x, tau space) means that the origin must be a free parameter: i.e., changing the presumed origin in that space yields no consequences in the evaluation of F. This means that the information contained in the set of arguments [math](\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n)[/math] is identical to the information contained in the set of arguments consisting of the entire collection of differences between [math]\vec{x}_i[/math] and [math]\vec{x}_j[/math].

 

I.e. the relevant information is embedded in the relationships between the [imath](x,\tau)[/imath] points, not in the location of the collection(s) of points.

 

If we have all [math]\vec{x}_i[/math] arguments for a particular circumstance, the construction of all [math]\vec{x}_i -\vec{x}_j[/math] for that same circumstance is a trivial problem. Likewise, if we have all [math]\vec{x}_i -\vec{x}_j[/math] arguments for a particular circumstance, the construction of all [math]\vec{x}_i[/math] is rather easily achieved so long as the position of the origin is a free parameter. It may not be as trivial a problem as the reverse, but anyone with a decent understanding of algebra should find the process quite straightforward.
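The reconstruction just described, recovering positions from differences with the origin left as a free parameter, can be sketched as follows (the data values are made up for illustration):

```python
# Differences x_i - x_1 for four elements (made-up data).
diffs_from_first = [0.0, 1.5, -2.0, 4.0]

origin = 10.0                                # the free origin parameter
xs = [origin + d for d in diffs_from_first]  # reconstructed x_i

# Every pairwise difference of the reconstruction matches the original
# information, regardless of the origin chosen.
for i in range(4):
    for j in range(4):
        assert xs[i] - xs[j] == diffs_from_first[i] - diffs_from_first[j]
```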

 

Yup.

 

Thus ignoring how that representation was achieved, seen merely as a function defined over that x tau space, rotation in the plane of that space cannot change the function (all we really have is a set of points which are being used to define that function).

 

But rotation will convert tau displacement into x displacement. Since tau displacement is an entirely hypothetical component, F simply can not depend upon the actual tau displacement and by the same token neither can F depend upon actual x displacement.

 

...of the entire collection of points.

 

Maybe you should refer to the "displacement over the entire collection" there just to avoid confusion. Although it should be clear to a careful reader.

 

Since we have converted F into a function of distances between points, this essentially says that F can not depend upon the actual magnitude of these separations. This should be quite reasonable as, since we are talking about mere numerical labels, multiplication of all labels by some fixed constant cannot change what is being represented.

 

Either F simply vanishes and we have no rules (and the “what is” is “what is” explanation is the only valid explanation) or rules actually exist. If rules do indeed exist, F cannot vanish for all circumstances: i.e., there must exist some circumstances which are impossible, and [math]F\neq 0[/math] must be true for those circumstances. The only integrable function which does not depend upon the magnitude of its argument and still has a non-zero value for some argument is the Dirac delta function [math]\delta(x)[/math], commonly defined as follows:

[math]

\int_a^b\delta(x-c)dx=1

[/math]

 

only if the range of integration includes c, and is zero if the range of integration does not. The value of the Dirac delta function is clearly zero everywhere except when the argument is zero, in which case it must be infinite. It is usually defined as the limit of an integrable function whose graph has a fixed area (unity) as the width of the non-zero region goes to zero.
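That limit definition can be checked numerically. Here is a sketch of mine, using a narrow Gaussian as the unit-area bump and a crude midpoint-rule integrator (both choices are my own, purely illustrative):

```python
import math

def bump(x, c, w):
    """Unit-area Gaussian of width w centered at c; approximates delta(x - c) as w -> 0."""
    return math.exp(-((x - c) / w) ** 2 / 2) / (w * math.sqrt(2 * math.pi))

def integrate(f, a, b, steps=50000):
    """Simple midpoint-rule integration over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (s + 0.5) * h) for s in range(steps)) * h

c, w = 0.3, 1e-2
# Range of integration includes c: the integral is (approximately) 1.
assert abs(integrate(lambda x: bump(x, c, w), 0.0, 1.0) - 1.0) < 1e-4
# Range of integration excludes c: the integral is (approximately) 0.
assert integrate(lambda x: bump(x, c, w), 0.5, 1.0) < 1e-4
```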

 

Since [math]\delta(x)[/math] only has value for x=0, a power series expansion of F around a distribution satisfying F=0 implies that F may be written

[math]

F=\sum_{i\neq j} \delta(\vec{x}_i-\vec{x}_j) = 0.

[/math]

 

I have some trouble connecting this expression to everything you explain leading up to it. But I am taking it as a universally valid constraint, by just looking at it as an expression that ensures that two different elements cannot have the same value in the same circumstance. I.e. two different points cannot be the same point. I.e. to have a conception of two different elements, there has to exist some difference between them, universally so.

 

-Anssi


This part of the OP should be clarified a bit. You should spell out more explicitly what kind of problem the continuity of x creates for the existence of F. Right now it just says there will be a problem and then goes directly to the solution of a problem that the reader has not necessarily recognized. I can't remember very well what this was about myself either, and I can't pick it up from the text.

Sorry about that. In my mind, the problem was so obvious that it never dawned on me that the reader would fail to comprehend what it was. I have edited the post to include a little more detail regarding that problem.

 

The infinite limit in the x case is not so trivial. Extending F to the limit of infinite data would cause the x variables to be continuous, and that continuity brings a bit of a problem into the procedure of adding hypothetical elements. The single most significant step in generating that table of F was adding hypothetical elements such that all circumstances represented in the table were different. When the number of elements in that table is extended to infinity, we run directly into Zeno's paradox. We cannot list an infinite number of cases; thus, in the limit, we cannot know that every x argument in every listed circumstance is different from every other x argument in that circumstance. The argument for hypothetical elements being able to differentiate between circumstances fails.

Of course, I suspect Qfwfq would find this argument to be meaningless hogwash as, as far as he is concerned, Zeno's paradox is baloney. He sees no problems arising from our inability to actually examine an infinite amount of information. It's that presumption of a valid world view that blocks all communication between us.

 

I.e. the relevant information is embedded in the relationships between the [math](x,\tau)[/math] points, not in the location of the collection(s) of points.

This is correct.

 

...of the entire collection of points.

This one bothers me. I am referring to the individual correlated displacement of the points due to the specified “rotation” of the representation, not to any ordinary displacement “of the entire collection of points”. In the two dimensional space, the magnitude of the distances between the various points stays the same. That is what I am referring to.

 

Maybe you should refer to the "displacement over the entire collection" there just to avoid confusion. Although it should be clear to a careful reader.

I don't get the impression what I meant was clear to you.

 

I have some trouble connecting this expression to everything you explain leading up to it. But I am taking it as a universally valid constraint, by just looking at it as an expression that ensures that two different elements cannot have the same value in the same circumstance. I.e. two different points cannot be the same point. I.e. to have a conception of two different elements, there has to exist some difference between them, universally so.

Perhaps you missed this connection because you misinterpreted what I was doing with that rotation. We need to clear this issue up. If it is the power series expansion which bothers you, that issue is quite simple. In a power series expansion, a function is represented by a sum over a power series in the difference between the start point and the desired evaluation point. Since the function vanishes for all arguments not zero, the impact of all higher powers also vanishes thus only the evaluation at zero argument survives so the sum becomes a simple sum of the underlying function.

 

Have fun -- Dick

Edited by Doctordick

Sorry about that. In my mind, the problem was so obvious that it never dawned on me that the reader would fail to comprehend what it was. I have edited the post to include a little more detail regarding that problem.

 

Ah yeah, and with that addition now I remember what it was about.

 

This one bothers me. I am referring to the individual correlated displacement of the points due to the specified “rotation” of the representation, not to any ordinary displacement “of the entire collection of points”. In the two dimensional space, the magnitude of the distances between the various points stays the same. That is what I am referring to.

 

I don't get the impression what I meant was clear to you.

 

Yes, I misinterpreted it. What threw me off, and I think could throw off other readers too, is that you immediately follow in the same paragraph with;

 

Since we have converted F into a function of distances between points, this essentially says that F can not depend upon the actual magnitude of these separations. This should be quite reasonable as, since we are talking about mere numerical labels, multiplication of all labels by some fixed constant cannot change what is being represented.

 

Now I suppose none of that is referring to the rotation symmetry anymore; it is actually about scale symmetry? I mean, I suppose "the actual magnitude of these separations" is either referring to the scale of the collections of points, or to the individual "x" or individual "tau" separation (as opposed to combined separations).

 

When I was reading it, I was trying to understand the meaning of the whole paragraph in terms of rotation, and, well, managed to pull out some sort of shaky interpretation.

 

Needless to say, that part of the OP is a bit confusing. If these are two different issues you should clear that up, and probably separate them into two different paragraphs. Then it would be much easier to read.

 

Perhaps you missed this connection because you misinterpreted what I was doing with that rotation. We need to clear this issue up. If it is the power series expansion which bothers you, that issue is quite simple. In a power series expansion, a function is represented by a sum over a power series in the difference between the start point and the desired evaluation point. Since the function vanishes for all arguments not zero, the impact of all higher powers also vanishes thus only the evaluation at zero argument survives so the sum becomes a simple sum of the underlying function.

 

Hmm, yes, that all sounds quite reasonable, but is there a reason why the rotation symmetry was mentioned right before pointing out the Dirac delta function? I mean, is it necessary to understand rotation symmetry in order to understand the requirement of the Dirac delta function? I can't make the connection, but they are placed one after another in the OP, implying there is a connection.

 

-Anssi


My first question here concerns the issue of recovering the “t” index. If the “t” index were to be omitted, could we establish such an index from the table of circumstance? It should be recognized here that the actual value of that index is immaterial. Regarding the “what is” is “what is” explanation, the past is what the past is and the order you put the circumstances in has utterly no bearing on the issue. Thus the only issue of importance here is that every supposed “circumstance” must have a different attached index.

 

I would have thought that the order is the only thing of interest here. That is, can we define a means by which the original order can be derived, or is the question of order here only meant to be used as a means of separating each set of “what is” into a well-defined set of elements, so that all we want to do is ensure that there is no repeating “t” index?

 

That situation can be removed via the introduction of “hypothetical elements”: i.e., elements not actually part of the information standing behind the explanation but rather, elements presumed to exist by the explanation. (Note that their existence is implied by the existence of identical circumstances themselves; otherwise the identical circumstance would create no problems.) It should be clear that it is always possible to add hypothetical elements sufficient to make every explicit circumstance in the table different.

 

How do we know that they are different, and that it is not just a lack of information that is causing the sets to be the same rather than a complete understanding of that particular set of elements? Or is this just a trivial case of no real interest?

 

A rather interesting characteristic of the table as constructed reveals itself. From the original table together with the added “hypothetical elements”, a new table can be constructed where the “t” index is omitted and is instead represented by the function, [math]t(x_1,x_2,\cdots,x_{n+k})[/math], which is the value of the “t” index associated with the represented circumstance without that "t" index. Thus we can construct a new table where the value of the “t” index can be seen as embedded in the underlying circumstances themselves. Since the index “t” is now (via the addition of hypothetical elements) embedded in the new table, this new table of circumstance (sans the “t” index) is, in a sense, equivalent to the original table. The “t” index has been replaced by those hypothetical elements required to make every circumstance unique.
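The embedding of the “t” index via hypothetical elements can be sketched with a toy table. This is entirely my own illustration; for simplicity the appended hypothetical element is just the old index itself, which trivially makes every circumstance unique.

```python
# Toy table: t -> circumstance; circumstances for t = 1 and t = 2 collide.
table = {1: (5, 7), 2: (5, 7), 3: (2, 9)}

# Append one hypothetical element per row so every circumstance is unique.
unique_table = {t: circ + (t,) for t, circ in table.items()}

# Now t is recoverable as a function of the augmented circumstance alone,
# i.e. t(x1, ..., x_{n+k}) with the "t" column dropped.
t_of = {circ: t for t, circ in unique_table.items()}
assert len(t_of) == len(table)                          # no collisions remain
assert all(t_of[unique_table[t]] == t for t in table)   # t is fully embedded
```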

 

You are still talking about a finite number of possible sets of elements here, right? That is, we aren't trying to derive a function that gives each “t” in any kind of continuous way. It need only be defined for a finite set, and so can be defined one possibility at a time if we so chose.

 

Also, are you trying to remove the “t” in the sense that we are not using “t” as an ordering variable but rather as a variable that separates sets of elements? So by making all of the sets the same (and I assume of the same number of elements, although you don't say if this is the case), it becomes possible to retrieve a missing element without referring to a “t” index, and so “t” is not defining sets of elements. This means we can define a function for arriving at our expectations without referring to “t”. This seems to take the idea of shift symmetry even farther: from "we don't have an origin" to "there is no “t” coordinate defined", and the idea of cause and effect can't exist outside of a set of elements.

 

Note that the table representing that function still has exactly the same number of entries as did the original table which represented the information upon which our explanation is based so it is still a finite table; however, since the collection of all possible circumstances (the collection for which our explanation was to yield our expectations) is infinite, the function representing our explanation is still essentially wide open: i.e., in order to obtain expectations for circumstances not represented in the table, we must perform some kind of interpolation based upon the constructed table.

 

I thought that we were still dealing with a finite function, but if we have any hope of performing any kind of interpolation don't we need to be dealing with a continuous function? But if this were the case, aren't we implying that the function is continuous and defined on an infinite number of sets?

 

This seems also to imply that we are no longer able to define a function one possibility at a time; now we need to define continuous functions that will supply the index locations. And the function

[math] F(x_1,x_2,x_3,\cdots,x_{n+q})=x_1-g(x_2,x_3,\cdots,x_{n+q}). [/math]

is now a continuous function of [math](x_1,x_2,x_3,\cdots,x_{n+q})[/math]. But it looks like it is a non-constant continuous function. Is the idea of F=0 only a rule that needs to be satisfied, not a constraint on F?

 

What this means is that [math]\vec{\Psi}(x_1,x_2,x_3,\cdots,x_n,t)[/math] is still a totally open function, except for the fact that the probability cannot be inconsistent with any case represented by the table upon which our explanation is based (otherwise the explanation would be flawed) and must also yield exactly the same expectations as the represented explanation for every circumstance not known (including consistency with each “t” index given the absence of all circumstances greater than or equal to that “t” index). On the other hand, we now know that there must exist a function [math]F(x_1,x_2,x_3,\cdots,x_{n+q})[/math] which vanishes for every valid circumstance. Again, all we have is a finite table of that function, and the actual function itself must be obtained via interpolation.

 

So we need a constraint on [math] \vec{\Psi}(x_1,x_2,x_3,\cdots,x_n,t) [/math] that will ensure that P satisfies the “what is” is “what is” explanation no matter what the form of [math] \vec{\Psi}(x_1,x_2,x_3,\cdots,x_n,t)[/math] is: that is, [math] F(x_1,x_2,x_3,\cdots,x_{n+q})=0 [/math].

Link to comment
Share on other sites

Okay, let's continue this. I started refreshing my memory on the issue of using anti-commuting elements to express the constraints. I would like to understand this bit:

 

It is a trivial matter to convert a solution of [math]\sum_i\frac{\partial}{\partial x_i}\Psi_0 =0[/math] into a solution of [math]\sum_i\frac{\partial}{\partial x_i}\Psi_1 =iK_x\Psi_1[/math]. Simple substitution will confirm that if [math]\Psi_0[/math] is a solution to the first equation, [math]\Psi_1=e^{\sum_j \frac{iK_x x_j}{n}}\Psi_0\;\;\left(where \;\;i=\sqrt{-1}\right)[/math] is a solution to the second. Exactly the same relationship goes for the equation on tau.

 

It seems vaguely familiar, we must have covered it once in the past but I just can't find where, and I can't remember how it all worked again. Maybe you can explain it in some more detail...

 

-Anssi


Sorry I have been so slow in responding to this post. Let us say I have my reasons and they are somewhat subtle. Nevertheless, let us first clarify exactly what I am saying.
 

It is a trivial matter to convert a solution of [math]\sum_i\frac{\partial}{\partial x_i}\Psi_0 =0[/math] into a solution of [math]\sum_i\frac{\partial}{\partial x_i}\Psi_1 =iK_x\Psi_1[/math]. Simple substitution will confirm that if [math]\Psi_0[/math] is a solution to the first equation,[math]\Psi_1=e^{\sum_j \frac{iK_x x_j}{n}}\Psi_0\;\;\left(where \;\;i=\sqrt{-1}\right)[/math] is a solution to the second.

The fact that simple substitution will confirm this resides entirely in the product and chain rules of differentiation.


[math]
\left\{\sum_i\frac{\partial}{\partial x_i}\right\} e^{\sum_j \frac{iK_x x_j}{n}}\Psi_0=\left\{\sum_i\frac{\partial}{\partial x_i} e^{\sum_j \frac{iK_x x_j}{n}}\right\}\Psi_0 +e^{\sum_j \frac{iK_x x_j}{n}}\left\{\sum_i\frac{\partial}{\partial x_i}\Psi_0\right\}
[/math]

 

but [math]\sum_i\frac{\partial}{\partial x_i}\Psi_0=0[/math] (our original solution) and the differentiation of the exponential term yields a sum over [math]\sqrt{-1}[/math] times [math]K_x[/math] divided by “n” times the original exponential term (the division by “n” compensates for the fact that the result is n terms). The subtlety here is that the substitution yields a solution to the equation no matter what the value of [math]K_x[/math] associated with each “i” might be. In fact, an infinite number of solutions may be developed here: one may use any collection of coefficients so long as they sum to the desired value.

Essentially what I am getting at is that, given a solution to one equation, it is always possible to create a solution to the other. Actually, there are a great number of subtle consequences here due to the many body nature of the equation which I would rather not get into.
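For what it is worth, the claim can be checked numerically. A sketch in Python (my own construction: n = 3, an arbitrarily chosen Psi_0 satisfying the first equation, and central differences standing in for the partial derivatives):

```python
# Numerical check: pick a Psi0 with sum_j dPsi0/dx_j = 0, build
# Psi1 = exp(i*K*(x1+x2+x3)/3) * Psi0, and verify by central differences
# that sum_j dPsi1/dx_j = i*K*Psi1.
import cmath

K = 1.7          # arbitrary constant
h = 1e-6         # finite-difference step

def psi0(x1, x2, x3):
    # any function of the differences solves sum_j dPsi0/dx_j = 0
    return (x1 - x2) * (x2 - x3)

def psi1(x1, x2, x3):
    return cmath.exp(1j * K * (x1 + x2 + x3) / 3) * psi0(x1, x2, x3)

def sum_partials(f, x1, x2, x3):
    # central-difference approximation of the sum of the three partials
    return ((f(x1 + h, x2, x3) - f(x1 - h, x2, x3))
            + (f(x1, x2 + h, x3) - f(x1, x2 - h, x3))
            + (f(x1, x2, x3 + h) - f(x1, x2, x3 - h))) / (2 * h)

x = (0.3, 1.1, -0.7)
assert abs(sum_partials(psi0, *x)) < 1e-6                        # first equation
assert abs(sum_partials(psi1, *x) - 1j * K * psi1(*x)) < 1e-6    # second equation
```

The particular Psi0 is immaterial; any solution of the first equation passes the same check.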

Hope that helps. Sorry about the confusion between “i” the index and “i” the square root of minus one. The placement of “i” as the first coefficient in an exponential would always be interpreted as the square root of minus one, as its role is fairly clearly not an index.

Have fun -- Dick

Edited by Doctordick

  • 3 weeks later...

Hi Doctordick/helper,

 

Sorry about the confusion between “i” the index and “i” the square root of minus one. The placement of “i” as the first coefficient in an exponential would always be interpreted as the square root of minus one, as its role is fairly clearly not an index.

 

Not very clear at all.

 

http://en.wikipedia.org/wiki/Imaginary_number

 

and the differentiation of the exponential term yields a sum over i times the Kx divided by “n” times the original exponential term (to get rid of the fact that the result is n terms)...

One may use any collection of coefficients so long as they sum to the desired value

 

What type of number is a sum over i times anything?

 

If your equation were straight, you would be able to remove all the i's on both sides and get a non-imaginary result.


This a requested response to Bombadil's post of 19 December 2010.

 

I would have thought that the order is the only thing of interest here. That is, can we define a means by which the original order can be derived, or is the question of order here only meant to be used as a means of separating each set of “what is” into a well-defined set of elements, where all we want to do is ensure that there is no repeating “t” index?

Remember, “t” is an index denoting a specific circumstance constituting the underlying data to be explained. The actual value of that index is part of your explanation. The fact that the explanation is valid is defended by the fact that every circumstance referenced by the known indices must have a probability of unity (all others are undefined). The data table, as constructed, is essentially a function table of expectations for all known circumstances (including that “t” index). The only thing of significance here is that the same table can be used as a function table of indices “t” if sufficient presumed circumstances are invented to make all indices different. If that is not true, how can we use that table to yield the correct index to assign to those two (or more) circumstances which appear to be identical?

 

How do we know that they are different, and that it is not just a lack of information, rather than a complete understanding of that particular set of elements, that is causing the sets to be the same? Or is this just a trivial case of no real interest?

The table we are referring to contains an entry for every known circumstance standing behind our explanation. If an entry occurs twice, that fact needs to be recognized (i.e., they are different circumstances even though our explanation seems to count them as identical). It may very well be a lack of information, but that actual fact is not part of the information underlying our explanation (except for the fact that the circumstance appeared more than once).

 

You are still talking about a finite number of possible sets of elements here, right? That is, we aren’t trying to derive a function that gives each “t” in any kind of continuous way; it need only be defined for a finite set, and so can be defined one possibility at a time if we so choose.

 

Also, are you trying to remove the “t” in the sense that we are not using “t” as an ordering variable but rather as a variable that separates sets of elements? Then, by making all of the sets the same (and, I assume, of the same number of elements, although you don’t say if this is the case), it becomes possible to retrieve a missing element without referring to a “t” index, so “t” is not defining sets of elements. This means we can define a function for arriving at our expectations without referring to “t”. This seems to take the idea of shift symmetry even farther: from “we don’t have an origin” to “there is no ‘t’ coordinate defined”, and the idea of cause and effect can’t exist outside of a set of elements.

The index “t” exists only in your explanation. The fundamental purpose of this index is to allow for the fact that the information being explained is presumed to be more than you know. This means that the ordering (established by your explanation) need not be the order in which the information was obtained. What you seem to have missed is that t is still an index of circumstances in your explanation. What is important here is that we can establish a functional table of exactly what index goes with each specific circumstance. There is still a “t” coordinate defined in your explanation; we have just added hypothetical elements in order to make that index retrievable via specification of the circumstance only. If you give me the circumstance and your explanation (including that function) I know what time to assign to that circumstance.

 

I thought that we were still dealing with a finite function, but if we have any hope of performing any kind of interpolation, don’t we need to be dealing with a continuous function? But if this were the case, aren’t we implying that the function is continuous and defined on an infinite number of sets?

Watch your usage of terms here: a finite function normally means a function which yields a finite result, which is not at all related to “a continuous function”. What we have is a table of entries for the function which yields the probability of a given circumstance to become an entry to that table. The correct function is unknown, but the whole purpose of your explanation is to generate those probabilities for circumstances which are not part of what is known. We aren't implying anything; your explanation is expected to provide that information (the process of going from one to the other is called interpolation). It is a common assumption that “proper” interpolation is performed as if the function is continuous, but that actually need not be the case. We are leaving the issue totally open here.
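A toy illustration of that last point (the numbers are invented, and the two schemes are simply the most obvious discontinuous and continuous choices): the same finite table supports more than one interpolation, all of which agree on the known entries.

```python
# Invented toy table: known circumstances (x) and their probabilities (p).
table_x = [0.0, 1.0, 2.0, 3.0]
table_p = [0.1, 0.9, 0.4, 0.4]

def nearest(xq):
    # a step-like (discontinuous) interpolation
    i = min(range(len(table_x)), key=lambda i: abs(table_x[i] - xq))
    return table_p[i]

def linear(xq):
    # a continuous piecewise-linear interpolation
    for i in range(len(table_x) - 1):
        if table_x[i] <= xq <= table_x[i + 1]:
            w = (xq - table_x[i]) / (table_x[i + 1] - table_x[i])
            return (1 - w) * table_p[i] + w * table_p[i + 1]

# Both reproduce the table entries exactly...
assert nearest(1.0) == linear(1.0) == 0.9
# ...but disagree in between: the table alone does not fix the function.
assert nearest(0.6) != linear(0.6)
```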

 

This seems also to imply that we are no longer able to define a function one possibility at a time; now we need to define continuous functions that will supply the index locations. And the function

[math] F(x_1,x_2,x_3,\cdots,x_{n+q})=x_1-g(x_2,x_3,\cdots,x_{n+q}). [/math]

is now a continuous function of [math](x_1,x_2,x_3,\cdots,x_{n+q})[/math]. But it looks like it is a non-constant continuous function.

We have not actually defined the function “F”; we have defined a finite reference table of that function and proved that such a table exists no matter what the data might be (so long as the amount of data is finite, anyway). That is, the function exists (at least for the entries of that table) and we are presuming it exists for our explanation (that is the purpose of that explanation). Whether it is continuous, stochastic or otherwise is left as an open question.
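A sketch of what “defined only by a finite reference table” means in practice (the circumstance values are invented for illustration):

```python
# Invented circumstances; each tuple is (x1, x2, ..., x_{n+q}) and is "valid".
valid = [(5, 1, 2), (7, 1, 3), (2, 4, 4)]

# The table defines g only on the tails (x2, ..., x_{n+q}) that actually occur.
g_table = {circ[1:]: circ[0] for circ in valid}

def F(circ):
    # F(x1, x2, ...) = x1 - g(x2, ...); defined only where the table is.
    # Anything off the table would require interpolation, left open here.
    return circ[0] - g_table[circ[1:]]

assert all(F(c) == 0 for c in valid)   # F vanishes on every table entry
```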

 

Is the idea of F=0 only a rule that needs to be satisfied, not a constraint on F?

There are two probabilities being discussed here. One is the probability something is an entry in that table of known data (which must be zero or one); the other is the probability a circumstance will become a member of the known data (that is bounded by zero and one) and is entirely defined by the explanation. What is important is that, in order for an explanation to be valid, every entry in the table for every specific “t” index must lie within the range of values given by the explanation based upon the information lacking that entry: i.e., the expectations predicted by that explanation must be consistent with the known entries of that table.

 

So we need a constraint on [math] \vec{\Psi}(x_1,x_2,x_3,\cdots,x_n,t) [/math] that will ensure that P satisfies the “what is” is “what is” explanation no matter what the form of [math] \vec{\Psi}(x_1,x_2,x_3,\cdots,x_n,t)[/math] is: that is, [math] F(x_1,x_2,x_3,\cdots,x_{n+q})=0 [/math].

Well, of course: the “what is” is “what is” explanation must be valid, considering the fact that it makes no constraint whatsoever on what to expect.

 

Sorry I missed your post -- Dick


Sorry I have been so slow in responding to this post. Let us say I have my reasons and they are somewhat subtle.

 

Sorry about being even slower... This stuff requires that I have an opportunity to concentrate properly and it has been difficult lately.

 

Nevertheless, let us first clarify exactly what I am saying.

 

It is a trivial matter to convert a solution of [math]\sum_i\frac{\partial}{\partial x_i}\Psi_0 =0[/math] into a solution of [math]\sum_i\frac{\partial}{\partial x_i}\Psi_1 =iK_x\Psi_1[/math]. Simple substitution will confirm that if [math]\Psi_0[/math] is a solution to the first equation, [math]\Psi_1=e^{\sum_j \frac{iK_x x_j}{n}}\Psi_0\;\;\left(where \;\;i=\sqrt{-1}\right)[/math] is a solution to the second. Exactly the same relationship goes for the equation on tau.

 

 

The fact that simple substitution will confirm this resides entirely in the product and chain rules of differentiation.

[math]

\left\{\sum_i\frac{\partial}{\partial x_i}\right\} e^{\sum_j \frac{iK_x x_j}{n}}\Psi_0=\left\{\sum_i\frac{\partial}{\partial x_i} e^{\sum_j \frac{iK_x x_j}{n}}\right\}\Psi_0 +e^{\sum_j \frac{iK_x x_j}{n}}\left\{\sum_i\frac{\partial}{\partial x_i}\Psi_0\right\}

[/math]

 

but [math]\sum_i\frac{\partial}{\partial x_i}\Psi_0=0[/math] (our original solution) and the differentiation of the exponential term yields a sum over [math]\sqrt{-1}[/math] times the [math]K_x[/math] divided by “n” times the original exponential term (to get rid of the fact that the result is n terms).

 

Okay, so, I guess it's because of that ambiguity with the "i", that you are using "j" as an index in the exponent, so shouldn't that example rather be written as;

 

[math]

\left \{ \sum_j\frac{\partial}{\partial x_j}\right \}

e^{\sum_j \frac{iK_x x_j}{n}}\Psi_0

=

\left\{\sum_j\frac{\partial}{\partial x_j} e^{\sum_j \frac{iK_x x_j}{n}}\right\}\Psi_0 +e^{\sum_j \frac{iK_x x_j}{n}}\left\{\sum_j\frac{\partial}{\partial x_j}\Psi_0\right\}

[/math]

 

Then, because [imath]\sum_j\frac{\partial}{\partial x_j}\Psi_0=0[/imath], the last term drops out;

 

[math]

\left \{ \sum_j\frac{\partial}{\partial x_j}\right \}

e^{\sum_j \frac{iK_x x_j}{n}}\Psi_0

=

\left\{\sum_j\frac{\partial}{\partial x_j} e^{\sum_j \frac{iK_x x_j}{n}}\right\}\Psi_0

[/math]

 

Then it seems all that is left is for me to understand how;

 

[math]

\left\{\sum_j\frac{\partial}{\partial x_j} e^{\sum_j \frac{iK_x x_j}{n}}\right\}\Psi_0

=

iK_x \left \{ e^{\sum_j \frac{iK_x x_j}{n}}\Psi_0 \right \}

[/math]

 

I have difficulties understanding what you mean when you say "differentiation of the exponential term yields a sum over [math]\sqrt{-1}[/math] times the [math]K_x[/math] divided by “n” times the original exponential term"... I have a vague memory of doing something similar to this earlier but I can't remember it anymore, and I couldn't find the old posts either... Help!

 

Hope that helps. Sorry about the confusion between “i” the index and “i” the square root of minus one. The placement of “i” as the first coefficient in an exponential would always be interpreted as the square root of minus one, as its role is fairly clearly not an index.

 

Yeah that's what I thought.

 

-Anssi


Okay, so, I guess it's because of that ambiguity with the "i", that you are using "j" as an index in the exponent, so shouldn't that example rather be written as;

 

[math]

\left \{ \sum_j\frac{\partial}{\partial x_j}\right \}

e^{\sum_j \frac{iK_x x_j}{n}}\Psi_0

=

\left\{\sum_j\frac{\partial}{\partial x_j} e^{\sum_j \frac{iK_x x_j}{n}}\right\}\Psi_0 +e^{\sum_j \frac{iK_x x_j}{n}}\left\{\sum_j\frac{\partial}{\partial x_j}\Psi_0\right\}

[/math]

Not quite. You are now using the same index “j” in the two different sums: the one in the differential term and the one in the exponential term. Those sums are independent of one another and should have different indices. The common solution would be to use “k” in one of them, but then it is apt to get confused with the constant “K” in the equation. Life is just too complex here to totally avoid ambiguity. Personally, I like identifying the first “i” in the exponential as the square root of minus one. That context is pretty sure.

 

Then, because [math]\sum_j\frac{\partial}{\partial x_j}\Psi_0=0[/math], the last term drops out;

That assertion is still true, as the mixed indices do not come to bear on that term.

 

[math]

\left \{ \sum_j\frac{\partial}{\partial x_j}\right \}

e^{\sum_j \frac{iK_x x_j}{n}}\Psi_0

=

\left\{\sum_j\frac{\partial}{\partial x_j} e^{\sum_j \frac{iK_x x_j}{n}}\right\}\Psi_0

[/math]

Except for the fact that the two sums should have different indices.

 

Essentially the only thing which is bothering you is that differential of the exponential term. What is important for you to be aware of is the fact that the differential of the function [math]e^x[/math] is [math]e^x[/math] and the fact that [math]e^{x_1+x_2}=e^{x_1}e^{x_2}[/math]. Adding the fact that

[math]

\frac{d}{dz}f(x)=\left[\frac{d}{dx}f(x)\right] \frac{dx}{dz}

[/math]

 

one can work out the term that bothers you quite easily. There is no need to worry about [math]\Psi_0[/math] as that term is outside the curly brackets and is thus not being differentiated (you have already discovered that the differential of it vanishes). Thus you can write:

[math]

\sum_j\frac{\partial}{\partial x_j} e^{\sum_i \frac{iK_x x_i}{n}} = \left\{\frac{\partial}{\partial x_1}+\frac{\partial}{\partial x_2} +\cdots+\frac{\partial}{\partial x_n}\right\}e^{\frac{iK_x x_1}{n}}e^{\frac{iK_x x_2}{n}}e^{\frac{iK_x x_3}{n}}\cdots e^{\frac{iK_x x_n}{n}}

[/math]

 

Each partial in the sum differentiates only the term which has the same x dependence and the result as per

[math]

\frac{d}{dz}f(x)=\left[\frac{d}{dx}f(x)\right] \frac{dx}{dz}

[/math]

is as follows

[math]

\frac{\partial}{\partial x_j}e^{\frac{iK_x x_j}{n}}= e^{\frac{iK_x x_j}{n}}[iK_x]\frac{1}{n}

[/math]

 

Notice that the term from the product of exponential terms which is differentiated results in identically the same exponential term, thus the resultant product of exponential terms is not changed in any way. The only thing which happens is that each of the n terms is now multiplied by [math]\frac{1}{n}iK_x[/math]. Since there are n terms, the net result is simply the factor [math]iK_x[/math]. Thus the net result of the differentiation is exactly

[math]

iK_x e^{\sum_j \frac{iK_x x_j}{n}}\Psi_0 =iK_x\Psi_1

[/math]

 

Actually, there are an infinite set of such conversions. In this case (taken as an example that such a solution exists) I have set all the exponential coefficients the same. That is not necessary; all that is really necessary is that they sum to the desired K. The issue is that the desired form of solution [math]\Psi_1[/math] can be recovered from [math]\Psi_0[/math].
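This, too, can be checked numerically. A sketch in Python (my own: three coefficients chosen arbitrarily so long as they sum to K_x, with central differences standing in for the partials):

```python
# Numerical check that the exponential coefficients need not all be K/n;
# any c1, c2, c3 with c1 + c2 + c3 = K gives a solution.
import cmath

K = 1.7
c = (0.5 * K, 0.3 * K, 0.2 * K)   # arbitrary split summing to K
h = 1e-6                          # finite-difference step

def psi0(x1, x2, x3):
    return (x1 - x2) * (x2 - x3)  # satisfies sum_j dPsi0/dx_j = 0

def psi1(x1, x2, x3):
    return cmath.exp(1j * (c[0]*x1 + c[1]*x2 + c[2]*x3)) * psi0(x1, x2, x3)

def sum_partials(f, x1, x2, x3):
    return ((f(x1 + h, x2, x3) - f(x1 - h, x2, x3))
            + (f(x1, x2 + h, x3) - f(x1, x2 - h, x3))
            + (f(x1, x2, x3 + h) - f(x1, x2, x3 - h))) / (2 * h)

x = (0.3, 1.1, -0.7)
assert abs(sum_partials(psi1, *x) - 1j * K * psi1(*x)) < 1e-6
```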

 

I hope that clears things up a bit.

 

I just commented to my wife that it is too bad you have not been educated in advanced mathematics. I said that, if you had been, our conversations would have been totally unnecessary; you would have deduced exactly what I deduced without asking any questions at all. She said no, that wasn't so: had you actually been sufficiently educated, you probably would have been so brainwashed as to not even think about the issues. She may be right.

 

At any rate, any further questions you have should be asked via private messages as I probably will not be reading this forum very often. Just send me the thread name, page and post number and I will respond. Private messages generate e-mail notices so I will be aware that you have a problem. That goes for anyone else who wants to reach me.

 

Have fun with the boys -- Dick

Edited by Doctordick
