Science Forums

"a Universal Representation Of Rules"


Doctordick


Not quite. You are now using the same index “i” in the two different sums:

 

Ah right of course.

 

Essentially the only thing which is bothering you is that differential of the exponential term. What is important for you to be aware of is the fact that the differential of the function [math]e^x[/math] is [math]e^x[/math] and the fact that [math]e^{x_1+x_2}=e^{x_1}e^{x_2}[/math]. Adding the fact that

[math]

\frac{d}{dz}f(x)=\left[\frac{d}{dx}f(x)\right] \frac{dx}{dz}

[/math]

 

one can work out the term that bothers you quite easily. There is no need to worry about [math]\Psi_0[/math] as that term is outside the curly brackets and is thus not being differentiated (you have already discovered that the differential of it vanishes). Thus you can write:

 

[math]

\sum_j\frac{\partial}{\partial x_j} e^{\sum_i \frac{iK_x x_i}{n}} = \left\{\frac{\partial}{\partial x_1}+\frac{\partial}{\partial x_2} +\cdots+\frac{\partial}{\partial x_n}\right\}e^{\frac{iK_x x_1}{n}}e^{\frac{iK_x x_2}{n}}e^{\frac{iK_x x_3}{n}}\cdots e^{\frac{iK_x x_n}{n}}

[/math]

 

Each partial in the sum differentiates only the term which has the same x dependence and the result as per

[math]

\frac{d}{dz}f(x)=\left[\frac{d}{dx}f(x)\right] \frac{dx}{dz}

[/math]

 

is as follows

 

[math]

\frac{\partial}{\partial x_j}e^{\frac{iK_x x_j}{n}}= e^{\frac{iK_x x_j}{n}}[iK_x]\frac{1}{n}

[/math]
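
 

Summing those n identical terms over j then simply returns the original exponential multiplied by [imath]iK_x[/imath]; spelling out that last step,

[math]

\sum_j\frac{\partial}{\partial x_j} e^{\sum_i \frac{iK_x x_i}{n}} = \sum_j \frac{iK_x}{n}e^{\sum_i \frac{iK_x x_i}{n}} = iK_x e^{\sum_i \frac{iK_x x_i}{n}}.

[/math]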

 

Okay, if that's supposed to be [math] \frac{\partial}{\partial x_j}e^{\frac{iK_x x_j}{n}}= e^{\frac{iK_x x_j}{n}}[iK_x]\frac{1}{n} [/math] then yes I got the same result now. Thanks.

 

-Anssi


Okay, if that's supposed to be [math] \frac{\partial}{\partial x_j}e^{\frac{iK_x x_j}{n}}= e^{\frac{iK_x x_j}{n}}[iK_x]\frac{1}{n} [/math] then yes I got the same result now. Thanks.

Yes, you are correct. I have fixed the post you refer to.

 

Thanks -- Dick

Edited by Doctordick

Alright, let's focus on the anti-commutation trick;

 

From the analysis I have presented, the three required constraints are as follows.

[math]

\sum^n_{i=1}\vec{\nabla}_i\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t) = i\vec{k}\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t)

[/math]

 

[math]

\frac{\partial}{\partial t}\vec{\Psi}(x_1,x_2,\cdots,x_n,t)=iq\vec{\Psi}(x_1,x_2,\cdots,x_n,t).

[/math]

 

and the constraint required by there being rules behind which circumstances are possible: i.e., the requirement that there exist a function F which discriminates between the circumstances which can and cannot occur.

[math]

F=\sum_{i\neq j} \delta(\vec{x}_i-\vec{x}_j) = 0.

[/math]

 

These three mathematical constraints can be cast into a single mathematical constraining relationship via a rather simple mathematical trick. If one defines the following mathematical operators (both the definition of “[a,b]” and the specific alpha and beta operators):

[math]

[\alpha_{ix},\alpha_{jx}]\equiv \alpha_{ix}\alpha_{jx}+\alpha_{jx}\alpha_{ix}=\delta_{ij}

[/math]

 

[math]

[\alpha_{i\tau},\alpha_{j\tau}]=\delta_{ij}

[/math]

 

[math]

[\beta_{ij},\beta_{kl}]=\delta_{ik}\delta_{jl}

[/math]

 

[math]

[\alpha_{ix},\beta_{kl}]=[\alpha_{i\tau},\beta_{kl}]=0

[/math]

 

where [math]\delta_{ij}[/math] equals one if [math]i=j[/math] and zero if [math]i\neq j[/math]. This requires these mathematical operators to anti-commute with one another and requires their squares to be one half. These mathematical constructs are closely related to what is called Lie algebra (pronounced, “lee” after Sophus Lie). At the moment, we are only concerned with the anti-commutation property as it allows us to mathematically wrap all four of the above constraints into a single equation for [math]\vec{\Psi}[/math]
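
 

Spelling that requirement out for the alpha operators (the alpha-tau and beta operators work in exactly the same way): setting [math]i=j[/math] and [math]i\neq j[/math] in the defining relation gives, respectively,

[math]

2\alpha_{ix}^2=\delta_{ii}=1\;\;\Rightarrow\;\;\alpha_{ix}^2=\frac{1}{2} \qquad \text{and} \qquad \alpha_{ix}\alpha_{jx}=-\alpha_{jx}\alpha_{ix}\;\;(i\neq j).

[/math]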

 

All we need do is require the constraint on both alpha and beta operators that their sums over all elements of every circumstance be zero; explicitly,

[math]

\left\{\sum_i \vec{\alpha}_i \right\}\vec{\Psi}= \left\{\sum_{i\neq j}\beta_{ij}\right\}\vec{\Psi}= 0

[/math]

 

where [math]\vec{\alpha}_i = \hat{x}\alpha_{ix}+\hat{\tau}\alpha_{i\tau}[/math]. (Note that this vector construct lies in the x, tau space, not in the abstract space of [math]\vec{\Psi}[/math].) We then make the simple constraint that we are working with [math]\vec{\Psi}[/math] expressed in the specific x, tau space where the sum of the “momentum” of all the elements in every circumstance is zero. (Note that this is actually no constraint on the problem as, once we have a solution [math]\vec{\Psi}[/math] expressed in that space, a simple Fourier transform can be used to produce the solution in any other frame of reference.)

 

So let's make sure I have this absolutely right.

[math]\vec{\alpha}_i[/math] represents the position of the i'th element of the circumstance? (This should probably be explicitly stated in the OP)

Thus, requiring that [imath]\left\{\sum_i \vec{\alpha}_i \right\}\vec{\Psi}= 0[/imath] just constitutes a choice of where to put the origin of the [imath]x, \tau[/imath] space?

 

The equation of interest is the following:

[math]

\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}+\sum_{i\neq j}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}= K\frac{\partial}{\partial t}\vec{\Psi}=iKq\vec{\Psi}

[/math]

 

Note that [math]\delta(\vec{x}_i -\vec{x}_j)\equiv \delta(x_i -x_j)\delta(\tau_i -\tau_j)[/math].

 

Yup.

 

It is almost trivial to prove that the above equation satisfies the constraints expressed above. First, the right hand relationship divided by K is exactly the constraint

[math]

\frac{\partial}{\partial t}\vec{\Psi}(x_1,x_2,\cdots,x_n,t)=iq\vec{\Psi}(x_1,x_2,\cdots,x_n,t).

[/math]

 

on each component of [math]\vec{\Psi}[/math] in the abstract vector space of interest.

 

Yup.

 

I will explicitly show the algebra necessary to the remainder of the proof.

 

First (from the left) multiply the equation of interest by [math]\alpha_{kx}[/math]. In the original equation, whatever k is chosen, that explicit term appears only once: i.e., the term where i=k. By definition, that operator anti-commutes with every alpha and beta operator in the entire equation except for [math]\alpha_{ix}[/math]. For that specific term (when i=k) [math]\alpha_{kx}\alpha_{ix}=1-\alpha_{ix}\alpha_{kx}[/math]: thus, what happens is that every term of the left hand side of that equation simply changes sign and one additional term is generated (the specific term where i=k is duplicated without an alpha operator). The result of the multiplication is (after [math]\alpha_{kx}[/math] is commuted to the far right so as to operate directly on [math]\vec{\Psi}[/math])

[math]

-\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}+\sum_{i\neq j}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\alpha_{kx}\vec{\Psi}+\frac{\partial}{\partial x_k}\vec{\Psi}= K\frac{\partial}{\partial t}\alpha_{kx}\vec{\Psi}=iKq\alpha_{kx}\vec{\Psi}

[/math]
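
 

To spell out the i=k manipulation just described:

[math]

\alpha_{kx}\alpha_{kx}\frac{\partial}{\partial x_k}\vec{\Psi}=\left[1-\alpha_{kx}\alpha_{kx}\right]\frac{\partial}{\partial x_k}\vec{\Psi}= \frac{\partial}{\partial x_k}\vec{\Psi}-\alpha_{kx}\frac{\partial}{\partial x_k}\alpha_{kx}\vec{\Psi},

[/math]

which is where the lone [math]\frac{\partial}{\partial x_k}\vec{\Psi}[/math] term comes from (the alpha operators are constants with respect to the differentiation, so they pass freely through the derivative).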

 

If one then sums that resulting equation over k, every term will vanish (because of the fact that the sum over the alpha operators taken over all elements vanishes) except for that single term, [math]\frac{\partial}{\partial x_k}\vec{\Psi}[/math] which lacks any alpha or beta operator. The final result, as a consequence of that sum over k, becomes,

[math]

\sum_k \frac{\partial}{\partial x_k}\vec{\Psi}=0.

[/math]
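
 

Explicitly (this just spells out the statement above), the operator terms drop out because the x component of the earlier constraint gives [math]\left\{\sum_k \alpha_{kx}\right\}\vec{\Psi}=0[/math], so that

[math]

-\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}+\sum_{i\neq j}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\left\{\sum_k \alpha_{kx}\right\}\vec{\Psi}=0=iKq\left\{\sum_k \alpha_{kx}\right\}\vec{\Psi}.

[/math]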

 

Yup.

 

Exactly the same thing happens when we multiply the original equation by [math]\alpha_{k\tau}[/math] and again sum over k. These two operations taken together yield exactly the constraint

[math]

\sum^n_{i=1}\vec{\nabla}_i\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t) = i\vec{k}\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n,t)

[/math]

 

when [math]\vec{k}=0[/math]: i.e., when the sum over the momentum in the x, tau space vanishes.
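
 

Putting the two results together, just to make the step explicit:

[math]

\sum_k \frac{\partial}{\partial x_k}\vec{\Psi}=0 \quad\text{and}\quad \sum_k \frac{\partial}{\partial \tau_k}\vec{\Psi}=0 \quad\Longrightarrow\quad \sum^n_{k=1}\vec{\nabla}_k\vec{\Psi}=0,

[/math]

which is exactly the gradient constraint with [math]\vec{k}=0[/math].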

 

Yup.

 

Left multiplication of the original equation with the [math]\beta_{kl}[/math] operator followed by a sum over k and l (where [math]k\neq l[/math]) results in exactly the final constraint.
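
 

Spelled out, the only term surviving that multiplication and sum is

[math]

\left\{\sum_{k\neq l}\delta(\vec{x}_k -\vec{x}_l)\right\}\vec{\Psi}=0,

[/math]

which enforces the rule constraint [math]F=\sum_{i\neq j}\delta(\vec{x}_i-\vec{x}_j)=0[/math] on every circumstance for which [math]\vec{\Psi}[/math] does not vanish.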

 

Yup.

 

That is, we may state unequivocally that it is absolutely necessary that any algorithm which is capable of yielding the correct probability for observing any given pattern of data in any conceivable problem to be explained must obey the relation deduced above, which constitutes my fundamental equation:

[math]

\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}+\sum_{i\neq j}\beta_{ij}\delta(\vec{x}_i -\vec{x}_j) \right\}\vec{\Psi}= K\frac{\partial}{\partial t}\vec{\Psi}=iKq\vec{\Psi}

[/math]

 

This constraint follows from the definition of "an explanation" and nothing else. If anyone finds fault with that deduction, please let me know.

 

This was my second time through this and it all looks valid to me, as far as I can tell.

 

Now that my memory is refreshed, I can comment on Qfwfq's complaint. After review, I am even more convinced this is simply due to misunderstanding regarding what DD is trying to prove exactly. As you say Qfwfq, you don't find that to be perfectly clear from the presentation.

 

So I think this is fairly simple to solve;

 

Regarding sufficiency, I have nothing to offer beyond showing that special relativity, general relativity, Schrödinger’s equation, classical mechanics, Dirac’s equation, Maxwell’s equations and some important nuclear equations can be deduced by making some rather standard approximations.

My opinion is that these things you are offering as support are due to the fact that you choose the Lie algebra, even though it doesn't follow of necessity from your premises.

 

First, let me point out that using anti-commuting operators is supposed to be a move which preserves the generality of the presentation. It is not in any sense a necessary move, and it should just be seen as a trick to bring the separate differential constraints into the form of one equation.

 

If they do preserve the generality, then they are just one of many ways to represent the constraints, and whatever role they play in the process of DD's deductions of a specific explanation/formulation/terminology (such as Dirac's equation) only implies that that specific explanation, which shares the same terminology, is also connected to the same underlying constraints. (Note that the choices made during the deductions actually constitute assumptions about the underlying information. As long as a choice preserves the generality, it is validly part of the FE, and not part of the specific deductions.)

 

That is, while there probably exist many ways to express the symmetry constraints without losing generality, they would just take different routes to deducing Dirac's equation.

 

The above must be true as long as the anti-commuting operators truly do preserve generality with respect to the underlying information. Which is to say, as long as they do not place any undefendable requirements on what that information is like: that is, as long as all expectations are ultimately connected to some sort of recurring activity, and as long as the underlying information itself remains completely undefined.

 

A little thought should make these connections easy to see, as long as you keep in mind that nothing is being argued about reality existing in such and such a form.

 

As I have said many times, I take mathematics to be a given collection of internally consistent structures. I leave arguments as to the validity of those structures to the experts. I merely make use of them!

That's exactly what I've been noticing. Trouble is you manifest having no idea of how mathematics can and can't be used for drawing conclusions about reality.

 

That comment implies you have still been thinking about how this thing explains reality, or have been viewing the definitions of modern physics as necessary to the explanation of reality. That is confusing, because elsewhere you have from time to time seemed to express the opposite view, so I don't pretend to understand exactly what you are thinking.

 

But this gets us to the second point, which is equally important. If this choice of using anti-commuting operators does not preserve the generality of the presentation, then it would be important to recognize exactly what sort of assumption it constitutes about the underlying data. If it is indeed an undefendable assumption, it would imply that that assumption is necessary in order to arrive at modern physics. It could imply that we know something explicitly about the structure of reality. That would also be a very interesting result.

 

Or it could imply that this is just an arbitrary choice in the set of assumptions that leads to a particular terminology (and should in this case simply be moved to be part of the deduction of Dirac's equation).

 

Being that these are just some mathematical definitions which allow a different representation of the same thing, I am unable to see this as anything but yet another set of algebraic steps standing between the FE and the separate equations. I am unable to see what the anti-commutation assumes about the underlying information. It does not seem any different from the host of other definitions regarding the notation.

 

So what I called for before was for you to actually state what it is, if you can see something undefendable going on here. I.e., what sort of undue constraint does this move place on the underlying information?

 

-Anssi


That comment implies you have still been thinking about how this thing explains reality,
No, I was referring to the fact that Dick claims something which has implications about reality. This is not the same thing as:
or have been viewing the definitions of modern physics as necessary to the explanation of reality.
Now I haven't quite exactly expressed the opposite view but I suspect there is no point in getting back into pointless discussions.

 

(and should in this case simply be moved to be part of the deduction of Dirac's equation)
I kinda said that, also pointing out the risk of Dick not having much left at all.

 

Being that these are just some mathematical definitions which allow a different representation of the same thing, I am unable to see this as anything but yet another set of algebraic steps standing between the FE and the separate equations. I am unable to see what the anti-commutation assumes about the underlying information. It does not seem any different from the host of other definitions regarding the notation.

 

So what I called for before was for you to actually state what it is, if you can see something undefendable going on here. I.e., what sort of undue constraint does this move place on the underlying information?

I don't think there is much point in saying what these things mean about the underlying information. I still have doubts about the eigenvalue equations on which he makes the choice of Lie algebra; they rely on the dependence of the phase on the parameter being continuous and differentiable, even before being linear (which causes the single eigenvalue for each of them). If the defence is that these requisites are obtainable by the arbitrariness of labelling, it seems to me there is just no point in the whole analysis. Either that, or universality is already lost before writing those eigenvalue equations.

 

In any case, it seems he has just arbitrarily "done a bit of math" and I don't see what there is to write a song and dance about. :shrug:


No, I was referring to the fact that Dick claims something which has implications about reality.

 

What implications are those?

 

I don't think there is much point in saying what these things mean about the underlying information.

 

Well you know, the generality is a pretty important aspect of the analysis. If there's a method that is generally valid, then the validity of using that method says nothing about the underlying information.

 

I still have doubts about the eigenvalue equations on which he makes the choice of Lie algebra; they rely on the dependence of the phase on the parameter being continuous and differentiable, even before being linear (which causes the single eigenvalue for each of them).

 

I'm sorry but I'm not familiar with the terminology that you are using. I understand DD's analysis though so if you manage to phrase this in the way I understand, then I'm sure I can comment.

 

If the defence is that these requisites are obtainable by the arbitrariness of labelling, it seems to me there is just no point in the whole analysis.

 

Intuitively, I would have said the same thing. Just a few obvious symmetries related to choices regarding how to represent some information, right?

 

They come into play during the deductions of specific definitions. When you make one ad hoc assumption (such as "the expectations associated with defined entity X are not affected by the rest of the universe"), it will have an effect on what the other definitions can be via these symmetry requirements.

 

After thinking about it, it became quite clear to me why you can just as well call these symmetry arguments "self-coherence requirements"; they ensure that your set of definitions "fits together", while allowing any set of definitions that "fits together". In that sense, it is sort of similar to how a theoretical physicist looks for new defined entities/relationships implied by the existing set, only done from a different perspective. It also has similar potential to imply - under careful analysis - what new aspects are to be expected from our modern world views. But I'm getting quite far ahead again, there's no point discussing these yet...

 

Anyway, intuitively it doesn't seem like these symmetries would amount to much, yet, there are incredibly few ad hoc approximations required to get to modern physics from here. And each of those approximations can be seen as a mapping choice, valid for any sort of underlying undefined information.

 

I'm guessing that...

 

Either that, or universality is already lost before writing those eigenvalue equations.

 

...you say that because you think that common sense dictates there must be some hidden assumptions somewhere, otherwise the result would be impossible. Well, the challenge is up, find that fishy bit. I can't find it myself and I have been looking very carefully.

 

In any case, it seems he has just arbitrarily "done a bit of math" and I don't see what there is to write a song and dance about. :shrug:

 

But do you understand what the claim is though? If you do, I'm sure you agree that if it is valid, then there is certainly a lot to dance about?

 

-Anssi


What implications are those?
The ones you two are preaching. The claim you mention at the end of the same post.

 

Well you know, the generality is a pretty important aspect of the analysis. If there's a method that is generally valid, then the validity of using that method says nothing about the underlying information.
I was replying to your query about the ad hoc choices. You asked of "what they mean about the underlying information" but there's no point in saying what. To make it simpler, take the choices all the way to the Dirac equation; would you ask me what they mean? The question would be moot, obviously, they mean the same as specifying the Dirac equation. Would you ask me what colour the King's white horse is? :shrug:

 

I'm sorry but I'm not familiar with the terminology that you are using. I understand DD's analysis though so if you manage to phrase this in the way I understand, then I'm sure I can comment.
Ask your Master. Alternatively you could look up those terms and recognize the concepts in the analysis, but I'm not sure what you would make of the point without more experience. OTOH Dick should be able.

 

Anyway, the eigenvalues are [imath]iq[/imath] and the components of [imath]i\vec{k}[/imath].

 

Intuitively, I would have said the same thing.
Which one? That there is just no point in the whole analysis? If that's so, why are you defending it to the death?

 

In these paragraphs you seem quite confused about what I'm saying and you essentially repeat the usual things without adding anything new. I've no time for going round in circles again and again.

 

Anyway, intuitively it doesn't seem like these symmetries would amount to much, yet, there are incredibly few ad hoc approximations required to get to modern physics from here.
Just like theoretical physicists have been saying for quite a while, except you seem to be confusing the concepts of approximations and choices.

 

And each of those approximations can be seen as a mapping choice, valid for any sort of underlying undefined information.
Where does the mapping choice occur, then? It seems you confirm the first option anyway, so there's no use addressing your next point (as if I hadn't already said what the "fishy bit" is). If you are convinced that the arbitrariness of labelling suffices to defend those choices then the "underlying undefined information" isn't information at all and there's no use in the whole analysis.

 

But do you understand what the claim is though?
Ha. You guys never specify it well and you dodge and hop when confronted with objections. Is it an implication about reality or is it not?

Hi AnssiH,

 

I have difficulties understanding what you mean when you say "differentiation of the exponential term yields a sum over [math]\sqrt{-1}[/math] times the [math]K_x[/math] divided by “n” times the original exponential term"... I have a vague memory of doing something similar to this earlier but I can't remember it anymore, and I couldn't find the old posts either... Help!

 

As DD didn't answer my question about what the sum over an imaginary number means maybe you would like to reveal how you reconciled this?


As DD didn't answer my question about what the sum over an imaginary number means maybe you would like to reveal how you reconciled this?

Laurie, there's not much point in this and that's why Dick couldn't be bothered with the question.

 

I can understand Anssi getting confused, but people accustomed to a certain kind of computation easily recognize the role of symbols by context and find no trouble when indices are the same letter as something else or even as each other. As Dick said to Anssi, life is too complex to avoid these things and he's right, especially when complex numbers are involved! :lol: It isn't what the real problem is (nor even the complex one, only the imaginary one).

OK, I'll quit being an eeeeedjit. :P


The ones you two are preaching. The claim you mention at the end of the same post.

 

Well I'm quite sensitive to calling those things implications about reality, since clearly they are not, and interpreting them that way is very harmful. They are implications about our methods of "understanding" (=predicting) reality. That is a pretty important distinction, and I certainly hope it is what you had in mind when you wrote "implications about reality". If you see these implications as practically equal to making claims about reality, you have something quite topsy turvy in your head about what this thing is.

 

I was replying to your query about the ad hoc choices. You asked of "what they mean about the underlying information" but there's no point in saying what. To make it simpler, take the choices all the way to the Dirac equation; would you ask me what they mean? The question would be moot, obviously, they mean the same as specifying the Dirac equation. Would you ask me what colour the King's white horse is? :shrug:

 

If you are referring to the fact that the validity of these ad hoc choices does not tell us anything at all about the underlying information itself, but rather just tells us about the choices we have made in our interpretation of that information, then that's exactly what I was referring to as well.

 

That is, the general validity of an ad hoc choice just means it can be seen as a general method for categorizing any information, just like the ad hoc choice of using a base-ten number system doesn't tell us anything about the nature of the things we represent with such a number system.

 

Anyway, the eigenvalues are [imath]iq[/imath] and the components of [imath]i\vec{k}[/imath].

 

Ah.

 

Well the question there is also, is it possible to cast any expectations - related to recurring patterns in some undefined information - into the notation and definitions behind [imath]P = \vec{\Psi}^{\dagger} \cdot \vec{\Psi}[/imath]. Much of the length of DD's presentation arises from the discussions of why these definitions do not place any constraints on what the underlying information can be; rather, they represent choices regarding our representation form.

 

Which one? That there is just no point in the whole analysis?

 

That it doesn't seem like the "arbitrariness of labeling" could have such far-reaching consequences. After thinking about it, it's not surprising at all anymore; I feel like I quite understand how it plays out, for the most part anyway.

 

Just like theoretical physicists have been saying for quite a while, except you seem to be confusing the concepts of approximations and choices.

 

That is because, in terms of this analysis, the difference amounts to pretty much semantics... I.e., in terms of discussing a solution to the FE, any "choice" can be called "an approximation" or "an assumption" just as well. Like I said in the next paragraph you quoted, "any approximation can be seen as a mapping choice".

 

Where does the mapping choice occur, then?

 

I wouldn't know, nor would I know which choice should be considered to be the "first choice that was made". Essentially this is about arriving at a self-coherent set of definitions which support each other. Quite like the definitions behind "speed", "length" and "time", actually (we just go into somewhat more complex issues). In the deductions DD basically makes a choice, and then uses algebra to draw out the not-so-obvious consequences of that choice for our other definitions. E.g. defining a certain circumstance in this notation as the measure of "mass" must have quite specific consequences for how mass manifests itself against other definitions (such as energy), and so on and so forth.

 

Ha. You guys never specify it well and you dodge and hop when confronted with objections. Is it an implication about reality or is it not?

 

It's not! This has been repeated so many times it's just not even funny... None of this amounts to any ontological implications or knowledge of reality in any sense. This is not about "what reality is like". There is nothing that this will tell about reality. Everything is an investigation into the relationships between definitions, which we make entirely inside our minds. Think of these defined entities as "containers" of expectations associated with some data patterns, i.e. data patterns that are taken to "mean" there exist such and such "defined objects". Think of the arguments as "when we make this definition - which is a choice always available to us - it leads to such and such properties in our world views", which just so happen to be the properties that modern physics takes as properties of reality (in some sense or another).

 

I would have thought you would have caught all this by now, it having been mentioned at least 10 times.

 

-Anssi


Hi Laurie, I think Qfwfq missed what you were asking about.

 

That's not to say I'd be completely sure either... But I'll try to respond!

 

What type of number is a sum over i times anything?

 

If your equation was straight you would be able to remove all the i's on both sides and get a non imaginary result.

 

The i is just a way to express algebraic logic. A specific result is not the issue here. But there is an important reason for using the imaginary component in the algebra.

 

In a nutshell, it's the definition [imath]P = \vec{\Psi}^{\dagger} \cdot \vec{\Psi}[/imath] which allows [imath]\vec{\Psi}[/imath] to take a form such that:

 

[math]

\sum^n_{i=1} \frac{\partial}{\partial x_i}\Psi(x_1,x_2,\cdots,x_n,t)=ik\Psi(x_1,x_2,\cdots,x_n,t)

[/math]

 

without the [imath]P[/imath] itself violating shift symmetry.
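
 

A quick way to see that (this illustration is my own, but it follows directly from the constraint above): if every [imath]x_i[/imath] is shifted by the same amount a, [imath]\vec{\Psi}[/imath] merely picks up a phase factor, and that phase cancels out of P:

[math]

\vec{\Psi}\rightarrow e^{ika}\vec{\Psi} \qquad\Longrightarrow\qquad P=\vec{\Psi}^{\dagger}\cdot\vec{\Psi}\;\rightarrow\;\left(e^{ika}\vec{\Psi}\right)^{\dagger}\cdot\left(e^{ika}\vec{\Psi}\right)=\vec{\Psi}^{\dagger}\cdot\vec{\Psi}=P.

[/math]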

 

You can find the definitions leading to this at:

http://scienceforums.com/topic/22171-conservation-of-inherent-ignorance/

 

And you can find Qfwfq explaining the algebra associated with those definitions to me here:

 

http://scienceforums.com/topic/22171-conservation-of-inherent-ignorance/page__view__findpost__p__303633

 

(go backwards from there)

 

Now, the reasons DD has wound up with a notation where things can be expressed this way most certainly have to do with the desire to relate this to the definitions of modern physics. Again, it is important to think carefully about whether the definitions he makes to get here place any undefendable requirements on the underlying information, or whether it is always possible to make appropriate choices which would lead you to this presentation form (making the presentation form entirely a model).

 

You should view DD's comments about that issue at:

http://scienceforums.com/topic/19202-anybody-interested-in-diracs-equation/page__p__272890#entry272890

 

I.e. it should always be possible to find a way to express your expectations in terms of two orthogonal functions which, along with a few other appropriate (universally valid) choices, would produce the interference effects.

 

I hope I managed to at least point out that you are asking a somewhat loaded question, and I can only hope that you just so happen to already have an appropriate understanding of DD's analysis so you can make sense of this :I Sorry if that's not the case...

 

-Anssi


  • 3 weeks later...

The index “t” exists only in your explanation. The fundamental purpose of this index is to allow for the fact that the information being explained is presumed to be more than you know. This means that the ordering (established by your explanation) need not be the order in which the information was obtained. What you seem to have missed is that t is still an index of circumstances in your explanation. What is important here is that we can establish a functional table of exactly what index goes with each specific circumstance. There is still a “t” coordinate defined in your explanation; we have just added hypothetical elements in order to make that index retrievable via specification of the circumstance only. If you give me the circumstance and your explanation (including that function) I know what time to assign to that circumstance.

 

So the index at which information is added to what we know is determined only by our explanation, and any characteristics that might give the t index some kind of order exist only due to our explanation. You are just allowing for the possibility that any new information will not necessarily have an order relation with how it is added to the list of known circumstances; only the explanation can determine a t index for any set of information, and the t index is not a characteristic of the information.

 

The problem is that in any case where [math]x_i=x_j[/math], mapping the information onto the x axis loses information as the existence of multiple elements vanishes from the data. On the x axis, [math]x_i[/math] and [math]x_j[/math] will map to the same point: i.e., a collection of points on the x axis can not represent such a circumstance. I bring this difficulty up here because we have, above, just discussed a means of overcoming this problem.

 

That is, if two elements occupy the same location they can have no more effect than one element, due to using a function that is not dependent on the actual labeling of the elements but is dependent on the relation of the elements in comparison to each other.

 

Also, if two elements are in the same location there is no way for our explanation to distinguish them and so the only way for such a thing to happen in the first place is for one of the elements to have been added, which would imply that it doesn’t belong in the explanation in the first place.

 

And by solution you are referring to the idea of using a function F that is zero for every possible circumstance and is nonzero for every invalid circumstance. That is it just insures that our explanation satisfies the “what is” is “what is” explanation.

 

The infinite limit in the x case is not so trivial. Extending F to the limit of infinite data would cause the x variables to be continuous, and that continuity brings a bit of a problem into the procedure of adding hypothetical elements. The single most significant step in generating that table of F was adding hypothetical elements such that all circumstances represented in the table were different. When the number of elements in that table is extended to infinity, we run directly into Zeno's paradox. We cannot list an infinite number of cases; thus, in the limit, we cannot know that every x argument in every listed circumstance is different from every other x argument in that circumstance. The argument for hypothetical elements being able to differentiate between circumstances fails.

 

Firstly, how is it that the possibility of infinite information leads to continuous information? Don’t we need a considerably stronger hypothesis to arrive at continuity, even though it only becomes possible once we are considering an infinite amount of information?

 

Secondly, isn’t the issue of making each set unique able to be solved with a finite amount of information, so why can’t we just consider a finite list? Even if we do consider an infinite number of points, don’t we just have to insure a finite set of points to make it unique and be able to differentiate between different circumstances, or is there still the issue of not knowing where the rest of the points are, which may make the sets not unique after all?

 

Then when we do consider an infinite amount of information, all we have to do is make sure that all the elements are in different locations. What happens to the issue of uniqueness of all of the sets of elements the explanation will be indexing by t?

 

As a side note (at this point), since it was the asymmetry under exchange which generated the required vanishing of identical positions in x, tau space, the absence of this asymmetry (or exchange symmetry) must be the characteristic of those additional elements which serve only to yield different probabilities. In essence, an infinite number of exchange symmetric elements may be added to the mix in order to adjust the calculated probabilities to the probabilities implied by the explanation. As opposed to the earlier elements which caused F to fit the underlying data, these additional elements must obey Bose Einstein statistics.

 

Won’t the asymmetric elements also influence the probability given by [math] \vec{\Psi} [/math], so why also use symmetric elements? Is it just a case of considering all possibilities?

 

Also, what will happen when an asymmetric and a symmetric element are exchanged? What kind of symmetry will influence the explanation?


So the index at which information is added to what we know is determined only by our explanation, and any characteristics that might give the t index some kind of order exist only due to our explanation.

Exactly correct; however, you seem to be attaching an underlying significance to this statement which is unwarranted. A “t” index has been defined in my representation of your explanation. My position is that my “t” index is perfectly consistent with the ordinary concept of time. In stating that position, I am recognizing the fact that, in the ordinary concept of time, there can be two very different times attached to almost every definable circumstance: the time at which you come to know of a particular circumstance, and the time your explanation assigns to the event represented by that circumstance.

 

In actual fact you are talking about two different circumstances; one to which you could attach the idea of “experience” and the other being explicitly referred to as part of your overall explanation: i.e., the second is very definitely “presumed”.

 

Look at it this way, discovering “dinosaur tracks” is an “experience”; that they are “dinosaur tracks” (made millions of years ago) is a presumption. Common explanations concern themselves with the latter, not generally the former.

 

In my representation, “the t index is not a characteristic of the information”; it is rather a representation of changes in that underlying information on which the explanation is based.

 

That is, if two elements occupy the same location they can have no more effect than one element.

I think you have missed the point here. I am not talking about the explanation at all (what effect something has is a consequence of the explanation); I am talking about the representation itself. As a mental convenience, I am merely proposing to represent an element “x” of the circumstance as a point on a line. If the only information available to you is “a point on a line” (that being a supposed representation of the information), then how do you know how many elements that point represents? Clearly there is a major flaw in any attempt to use such a representation without providing for a way to resolve this difficulty.

 

 

Also, if two elements are in the same location there is no way for our explanation to distinguish them and so the only way for such a thing to happen in the first place is for one of the elements to have been added, which would imply that it doesn’t belong in the explanation in the first place.

The problem is not your explanation distinguishing between those elements; it is the representation which can not. It should be clear to you that the representation has a major flaw for which I have set forth a solution: i.e., a specific alteration in the representation achieved by adding a tau axis.

 

And by solution you are referring to the idea of using a function F that is zero for every possible circumstance and is nonzero for every invalid circumstance. That is it just insures that our explanation satisfies the “what is” is “what is” explanation.

Exactly correct.

 

Firstly, how is it that the possibility of infinite information leads to continuous information? Don’t we need a considerably stronger hypothesis to arrive at continuity, even though it only becomes possible once we are considering an infinite amount of information?

Again, you are putting the shoe on the wrong foot here. This is apparently an issue almost everyone (Qfwfq being a prime example) gets backwards. The issue is not to cover all possible representations of circumstances but rather to show that the single representation I propose is capable of representing any collection of underlying circumstances.

 

You should comprehend that I am not saying that an infinite amount of information requires continuity; what I am saying is that, if we allow for “all” possibilities (all the way out to infinity) we must include the possibility of adding an infinite number of “x” labels between any two established “x” labels. That is the very essence of the definition of continuity. That is, the possibility of continuity must be included in our analysis and it is that requirement which generates the problem I am talking about. If you can prove continuity does not exist in the infinite limit, then the problem of creating a unique representation has already been solved by the simple addition of hypothetical elements.

 

Secondly, isn’t the issue of making each set unique able to be solved with a finite amount of information, so why can’t we just consider a finite list?

That would presume there existed a state where no more information could be added. At that point we would be “all knowing”. Notice here that my concern is not with the finite data underlying the explanation but rather with the fact that, if an explanation is indeed actually “correct”, that explanation must continue to be correct all the way out to infinity. If it is not “correct” out to infinity, it must fail to be correct for some finite contingent of underlying circumstances: i.e., it can not be a “correct” explanation even if it fits all known circumstances.

 

Even if we do consider an infinite number of points, don’t we just have to insure a finite set of points to make it unique and be able to differentiate between different circumstances, or is there still the issue of not knowing where the rest of the points are, which may make the sets not unique after all?

You seem to be missing the point that we must be able to guarantee that the method, which we are using to insure the collection is unique, is valid all the way to the infinite limit.

 

Then when we do consider an infinite amount of information, all we have to do is make sure that all the elements are in different locations. What happens to the issue of uniqueness of all of the sets of elements the explanation will be indexing by t?

I don't understand your question. There is no need to consider an infinite amount of underlying information in discussing the validity of the explanation: i.e., that it fits the underlying information. What we need to do is guarantee that the method of obtaining uniqueness of the sets of elements indexed by “t” will not fail in the infinite limit. Actually achieving that infinite limit (except for hypothetical elements which are presumed and not known) is never an actual problem. And, for the hypothetical elements, our explanation must provide a way of mathematically estimating the impact of an infinite volume of that hypothetical data (it's part and parcel of the explanation itself).

 

Won’t the asymmetric elements also influence the probability given by [math] \vec{\Psi} [/math], so why also use symmetric elements? Is it just a case of considering all possibilities?

It is not a case of “considering all possibilities” but rather a case of “providing for all possibilities”. We have no idea as to what actual explanations might be correct. We can only confirm their validity; a rather different issue. Thus it is that we really do not know what the correct probabilities should be for circumstances not part of the underlying information being explained. It follows that the representation must be able to represent any possible probability distribution consistent with that underlying information. There can be no constraint on what hypothetical elements are added; thus it follows that symmetry under exchange is allowed for these hypothetical elements.

 

Also, what will happen when an asymmetric and a symmetric element are exchanged? What kind of symmetry will influence the explanation?

Nothing; the symmetry is an aspect of [math]\vec{\Psi}[/math]; once that function is established (consistent with the explanation) it is established. The only mathematical phenomenon of interest here is that

[math]

\vec{\Psi}(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_k, \cdots, \vec{x}_q, \cdots,t)=

-\vec{\Psi}(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_q, \cdots, \vec{x}_k, \cdots,t)

[/math]

 

which is the very definition of asymmetry under exchange. (Note that “x” is shown as a vector because the hypothetical tau axis orthogonal to the x axis has been added to the representation.) The asymmetric nature of the function requires that

[math]

\vec{\Psi}(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_k, \cdots, \vec{x}_q, \cdots,t)

+\vec{\Psi}(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_q, \cdots, \vec{x}_k, \cdots,t)=0

[/math]

 

which clearly requires

[math]

\vec{\Psi}(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_k, \cdots, \vec{x}_k, \cdots,t)

+\vec{\Psi}(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_k, \cdots, \vec{x}_k, \cdots,t)=

2\vec{\Psi}(\vec{x}_1,\vec{x}_2, \cdots, \vec{x}_k, \cdots, \vec{x}_k, \cdots,t) =0 \;\; :

[/math]

 

i.e., [math]\vec{\Psi}[/math] must vanish if [math]\vec{x}_q=\vec{x}_k[/math].

 

This is what we needed to guarantee no two such elements are represented by the same point. That is all it says and there are no other implied constraints other than the fact that these elements must therefore obey Fermi statistics. Clearly, by construction of the function “F”, this constraint is only required by the actual known circumstances which we have to fit if the representation (and thus the explanation being represented) is to be valid; not by the expanded circumstances presumed by the explanation.

 

As an aside I comment that, by standard physics, the outcome of every conceivable experiment performed by any scientist results in consequences directly attributed to instruments constructed via elements obeying Fermi statistics (readings on meters, changes in molecular states, etc.). There exists no experiment whose result can be found without the use of such instruments. The elements obeying Bose statistics are implied via the explanation of the phenomena and are essentially presumed to exist by the accepted explanation of reality.

 

Go look at A simple geometric proof with profound consequences and think about the subtle implications of that proof.

 

Have fun -- Dick


  • 2 weeks later...

I think you have missed the point here. I am not talking about the explanation at all (what effect something has is a consequence of the explanation); I am talking about the representation itself. As a mental convenience, I am merely proposing to represent an element “x” of the circumstance as a point on a line. If the only information available to you is “a point on a line” (that being a supposed representation of the information), then how do you know how many elements that point represents? Clearly there is a major flaw in any attempt to use such a representation without providing for a way to resolve this difficulty.

 

The way I am seeing this, there seem to be two separate issues here that could easily be confused. So let me see if I am understanding them correctly, and the distinction between them.

 

The first one is that the representation must be able to represent any set of elements and can’t represent two elements in the same location. This can be solved by making the representation equivalent to the “what is” is “what is” explanation for any set of elements that is going to be represented. This set of points must be included in any explanation. And this is the issue that you are referring to here.

 

The second issue seems to be that there is still the possibility that at different times (I use the word here only to indicate different values of t ) different elements might actually occupy the same x location and so there must be a way to represent that they are different elements. This is solved by the use of a [math]\tau[/math] axis. By giving elements a different location in this [math] (x_i,\tau_i) [/math] space we insure that we can always distinguish these elements from each other and the “what is” is “what is” explanation can hold in this coordinate system and two elements can occupy the same x location as long as the [math]\tau[/math] location is different.

 

Again, you are putting the shoe on the wrong foot here. This is apparently an issue almost everyone (Qfwfq being a prime example) gets backwards. The issue is not to cover all possible representations of circumstances but rather to show that the single representation I propose is capable of representing any collection of underlying circumstances.

 

So you want to be able to represent any set of information no matter how simple or complex it is. But what about the question of constraints on the explanation? Don’t we also want to insure that for any possible expectations that might be placed on the information there exists an explanation that can represent them?

 

Or is it necessary to add hypothetical elements to the information in order to modify the expectations produced by the explanation? And, we can only expect one explanation to satisfy any finite amount of information being explained and in order to modify the explanation we must add hypothetical elements to the information?

 

The correct function must vanish for every specified point (i.e., the points allowed by the rule being represented by F) in that two dimensional space. The integration over all tau dependence has to do with the calculation of expectations, and not with the rule F is to represent. Thus ignoring how that representation was achieved, seen merely as a function defined over that x tau space, rotation in the plane of that space cannot change the function (all we really have is a set of points which are being used to define that function).

 

So whatever function is chosen to insure that the “what is” is “what is” explanation is followed by the explanation must be scale invariant and can’t depend on the actual values of the function. In a sense it must possess any symmetry that can be derived by performing an invertible transformation on every element being represented.

 

But rotation will convert tau displacement into x displacement. Since tau displacement is an entirely hypothetical component, F simply can not depend upon the actual tau displacement and by the same token neither can F depend upon actual x displacement. Since we have converted F into a function of distances between points, this essentially says that F can not depend upon the actual magnitude of these separations. This should be quite reasonable as, since we are talking about mere numerical labels, multiplication of all labels by some fixed constant cannot change what is being represented.

 

So are you saying that a scaling of the x axis can be looked at as a rotation in the x, [math]\tau[/math] plane? And that the scale symmetry of the x axis in this case results from the rotational symmetry of the [math]\tau[/math] axis and the fact that the [math]\tau[/math] axis was added to the representation only to make it possible to represent any possible set of elements, so that as a result we can choose any scale we want for it?

 

Is there also a scale symmetry coming from the fact that we could have chosen any scale in the first place for our representation or is this a senseless suggestion because only the explanation can define a scale anyhow?

 

Since [math]\delta(x)[/math] only has value for x=0, a power series expansion of F around a distribution satisfying F=0 implies that F may be written

[math]

F=\sum_{i\neq j} \delta(\vec{x}_i-\vec{x}_j) = 0.

[/math]

 

Thus it is that we come to the conclusion that any appropriate collection of rules can be expressed in terms of those hypothetical elements which can exist and that interactions at a distance in our hypothesized space can not exist. As an aside, it is interesting to note that Newton, in his introduction to his theory of gravity, made the comment that it was obvious that interactions at a distance were impossible. I have always wondered exactly what he had in mind when he said that. I take it to mean that, although field theories make some excellent predictions, they cannot be valid in the final analysis and are only an approximation to the correct result.

 

That is, this is a completely general way to define the “what is” is “what is” explanation. It will insure that no two elements occupy the same location, it is not dependent on the values of [math]\vec{x}_i[/math], and any mapping of the elements that doesn’t map two elements to the same location (which we wouldn’t be interested in anyway) will not change its value.

 

So if any element is either included in this function or is antisymmetric, we know that the representation of the information will never have two elements occupying the same location, and so satisfies the “what is“ is “what is“ explanation.


Bombadil, I think you need to sit back and think about what I am doing. To date, I think Anssi is the only person who actually comprehends what I am talking about when I say that I have found a logical path for expressing what science would call an explanation of reality without making any assumptions. The critical issue is the definition of “an explanation”. The real problem appears to be that no one here comprehends the necessity of closely examining their definitions and I suspect that includes you. You appear to be using unexamined assumptions embedded in your world view to interpret what I am saying.

 

In an attempt to make the issue clearer to you, let me bring up an example of people's tendency to avoid examining their definitions. In my original paper (which I tried to get published thirty years ago) I brought up the following example as an attempt to get people to think about this issue.

 

There is a subtle aspect to science unrealized by many scientists. When one designs an experiment, one must be careful to assure that the result is not predetermined by definition: that is, that one is actually checking something of significance. A simple example of what I am talking about can be illustrated by thinking about an experiment to determine if water runs downhill. If one begins that experiment by defining downhill with a carpenter's level, one has made a major error. They have clearly predefined the result of the experiment, as downhill has been defined to be the direction water runs (the bubble being the absence of water). In such a case, it is rather a waste of time to finish carrying out such an experiment no matter how well the rest of the experiment is designed. It should be clear that to do so is nothing more than checking the consistency of one's definitions.

All over the world it is possible to find tourist sites where “water runs uphill” and they continue to attract people on a daily basis in spite of the fact that they are always nothing except simple illusions. These illusions depend upon the fact that most people assume they know what “uphill” means without ever even thinking about their personal definition of the term. Most people think uphill means some kind of deviation from “level” and that “level” and "perpendicular to the local walls" mean the same thing. All one need do is build a building where everything is constructed to correspond to an orthogonal reference frame slightly off from a plumb bob and/or a carpenter's level, in which case these people have no real reference for their definition of level though they think (emotionally) that they do.

 

This is exactly the mistake made by most scientists. As an example, they assume they know what they mean by the terms “space” and “time” without ever even considering how these things should be defined. They emotionally feel that they know what they mean and they regard that feeling as sufficient. I am afraid that Qfwfq is guilty of exactly this error as all of his complaints regarding my presentation are essentially based on exactly that kind of unsupportable assumption.

 

The way I am seeing this, there seem to be two separate issues here that could easily be confused. So let me see if I am understanding them correctly, and the distinction between them.

 

The first one is that the representation must be able to represent any set of elements and can’t represent two elements in the same location.

I get the feeling that you are working with the idea that “location” is a concept you understand. You are bringing that concept (and many of the ideas which go with it) into my analysis without seriously worrying about the issue. In my work, I have not presumed the existence of either space or time. I have defined something which I call time; but it is certainly not the common concept of time (although it will turn out to be exactly the same parameter used by physicists in their expressions of the rules of reality they have discovered). The identification of my “time” with their usage of the term is quite simple so I haven't really worried about it much.

 

The idea of “location” is a much more complex concept and should not be brushed over without thought. The index “x” is nothing more than a number assigned to the actual underlying ontological element which the index “i” refers to (that reference is defined by your explanation).

 

Thus it is that my representation refers to specific circumstances via an arbitrary collection of [imath]x_i[/imath] indices.

 

This can be solved by making the representation equivalent to the “what is” is “what is” explanation for any set of elements that is going to be represented. This set of points must be included in any explanation. And this is the issue that you are referring to here.

You have made an unreasonable mental jump here. There is no argument for presuming that such a “set of points has to be included in any explanation”, including the “what is” is “what is” explanation. These are nothing more or less than numerical labels. What must be included in any explanation is whatever it is that is represented by the “i” index. As it turns out, many of the elements of the physicists' explanations are associated with the concept “location”. It will indeed turn out that the “x” index I have defined will end up bearing a very strong resemblance to exactly the location parameter used by physicists in their expressions of the rules of reality. But that is not a logical presumption at this point.

 

The issue here is that I have decided to represent that collection of “x” indices as points on a defined mathematical frame of reference (an x axis) for the simple reason that human minds are quite adept at thinking about patterns of points in a space; they do that better than they handle a bare collection of numbers anyway. The issue which has to be handled is that the specified circumstance may include two or more cases of the same index. Take, for example, this post. If the number “i” were to refer to the position of a letter in the standard English alphabet, in your particular explanation of this post the number “4” would appear a number of times. In view of the fact that the correct explanation of the post might give different meanings to different occurrences (for instance, it might be in code), I want to think of the information in terms of the numerical label “x” (undefined until I find another explanation). If I were to represent the post with points on an x axis, how would you determine (from that representation) how many “e's” those points represented? As it stands, the representation is simply inadequate to the circumstances.
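As a minimal sketch of that inadequacy (my own illustration only; the message, the 1-indexed alphabet convention and the variable names are assumptions made purely for the example):

# Assign each letter its position in the English alphabet (one possible "i" labelling).
message = "see"
i_labels = [ord(c) - ord('a') + 1 for c in message.lower()]   # [19, 5, 5]

# Treating those labels as bare points on an x axis collapses the repeats:
points_on_x_axis = set(i_labels)                              # {19, 5}
print(points_on_x_axis)   # the two occurrences of "e" have become a single point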

 

The second issue seems to be that there is still the possibility that at different times (I use the word here only to indicate different values of t ) different elements might actually occupy the same x location and so there must be a way to represent that they are different elements.

Again you shouldn't really speak of this in terms of “location” (unless you are specifically speaking of a position on that x axis used to represent that “i” index). The real issue here is that just because your explanation defines that element to be a single reference, you must admit of the possibility that there might exist an explanation which would regard that specific circumstance to be multiple elements (again, perhaps if you thought of it as a secrete code you might comprehend that what you thought of as multiple occurrences of “e” could be seen as occurrences of different elements, which ones depending on other embedded information).

 

This is solved by the use of a [math]\tau[/math] axis. By giving elements a different location in this [math] (x_i,\tau_i) [/math] space we ensure that we can always distinguish these elements from each other; the “what is” is “what is” explanation can hold in this coordinate system, and two elements can occupy the same x location as long as the [math]\tau[/math] location is different.

This is correct and you should be able to see this as a way of solving the problem I have just presented, not as a way of importing your world view into the analysis.
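A correspondingly minimal sketch of that resolution (again my own illustration; the tau values are arbitrary and serve only to keep the labels distinct):

# Give every occurrence its own (x, tau) pair; repeated x values remain
# distinguishable because their tau components differ.
message = "see"
labelled = [(ord(c) - ord('a') + 1, tau) for tau, c in enumerate(message.lower())]
print(labelled)                              # [(19, 0), (5, 1), (5, 2)]
print(len(set(labelled)) == len(labelled))   # True: no two elements share a location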

 

So you want to be able to represent any set of information no matter how simple or complex it is. But what about the question of constraints on the explanation?

If you go back to the definition of an explanation, the issue is that there exists a function [math]\vec{\Psi}[/math] which represents your explanation by yielding exactly the same probability for any possible circumstance as does your explanation. It is by this means that the constraints implied by your explanation are represented in that function. I am simply not concerned with the constraints you proposed in order to make your explanation make sense. I am concerned only with the constraints implied by the definition of “an explanation”: i.e., no matter how much additional information is obtained (into the indefinite future by my definition of time) the explanation will still be consistent with the entirety of the information to be explained.

 

Don’t we also want to ensure that for any possible expectations that might be placed on the information there exists an explanation that can represent it?

Again, you are jumping the gun here. I don't see how you can assert that there has to exist an explanation for any set of expectations. My assertion is that, if you do have a valid explanation, then you have a method of obtaining internally consistent probabilities for your expectations and the existence of that process itself defines a function and guarantees the existence of that function. You seem to be trying to assert the converse: i.e., you are asserting that there exists an explanation for every possible [math]\vec{\Psi}[/math] function. I am making no such assertion. Though it certainly could be true. :shrug:

 

Or is it necessary to add hypothetical elements to the information in order to modify the expectations produced by the explanation?

Necessity is not the issue! Certainly hypothetical elements can modify [math]\vec{\Psi}[/math] so, if we want to be able to produce “any internally consistent set of expectations”, we pretty well have to allow additions of hypothetical elements.

 

And, we can only expect one explanation to satisfy any finite amount of information being explained and in order to modify the explanation we must add hypothetical elements to the information?

Here you go again. You are clearly thinking in terms of the standard world view where it is presumed that new information can invalidate your explanation. I am not denying that fact; what I am saying is that, if such a thing does occur, the explanation is now known to be an invalid explanation and is of no interest to me. The point being that it was invalid all along; I just didn't know it. Again, I am looking at the constraints on valid explanations required by the definition of an explanation and nothing else.

 

So whatever function is chosen to ensure that the “what is” is “what is” explanation is followed by the explanation must be scale invariant and can’t depend on the actual values of the function. In a sense it must possess any symmetry that can be derived by performing an invertible transformation to every element being represented.

Again, the “what is” is “what is” explanation. You seem to be confusing the “what is” is “what is” explanation with the information which is to be explained. They are different things. I brought up the “what is” is “what is” explanation for one very simple reason: it explains all of the available information and is perfectly valid for any conceivable collection of information. The scale invariance has nothing to do with the represented explanation. It has to do with the form of the representation. These indices (the numerical labels) used to refer to elements behind the circumstances being explained are totally arbitrary; however, once you establish them for your analysis, you can not arbitrarily change them.

 

So are you saying that a scaling of the x axis can be looked at as a rotation in the x,[math]\tau[/math] plane and that the scale symmetry of the x axis in this case will result from rotational symmetry of the [math]\tau[/math] axis and the fact that the [math]\tau[/math] axis was added to the representation for the reason of making it possible to represent any possible set of elements and so as a result we can choose any scale we want for it?

No! What I am saying is that, having added to my representation a tau axis orthogonal to that x axis I now have a hypothetical x, tau plane (it's hypothetical because the underlying problem does not require it: i.e., it is there for my mental convenience only). But, even after we have established specific x and tau indices, the representation leaves the door open to rotations in the x, tau plane. Solving the new problem (finding a [math]\vec{\Psi}[/math] function which produces exactly the same probabilities as the explanation being represented) is the same as the original problem. However the rotation changes the scale of the x and tau distributions. Since the tau distribution is totally arbitrary and has to be integrated over anyway, the solution can not depend upon the changes in tau indices. But, since rotation changes the scale of the already established x indices, this implies the solution can not depend upon changes in the “established” x indices either.
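(A concrete way to see that scale change, offered as my own worked illustration rather than part of the original argument: consider two elements which happen to carry the same tau index. A rotation by an angle [math]\theta[/math] in the x, tau plane sends

[math]
\begin{pmatrix}x' \\ \tau' \end{pmatrix}=\begin{pmatrix}\cos\theta & -\sin\theta \\ \sin\theta & \cos\theta\end{pmatrix}\begin{pmatrix}x \\ \tau \end{pmatrix},
[/math]

so for those two elements [math]x'_i-x'_j=(x_i-x_j)\cos\theta[/math]: the already established x separations are rescaled even though nothing about the circumstances being represented has changed.)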

 

Is there also a scale symmetry coming from the fact that we could have chosen any scale in the first place for our representation or is this a senseless suggestion because only the explanation can define a scale anyhow?

Recall my comment about representing an explanation of this post above. What numerical labels you use to represent the coded message is immaterial; but it is important that, once you set those numerical labels you shouldn't go around changing them. If you did, your representation would be changing. What I am talking about here is the fact that addition of that hypothetical tau axis has yielded a new symmetry in the representation which must be taken into account.

 

So if any element is either included in this function or is antisymmetric we know that it will never have two elements occupy the same location in the representation of the information and satisfy the “what is“ is “what is“ explanation.

You seem hung up on the “what is“ is “what is“ explanation. Try thinking about the issue this way: picture the circumstances known for some value of the index ”t” as points in the x tau plane. Paint all those points white. Then paint all the rest of the plane black. If you add all the black points to the circumstance as hypothetical elements, then the function

[math]
F=\sum_{i \neq j}\delta(\vec{x}_i -\vec{x}_j)=0
[/math]

 

will guarantee that the known circumstances will conform exactly to the ones you are explaining. If any element of the circumstance to be explained (a white point) is identical to one of those black points F will blow up and cannot be equal to zero: i.e., F will constrain your expectations (non zero probability) to exactly those white points. So any distribution of elements can be so constrained. If you want to open up those constraints a bit, just omit adding a black point for every circumstance you want to include as a possibility for the known circumstance. In that case, the function F as given above will not blow up for that particular circumstance. If you remove all the hypothetical black points, you have brought yourself back to the “what is“ is “what is“ explanation: i.e., all circumstances are possible and there is no constraint.
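For whatever it is worth, here is a minimal discrete sketch of that painting picture (my own illustration: the equality test stands in for the Dirac delta, and the point values are invented for the example):

# A discrete stand-in for F = sum over i != j of delta(x_i - x_j) = 0.
# "White" points are the known circumstance; "black" points are the added
# hypothetical elements that rule everything else out.
def f_is_zero(white_points, black_points):
    elements = list(white_points) + list(black_points)
    # F stays zero only if no two elements coincide (no delta term "blows up").
    return all(elements[i] != elements[j]
               for i in range(len(elements))
               for j in range(i + 1, len(elements)))

white = [(1.0, 0.0), (2.5, 1.0)]      # the circumstance being explained
black = [(4.0, 0.0)]                  # a hypothetical element excluding (4.0, 0.0)

print(f_is_zero(white, black))                  # True: the circumstance is allowed
print(f_is_zero(white + [(4.0, 0.0)], black))   # False: it hits a black point and F blows up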

 

The whole issue here is that any possible rule can be enforced by a proper selection of hypothetical elements together with that delta function definition of F=0. It is a rather simple proof of a very powerful relationship. It makes it quite clear that the rules of the universe can be written as a rather simple function of what you presume exists. And that it doesn't make any difference as to what those rules are;

[math]
F=\sum_{i \neq j}\delta(\vec{x}_i -\vec{x}_j)=0
[/math]

 

is always capable of enforcing those rules given the proper addition of hypothetical elements.

 

I hope this is helpful.

 

Have fun -- Dick

Link to comment
Share on other sites

...Try thinking about the issue this way: picture the circumstances known for some value of the index ”t” as points in the x tau plane. Paint all those points white. Then paint all the rest of the plane black. If you add all the black points to the circumstance as hypothetical elements, then the function

[math]
F=\sum_{i \neq j}\delta(\vec{x}_i -\vec{x}_j)=0
[/math]

 

will guarantee that the known circumstances will conform exactly to the ones you are explaining. If any element of the circumstance to be explained (a white point) is identical to one of those black points F will blow up and cannot be equal to zero...I hope this is helpful.

Thank you, it was very helpful, because it raises the obvious question: what black points?

 

You did not define any black "points" to be present on the x, tau plane. What you defined are "white points" that represent individual circumstances (elements) known for some value of the index "t". Logically, after you place the white points (first #1, then #2, and so on) on a black-colored plane, what remains after you place the final "white point" are not discrete "black points" on the x, tau plane that can represent unknown hypothetical elements. What remains is the "continuum of blackness" that was always present even before you added the first "white point".

 

The "blackness" of your F function has nothing at all to do with "black points"--no such points exist, neither can you add them or form them to be hypothetical elements. As soon as you "point" to any part of the blackness to identify a possible element of the circumstance, by definition, that "point" must become a "white point" to be entered into your F function, it is what you know to be an element, that is why you pointed. The F function can only be a combination of discrete "white points" (elements of the circumstance) plus the non-discrete continuum of blackness.

 

Clearly, if you remove all the background blackness from the F function, then all that can logically remain are the "white points", that is, what you already know. But to do so would make no sense, because it would be impossible to ever add new knowledge about the circumstance in the future. You can add all the white points you want to a black plane to infinity, and blackness will always remain. It would appear you need to redo your F function to remove "black points" that can never exist.

Link to comment
Share on other sites

  • 3 weeks later...

The issue here is that I have decided to represent that collection of “x” indices as points on a defined mathematical frame of reference (an x axis) for the simple reason that human minds are quite adept at thinking about patterns of points in a space; they do that better than they handle a bare collection of numbers anyway. The issue which has to be handled is that the specified circumstance may include two or more cases of the same index. Take, for example, this post. If the number “i” were to refer to the position of a letter in the standard English alphabet, in your particular explanation of this post the number “4” would appear a number of times. In view of the fact that the correct explanation of the post might give different meanings to different occurrences (for instance, it might be in code), I want to think of the information in terms of the numerical label “x” (undefined until I find another explanation). If I were to represent the post with points on an x axis, how would you determine (from that representation) how many “e's” those points represented? As it stands, the representation is simply inadequate to the circumstances.

 

Don’t you have to include a variable that is at least continuous to represent the elements or points of interest as any discontinuous representation would imply that nothing being represented can be continuous? This would seem to be an indefensible statement at this point. As for the idea of this being a representation of patterns and not just a collection of numbers, this seems to be a question of preference and of choice of how this is being looked at, and has not been implied by the representation that you have chosen. In short, I don’t really see how you have defined a mathematical frame of reference based on how humans think about things. It seems more like you have defined the minimum needed to represent an unknown set of x labels, and how we see the representation is just a consequence of how it is looked at.

 

So you are using the i index as a way of saying what an element will be considered to be and not as a means of identifying elements in the explanation. That is, it tells us what an element is (for instance, that the element is an e), not that it is a different element than some other element being represented (every element in the representation doesn’t have a unique i index). If this is the case, don’t we have to allow for the possibility that a single i index will have multiple x labels associated with it?

 

If so it makes some sense as to why it might be used this way as it would solve the question of how we distinguish between different occurrences of the same type of element in the representation, and in fact gives meaning to giving different elements a type. In short it would be used only to distinguish between different types of elements but it also would imply we can’t say that it is the same element that we are referring to as things change without making further assumptions.

 

Necessity is not the issue! Certainly hypothetical elements can modify [math]\vec{\Psi}[/math] so, if we want to be able to produce “any internally consistent set of expectations”, we pretty well have to allow additions of hypothetical elements.

 

But why include different types of elements that can be added to the representation? Is it just an issue of we can add these different types of elements so it is possible to add such an element?

 

No! What I am saying is that, having added to my representation a tau axis orthogonal to that x axis I now have a hypothetical x, tau plane (it's hypothetical because the underlying problem does not require it: i.e., it is there for my mental convenience only). But, even after we have established specific x and tau indices, the representation leaves the door open to rotations in the x, tau plane. Solving the new problem (finding a [math]\vec{\Psi}[/math] function which produces exactly the same probabilities as the explanation being represented) is the same as the original problem. However the rotation changes the scale of the x and tau distributions. Since the tau distribution is totally arbitrary and has to be integrated over anyway, the solution can not depend upon the changes in tau indices. But, since rotation changes the scale of the already established x indices, this implies the solution can not depend upon changes in the “established” x indices either.

 

But won’t there be the problem of the rotation that maps all of the x axis to a single point and rotates everything to the [imath]\tau[/imath] axis? As this rotation exists but the location on the [imath]\tau[/imath] axis has no influence on one’s expectations, won’t this imply some kind of discontinuity in the symmetry as well?

 

Recall my comment about representing an explanation of this post above. What numerical labels you use to represent the coded message is immaterial; but it is important that, once you set those numerical labels you shouldn't go around changing them. If you did, your representation would be changing. What I am talking about here is the fact that addition of that hypothetical tau axis has yielded a new symmetry in the representation which must be taken into account.

 

So after we set the numerical labels, can we change them in accordance with one of the symmetries, or are these symmetries now going to vanish? Or are the actual labels only defined by the explanation, so that we can’t talk about the labels, only about the symmetries that can be included in equivalent representations of what the labels represent, with the labels defined by a particular choice of the explanation which must be equivalent to one defined to follow the symmetries that are being discussed?

Link to comment
Share on other sites

Thanks Bombadil, I got your note.

 

Don’t you have to include a variable that is at least continuous to represent the elements or points of interest as any discontinuous representation would imply that nothing being represented can be continuous?

At this point the answer to that question is a definite “no”. There is no such thing as “points of interest”. You are apparently missing the central issue of the representation itself. Maybe it would be clearer to you if I pointed out a very basic issue here. What I have done is to design a representation capable of representing any circumstance from any explanation without making any assumptions whatsoever concerning exactly what that explanation is.

 

The central problem being examined is the creation of an explanation from some specific set of information without making any assumptions. If we make no assumptions at all, then we know absolutely nothing about the circumstances being explained. That creates a rather difficult problem to represent. So I make two fundamental assumptions (or axioms if you prefer). The first is that an explanation to be represented exists and the second is that the explanation is something which can be communicated. If no explanation exists, the problem is essentially moot. If the explanation can not be communicated what could I do with a solution if I had one? Thus those two assumptions are pretty basic. If you can come up with a possibility which violates those assumptions let me know.

 

The first point of the above is, if I know an explanation and the language required to communicate it, I certainly know the circumstances that explanation explains and the ontological elements required by that explanation. Thus laying out those circumstances I wish to represent is quite a simple job. I can define every one of the ontological elements required by that explanation in that known language and attach a different numerical label to each definition. It follows that the circumstance of interest can be represented by a collection of those numerical labels. (They are neither continuous nor infinite.)

 

[math](i_1,i_2,i_3,\cdots,i_n)[/math]

 

So I can obviously represent those circumstances; however, that representation utterly fails to satisfy the requirements I have set for my representation. My goal is to design a representation which makes no assumptions whatsoever. The representation just given is capable of representing only a specific explanation: i.e., it makes the assumption that the given explanation is the only one I want to represent. That is a constraint on the representation which fails to be entirely general.

 

The “x” index used to represent the underlying noumena being represented by the ontological elements labeled by the “i” index provides a mechanism for bridging that constraint. In order for you to understand how the “x” index manages that feat, let us consider two different explanations for exactly the same circumstances. Either explanation can be represented by the notation given,

 

[math](i_1,i_2,i_3,\cdots,i_n);[/math]

 

however, a number of difficulties arise. One, the second explanation may require different ontological elements and a different language to express those ontological elements. Thus the “i” indices used in the two explanations may refer to totally different ontological definitions. But both explanations are explaining the same known circumstances; thus every “i” index from one explanation can be tied to an “i” index from the other explanation (they are referring to exactly the same collection of noumena). Which noumena is being referred to is identified by the numerical label “x”.

 

There are a few subtle difficulties which can occur. It may be that a single ontological element from one explanation (perhaps “water” from the “fire”, “wind”, “water” and “earth” the ancients spoke of) consists of a number of different ontological elements from a more complex explanation (consider the atomic table used by chemists discussing the same circumstances). In this case, the second explanation might require considerably more ontological labels than the first. One instance of “water” (the numerical label “3” from the above list) might need to be represented by several ontological elements in the second explanation (think [math]H_2O[/math]) and clearly “earth” (in the Chemists explanation) would require quite a collection of elements.

 

It should be clear from the example I just gave that both representations most probably would require more than one instance of the same ontological element and that a single “i” label from one would be equivalent to a number of labels from the second. The first problem can be handled by allowing different “i” labels to refer to exactly the same definition, and the second can be handled by allowing the several different labels from the second to refer to the same noumena. Since exactly what the noumena are is unknown, these freedoms allow both explanations to represent exactly the same circumstances.

 

The two explanations just discussed can be any two different explanations of the same circumstances. It is the “x” index which allows us to represent both with exactly the same notation. The number of elements being represented is still finite.
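A minimal sketch of that bookkeeping (my own illustration; the dictionaries merely echo the ancients/chemists example above and the particular labels are invented):

# Two explanations of the same circumstance, each with its own "i" labels
# (definitions in its own language); the shared "x" label says which underlying
# noumenon each occurrence refers to.
ancients = {1: "fire", 2: "wind", 3: "water", 4: "earth"}
chemists = {1: "H", 2: "O", 3: "Si", 4: "Fe"}

# One occurrence of the ancients' "water", attached to the noumenon labelled 7 ...
circumstance_ancients = [(3, 7)]                  # (i label, x label)

# ... which the chemists resolve into several elements, all tied to that same noumenon.
circumstance_chemists = [(1, 7), (1, 7), (2, 7)]  # H, H, O

# Both lists refer to exactly the same collection of noumena; only the "i" vocabularies differ.
print({x for _, x in circumstance_ancients} == {x for _, x in circumstance_chemists})   # True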

 

As for the idea of this being a representation of patterns and not just a collection of numbers, this seems to be a question of preference and of choice of how this is being looked at, and has not been implied by the representation that you have chosen.

We have an explanation and a language with which to express it. Embedded in the information needed to understand that language and the explanation are the definitions of the required ontological elements. The collection of circumstances as represented by that list of “i” labels laid out by the person who conceived of the explanation and knew the language will most probably contain patterns making that explanation an obvious result. We have no interest in either the language or the explanation. Our only interest is in discovering the constraints on the explanation required by the definition of “an explanation”. All we need to do that is a notation which is capable of representing any collection of circumstances defined by any explanation (given that the explanation and the language necessary to communicate it are both known).

 

If so it makes some sense as to why it might be used this way as it would solve the question of how we distinguish between different occurrences of the same type of element in the representation, and in fact gives meaning to giving different elements a type. In short it would be used only to distinguish between different types of elements but it also would imply we can’t say that it is the same element that we are referring to as things change without making further assumptions.

How we distinguish between different occurrences is unimportant. What is important is the distinguishing required by the explanation, as embedded in the definitions of ontological elements created by the explainer. That lies in the list of those element definitions the explainer can prepare. But we have no concern with what language he uses or what he believes those ontological elements are. We are concerned only with the constraints on [math]\vec{\Psi}[/math] (our mathematical representation of the expectations the explanation implies) which are required by the definition of “an explanation”. All we are concerned about with regard to the other issues you bring up is, “can the required circumstances be represented by my notation?”.

 

But why include different types of elements that can be added to the representation? Is it just an issue of we can add these different types of elements so it is possible to add such an element?

Because there may exist explanations which require additional elements. In fact, if an explanation requires these elements to be identified via a position on an axis (that would be any explanation requiring a spatial component) we would have to add the tau axis in order to maintain representation of all of the information (an issue I thought I made clear in my presentation). It isn't just possible, it is necessary if the notation is to be able to represent any possible explanation. Another way to view the circumstances is to identify the noumena as positions on an x axis.

 

But won’t there be the problem of the rotation that maps all of the x axis to a single point and rotates everything to the [imath]\tau[/imath] axis? As this rotation exists but the location on the [imath]\tau[/imath] axis has no influence on one’s expectations, won’t this imply some kind of discontinuity in the symmetry as well?

It appears to me that you are making exactly the same mistake that Qfwfq, modest and Erasmus00 make on a continuing basis. You are not using my definitions but are instead presuming they correspond perfectly to the confused meanings common to general conversation. I suspect the biggest problem is that they are simply mentally incapable of comprehending the vast extent of the assumptions they make in most all their conclusions. Hypothetical does not mean invalid. I defined “invalid” as inconsistent with the known information and “hypothetical” as not part of the known information. These are vastly different concepts. Hypothetical does not mean “does not exist” but rather means that one cannot prove it does.

 

Tautology is a name commonly given to totally hypothetical constructs which are absolutely valid: i.e., can not be proved false. Tautology is generally defined as a series of self-reinforcing statements that cannot be disproved because the statements depend on the assumption that they are already correct. All mathematical proofs are essentially tautologies as the axioms on which the proof depends are, for the sake of the proof, assumed to be correct. Of interest here is that my presentation is a proof and not a theory.

 

I am reminded of a running argument I have had with Qfwfq for many years. He simply seemed incapable of comprehending my interpretation of Zeno's paradoxes; he persisted long enough for me to give up trying to explain it to him. As I see it, it can not be proved that Achilles can pass the Tortoise as that would require proving the existence of motion: i.e., you can perhaps prove that he was once behind the Tortoise and was later beyond the Tortoise (specific instances of known information) but you cannot prove he got there by moving past the Tortoise (that is an assumption as to prove it requires an infinite amount of information). Qfwfq et al continually confuse hypothetical with invalid.

 

You are doing exactly the same thing here. You are assuming that the tau axis I introduced as hypothetical is an invalid proposition: i.e., that it can be proved to be wrong. It is true that the tau axis is not part of the known information but, without it, that selfsame information can not be represented as positions on an x axis. Just as the assumption of motion can not be proved wrong by Zeno's paradox, the real issue is that motion can not be proved correct; that makes it “hypothetical”. Just as an aside, the “what is” is “what is” explanation is, as far as I am aware, the only explanation which does not make use of hypothetical elements; that is what makes that explanation so important from an abstract perspective.

 

In essence, once we introduce a hypothetical element (which is required by the explanation under discussion) that hypothetical element must be viewed as being as real as any of the other elements the explanation explains. Your complaint that rotation can remove meaning of the “x” axis from the representation is no more meaningful than to note that the original orientation omitted meaning from the “tau” axis. It is the “x,tau” plane which is required by the explanation. The fact that “tau” is not part of the “known information” is no different from the fact that the Tortoise moves: it is an essential part of the explanation and must be handled as if it is real in analyzing the consequences of that explanation.

 

All I have done is to replace the “x” mathematical label for a specific noumenon (which can be seen as a collection of points on the x axis) with an “(x,tau)” mathematical label (which can be seen as a collection of points in an x, tau plane). It is that collection of points which represents the supposed “known information”.

 

Or are the actual labels only defined by the explanation, so that we can’t talk about the labels, only about the symmetries that can be included in equivalent representations of what the labels represent, with the labels defined by a particular choice of the explanation which must be equivalent to one defined to follow the symmetries that are being discussed?

Again, I think you are confusing some important issues. The labels are only defined by whoever is laying out the information pertinent to that explanation for analysis. The language used to explain the explanation is part of the “known information” and is irrelevant to the analysis: i.e., it is an extremely constraining assumption to assume that the known information being explained requires the explanation to be expressed in some specific language. A specific explanation may require a specific language but that is not at all the same as presuming there exists no other explanation requiring a different language.

 

And Rade, don't bother commenting on any of this as it is all clearly over your head.

 

Have fun -- Dick

Link to comment
Share on other sites
