Science Forums

What can we know of reality?



Anyhow, that perspective is important for this thread: that a self-coherent worldview is a set of made-up facts rather than an unavoidable platonistic set of "real ontological facts". When you draw unavoidable conclusions from self-coherence, that doesn't mean we have found the ontological reality from logic alone. It kind of means those conclusions can be considered valid even in the absence of ontological knowledge. Or in other words, they are valid for any self-coherent description of any possible reality.

 

-Anssi

 

What I'm thinking is that any one self-consistent model/theory is "isomorphic" to any other. There might be different terms in different theories, and one might start with different axioms, but these are ultimately the same thing. What is logical in one perspective is logical in all perspectives. So if physics is derived from logic at one time and place, it is valid everywhere at all times.


Obviously, divergence (of an info-theoretic, stochastic series of canonical states) isn't a problem for lifeforms, because they converge, or stay on the cusp, as it were.

 

Energy, and the co-option of the way energy 'flows' to a lower state, by 'agents': coenzymes, haemoglobin; lots of things are based on the transfer of energy as charge (electrons). This 'energy routing' capability appears to be what keeps life balanced on its surfboard.


Dr Dick, I think that I made my last post without fully thinking through all of what you said, so I think it best if I go through your last reply to me step by step and see if I can better understand what you are saying.

I think you have missed the entire point of my presentation. Anyone who understands the comment that you cannot disprove solipsism has to know that there exists no way of knowing what is illusion and what is reality. That is the very issue upon which my analysis is built. I define reality to be a “valid ontology” and point out that explanations are facilitated through the invention of “invalid ontological elements”. It is our very freedom to do this that allows us to come up with an explanation of that “valid ontology”. Recognizing the power of that freedom, I laid out a specific procedure for yielding exactly the “valid ontological elements”, no matter what they were. Accuracy does not even become an issue here.

Ok, I can understand that the question is not one of accuracy. Either the element is real, or it is an illusion and so is not a real object by definition, although perhaps this is not the best way to phrase it.

It also sounds like the function that tells us if the set is a valid ontology is a different function than the explanation.

Also, I can see that if we are adding elements to our list of what we know in order to make an explanation, then there is no way for us to tell if these are valid or not unless we know the entire list of valid ontological elements.

However, I still don’t understand how this function can tell the difference between the two sets of objects if we can’t (although I don’t doubt that it can be done). All I can conclude is that, whatever the set of objects is, the entire set must be self-consistent. And this does not seem sufficient to tell them apart: it seems a system can be consistent without being real, but I see no way that a system can be real without being consistent.

It is not a convenience, it is a requirement. In order to make sure that no possibility is eliminated, every valid ontological element must be represented as individual. Now theories (or epistemological constructs) invariably identify some elements as being the same. If we are to allow such labeling, we must add an “invalid” orthogonal axis in order to represent our valid ontological elements as points in a Euclidean space.

Now, how I understand this is that we are making sure that no element is repeated with the same x coordinate, but I still don’t understand exactly why this is. I think you mean that theories (or epistemological constructs) will require the use of more than one of some ontological elements, so we add another axis to put them on, but I don’t understand why they can’t be on the x axis. The reason I think they can’t be is that, if they were, we couldn’t ensure the existence of an explanation, but I don’t see why this is.

I have been warning them against creating an epistemological construct which defines what is and is not real. I have explicitly kept the two separate without defining what they are: i.e., my construct presents no constraints whatsoever on the possible explanation.

 

The only constraint on my construct is due to symmetry alone and, if you go and read those three posts I have referenced a number of times, you should be able to comprehend why the constraints imposed by fundamental symmetry are required (see the three posts on “physicsforums.com”: my post on what symmetries are, selfAdjoint's response to that post, and my response to selfAdjoint). Your solution to a problem cannot produce information which is not contained in the presentation of that problem.

 

Then, what you mean is that we don’t want to define what it is that we are explaining because, by doing so, we limit the possible explanations even if we can’t see how it is limiting them; and since we are trying to understand a construct that can incorporate all possible constructs, doing such a thing would only make whatever we come up with conditional, which is the last thing we want to do.

 

Now, are you suggesting a symmetry, which I think you are, or are you simply pointing out that it must obey symmetry at this point? How I understand symmetry is that, if something is unchanged by the choice of a coordinate system, then there is no way for us to place a coordinate system at any particular place without assuming that it goes there, which can affect the answer.

I’m somewhat confused just what is the difference between an explanation and an interpretation of an explanation?

This question still seems relevant to me, though perhaps it has no bearing on the discussion.

 

What I understand the explanation function to be is that we input the entire set of ontological elements that fall under any particular “t” coordinate and the function then gives us a one (1) if it is a set of valid ontological elements or a zero (0) if it is not. Is this correct?

 

If this is correct, then it seems that the explanation is not a function of the coordinates but rather a function of the order in which the elements appear in the coordinate system, so that wherever the origin of the coordinate system is it still gives the same result.

I think when I posted this I had jumped to several conclusions that are most likely incorrect as well as unnecessary.


It also sounds like the function that tells us if the set is a valid ontology is a different function than the explanation.
There is no function which tells us if the set is a valid ontology! The function I am referring to as [imath]\vec{\Psi}[/imath] yields your expectations for a given set of ontological elements at a specific time t. This is exactly what your "explanation" provides.
Also, I can see that if we are adding elements to our list of what we know in order to make an explanation, then there is no way for us to tell if these are valid or not unless we know the entire list of valid ontological elements.
Exactly; this is the fundamental premise of my attack.
However, I still don’t understand how this function can tell the difference between the two sets of objects if we can’t (although I don’t doubt that it can be done).
Well, you should doubt that it can be done because that is exactly what "we can't tell the difference between the two sets" means!
All I can conclude is that, whatever the set of objects is, the entire set must be self-consistent. And this does not seem sufficient to tell them apart: it seems a system can be consistent without being real, but I see no way that a system can be real without being consistent.
That is exactly why I brought up the fact that you cannot disprove solipsism: i.e., nothing is real. All I am saying is that some part of your ontology might be real. You cannot disprove that either.
Now, how I understand this is that we are making sure that no element is repeated with the same x coordinate, but I still don’t understand exactly why this is. I think you mean that theories (or epistemological constructs) will require the use of more than one of some ontological elements, so we add another axis to put them on, but I don’t understand why they can’t be on the x axis. The reason I think they can’t be is that, if they were, we couldn’t ensure the existence of an explanation, but I don’t see why this is.
The point here is that we are modeling all explanations, not just one! Any given specific explanation might hold forth the idea that two "noumena" are the same element (thus requiring the same x index); however, if we are to use our representation to represent any possible explanation, we must maintain the ability to refer to these two "noumena" independently. It follows that the representation must maintain the representation of their independent existence. This issue also arises when we decide to represent their existence as points on the x axis: i.e., if two "noumena" have exactly the same index, our representation does not contain the information that there is more than one. This is the sole reason for the introduction of the tau index and thus the tau axis.
Then, what you mean is that we don’t want to define what it is that we are explaining because by doing so we limit the possible explanations even if we can’t see how it is limiting it and since we are trying to understand a construct that can incorporate all possible constructs doing such a thing will only make whatever we come up with conditional which is the last thing we want to do.
It seems to me that you are restating exactly the reason behind the tau axis. I can only take it to imply that you understand the reason I set that axis up.
Now, are you suggesting a symmetry, which I think you are or are you simply pointing out that it must obey symmetry at this point? How I understand symmetry is that if something is unchanged by use of a coordinate system then there is no way for us to place a coordinate system at any particular place without assuming that it goes there which can affect the answer.
This seems a little confused. I think you need to read my take on symmetry carefully as it is somewhat alien to the norm. See if you can follow my arguments posted for savior machine (post #696 on the “Is time just an illusion?” thread on physicsforums), selfAdjoint’s response to it (post number 697 immediately below that post) and my response to selfAdjoint’s (post number 703 on that same page). These three posts should be read very carefully as they clarify my contention that all proofs are tautological in nature: i.e., what is proved must be embedded in the axioms themselves or the proof could not be accomplished. What is significant here is that mathematical deduction can carry tautological consequences far beyond what can be comprehended by the human mind.
This question still seems relevant to me, though perhaps it has no bearing on the discussion: "I’m somewhat confused; just what is the difference between an explanation and an interpretation of an explanation?"
Yeah, I can see how that issue can seem confusing. Again, my perspective is somewhat alien to the norm but I think it is more objective in that it does not confuse some very important issues. As you should have noticed, my definition of "an explanation" is that it is a method of obtaining expectations; as such it is the method itself (the function I define to be [imath]\vec{\Psi}[/imath]). The "interpretation of the explanation" is the English description of the information necessary to realize those expectations as rational: i.e., your interpretation of the how and why you came to the conclusion those expectations are what one should expect.
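Since "an explanation" is here defined as the method itself and its "interpretation" as the English account of why the expectations are rational, the distinction can be sketched in code. This is purely an illustrative construction of mine (the names, toy data, and numbers are invented, not anything from the thread): the function body plays the role of the explanation, while the docstring plays the role of its interpretation.

```python
# Hypothetical sketch: the function body *is* the explanation (a method for
# obtaining expectations); the docstring is its interpretation (the English
# story about why those expectations are rational). The machinery below never
# uses the docstring -- only the method itself.

def explanation(elements, t):
    """Interpretation: 'presents repeat, so any previously seen set is fully
    expected; unseen sets get a uniform guess.'"""
    seen = {0: {(1, 1), (2, 1)}, 1: {(1, 1), (3, 2)}}  # toy past presents
    return 1.0 if elements == seen.get(t) else 0.25    # expectation in [0, 1]

print(explanation({(1, 1), (2, 1)}, 0))  # a set seen before at t = 0
print(explanation({(9, 9)}, 0))          # an unexpected set
```

Two different docstrings over the same body would be two interpretations of one and the same explanation, which is the distinction being drawn.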
"What I understand the explanation function to be is that we input the entire set of ontological elements that fall under any particular “t” coordinate and the function then gives us a one (1) if it is a set of valid ontological elements or a zero (0) if it is not. Is this correct?

 

If this is correct, then it seems that the explanation is not a function of the coordinates but rather a function of the order in which the elements appear in the coordinate system, so that wherever the origin of the coordinate system is it still gives the same result."

No; you are mixing up some very different issues.
I think when I posted this I had jumped to several conclusions that are most likely incorrect as well as unnecessary.
I think I agree with you there.

 

Have fun -- Dick


There is no function which tells us if the set is a valid ontology! The function I am referring to as [imath]\vec{\Psi}[/imath] yields your expectations for a given set of ontological elements at a specific time t. This is exactly what your "explanation" provides.

Ok, so then if I understand this right, the function [imath]\vec{\Psi}[/imath] will supply a vector quantity in three variables that will be the coordinates of the ontological element that the explanation supplies (am I correct in saying that this does not need to be a valid ontological element that it supplies?). And it is a function of the x, tau, and present t coordinates of the ontological elements that we presently know of (both valid and invalid).

Now at one point you say.

The function [imath]\vec{\Psi}[/imath] is open to be absolutely any function. The only constraint on [imath]\vec{\Psi}[/imath] is that its normalized scalar product, [imath]\vec{\Psi}^\dagger \cdot \vec{\Psi}[/imath], must be the probability your flaw-free explanation gives for a specific set of reference indices [imath](x_i,\tau_i)[/imath] for a given t index. If your explanation yields such expectations (probability estimates), then a method of achieving them exists. That proves the function [imath]\vec{\Psi}[/imath] exists. The probability so defined cannot be a function of the particular symbols (read numeric labels) but rather must be a function of the entire set taken as a whole. This implies the existence of what is normally called a shift symmetry, and such a shift symmetry requires the following behavior of the function [imath]\vec{\Psi}[/imath].

If I understand this right, the result after normalizing should be 1, but this just says that the point is in the explanation and hardly looks like a probability, although I can see that it can be used as one. Now I’m wondering if there is something that led you to the idea that the normalized scalar product must be the probability your flaw-free explanation gives for a specific set of reference indices? I can see that this does make some sense in that, if it is supplied by the explanation, then it must be an element of the explanation and so would have a probability of 1.

You talk about normalizing it, but all you show is taking the dot product of it with itself, so are you also requiring that it be a unit vector?

Well, you should doubt that it can be done because that is exactly what "we can't tell the difference between the two sets" means!

The reason I say I don’t doubt it is that, to me, saying I doubt something seems to say I came to the conclusion, without proof, that there is not one, which I don’t see as a requirement of the statement “we can’t tell the difference between the two sets”, although it does seem to imply there is not one. This doesn’t mean I plan or expect to find one, just that I see no reason to assume that there is or is not one.

The point here is that we are modeling all explanations, not just one! Any given specific explanation might hold forth the idea that two "noumena"are the same element (thus require the same x index); however, if we are to use our representation to represent any possible explanation, we must maintain the ability to refer to these two "noumena" independently. It follows that the representation must maintain the representation of their independent existence. This issue also arises when we decided to represent their existence as points on the x axis: i.e., if two "noumena" have exactly the same index, our representation does not contain the information that there is more than one. This is the sole reason for the introduction of the tau index and thus the tau axis.

Ok, I think I understand the reason behind the tau axis. It is that, in order to use an ontological element in a theory, we have to define it, and if it is defined then it can only be used as a particular element; and since the x axis is only for different ontological elements, we can’t use it for a repeat of some ontological element, so we have to use a new axis, which you have named the tau axis. Now, if I understand this right, this gives the equation

[math]

\sum_{i \neq j} \delta(x_i - x_j) \delta(\tau_i - \tau_j) \vec{\psi}(x_1,\tau_1,x_2,\tau_2, \cdots , x_n, \tau_n, t) =0

[/math]

Now, I did a quick search for the Dirac delta function (which is what I think you are using) to get the basic idea of what it does. If I understand this right, the arguments in the summation are the x and tau elements of the coordinates of the ith and jth elements, and the delta function will give the result of 0 as long as these are not the same as one of the elements that we have already set; this should have the effect of ensuring that no two elements have the same coordinates. You then multiply it by the function [imath]\vec{\Psi}[/imath], and I don’t see this as having an effect on ensuring that no two elements have the same coordinates, but I don’t understand what else it is doing.

This seems a little confused. I think you need to read my take on symmetry carefully as it is somewhat alien to the norm. See if you can follow my arguments posted for savior machine (post #696 on the “Is time just an illusion?” thread on physicsforums), selfAdjoint’s response to it (post number 697 immediately below that post) and my response to selfAdjoint’s (post number 703 on that same page). These three posts should be read very carefully as they clarify my contention that all proofs are tautological in nature: i.e., what is proved must be embedded in the axioms themselves or the proof could not be accomplished. What is significant here is that mathematical deduction can carry tautological consequences far beyond what can be comprehended by the human mind.

After reading through those posts some more it is starting to sound like symmetry is just the lack of an axiom referring to where the origin of the coordinate system goes, or perhaps just an axiom that says that we don’t know where the origin of the coordinate system is.

 

P.S. This is the first time I’m trying to get LaTeX to work, and mostly I’m just copying and pasting it in place, so if it has any errors in it, this is likely the reason.


Ok, so then if I understand this right, the function [imath]\vec{\Psi}[/imath] will supply a vector quantity in three variables that will be the coordinates of the ontological element that the explanation supplies (am I correct in saying that this does not need to be a valid ontological element that it supplies?). And it is a function of the x, tau, and present t coordinates of the ontological elements that we presently know of (both valid and invalid).
I think you have this a little confused. [imath]\vec{\Psi}[/imath] is an abstract vector: that is, it consists of a collection of numbers which can be seen as coordinates in an abstract n dimensional space, where an arrow from the origin to the point specified by those coordinates is the abstract vector being represented. The argument of the function [imath]\vec{\Psi}[/imath] is [imath](x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)[/imath]. It is the magnitude of that vector which is of interest to us.

 

Essentially, if we are given a collection of x and tau indices representing the “present” referred to by the index t, then the magnitude of [imath]\vec{\Psi}[/imath] is proportional to the probability of that specific collection of x and tau indices. That is what [imath]\vec{\Psi}[/imath] represents because that is what [imath]\vec{\Psi}[/imath] is defined to be. I do not know how to discover [imath]\vec{\Psi}[/imath]. The only real issue of interest at this point is, “does [imath]\vec{\Psi}[/imath] exist?”

 

I assert that [imath]\vec{\Psi}[/imath] must exist because your explanation of those elements referred to by the x and tau indices yields expectations that they will be seen at the time (the present) referred to by t. [imath]\vec{\Psi}[/imath] has the form it does because I want it to include all possibilities: any mathematical procedure of any kind (any computer program or any method of going from one representation to another) can be seen as a method of transforming one set of numbers into another set, so this representation omits no possibilities whatsoever.

 

I hope this clarifies your confusion over what [imath]\vec{\Psi}[/imath] represents,

If I understand this right the result after normalizing should be 1
”Normalization” is a word used to refer to the fact that probability is defined to be a number bounded by zero and one. The magnitude of [imath]\vec{\Psi}[/imath] cannot be less than zero (as magnitude is defined to be a positive number) but, if [imath]\vec{\Psi}[/imath] is open to being any possible function, computer program, or method, its magnitude can certainly exceed one. Now a probability of “one” means it has to happen.

 

Clearly the sum over all possibilities must be one: i.e., if we sum the squared magnitude [imath]\vec{\Psi}^\dagger \cdot \vec{\Psi}[/imath] (after normalization) over all possibilities, we must get an answer of “one”. Thus it is that the procedure of “normalization” can be used after we discover the actual usable function [imath]\vec{\Psi}[/imath]. All we need do is sum the squared magnitude of [imath]\vec{\Psi}[/imath] over all possible arguments. That will yield an answer which is, in all probability, a large number A. If we then go back and divide [imath]\vec{\Psi}[/imath] by the square root of A, we have the normalized function [imath]\vec{\Psi}[/imath]: i.e., if we then sum the squared magnitudes of this "normalized function" over all possible x, tau arguments, we will get the answer “one”! Now, the squared magnitude of [imath]\vec{\Psi}[/imath] can be directly interpreted to be the probability that the present specified by the index t will be given by the x, tau arguments of [imath]\vec{\Psi}[/imath].
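For a finite number of possibilities, that normalization step can be sketched numerically. This is a toy illustration with invented labels and numbers (not anything from the discussion), taking the probability of a present to be the squared magnitude [imath]\vec{\Psi}^\dagger \cdot \vec{\Psi}[/imath] as defined earlier:

```python
import math

# Toy sketch of normalization: Psi assigns an abstract vector to each
# possible argument (here a finite set of "presents" with made-up values).
psi = {
    "present_a": [3.0, 4.0],   # squared magnitude 25
    "present_b": [1.0, 2.0],   # squared magnitude 5
    "present_c": [0.0, 0.0],   # squared magnitude 0 (never expected)
}

# A = sum of squared magnitudes over all possible arguments
A = sum(sum(c * c for c in v) for v in psi.values())               # 30 here
normalized = {k: [c / math.sqrt(A) for c in v] for k, v in psi.items()}

# After dividing by sqrt(A), the squared magnitudes are probabilities.
probs = {k: sum(c * c for c in v) for k, v in normalized.items()}
print(probs)                # each value lies in [0, 1]
print(sum(probs.values()))  # sums to one, up to rounding
```

Dividing by the square root of A rather than A itself is exactly what makes the squared magnitudes, not the magnitudes, sum to one.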

 

Now, when the possibilities become infinite in number, normalization runs into some difficulties; however, those difficulties can be handled with a little logic. If you want me to go into that issue now I will; but, for the moment, it really has little bearing on the subject under discussion. If you have an explanation which yields your expectations (probabilities of outcomes) then a method exists for getting from the collection of indices to those probabilities.

The reason I say I don’t doubt it is that to me to say that I doubt something it seems I’m saying that I came to the conclusion without proof that there is not one, which I don’t see as a requirement of the statement “we can’t tell the difference between the two sets” although it does seem to imply there is not one. This doesn’t mean I plan or expect to find one, just that I see no reason to assume that there is or is not one.
That is a very reasonable explanation of your statement; however, it kind of overlooks a rather important aspect of my representation. There exists a philosophical dichotomy I have heard referred to as “solipsism” versus “realism”. “Solipsism” is the idea that everything is illusion created by our minds, and “realism” is the idea that everything is real and no illusion at all. It has been concluded by almost all philosophers that you cannot prove solipsism is false, but they (scientists at least) presume it is a ridiculous idea of little or no value.

 

I take a rather different view. As I see it, part of what our explanations require to exist might actually exist while, at the same time, some of the elements required by our explanations might just be illusions created by our subconscious in order to make that explanation work. I just think my perspective is much more objective than the common idea that “it’s all real”. The important factor here is that, if your explanation is to be “flaw-free”, there can be no way of telling the difference (and that would be, no experiment which would display the difference). The moment you came up with an experiment proving some element of your explanation was an illusion, that explanation would be discarded. But that fact cannot be taken as proof that no elements of your explanation are illusion; however, that is the assumption made by every scientist I have ever talked to. (Intellectual lemmings????) They all presume that, if you can’t prove some element of their theory is false, it is real. This totally overlooks the power and value of using illusionary elements.

Ok, I think I understand the reason behind the tau axis.
I think you have somewhat of an inkling of the reason behind the tau axis, but I don’t think I would call it an understanding. Try reading my article ”A Universal Analytical Model of Explanation Itself”. I know that many terms and references in that article are quite different from the way I am presenting the thing here, but I think the reason for introducing the tau axis is pretty clearly presented (see the box “Sub Problem number 1:”).

 

The issue is that, if you are going to represent the existence of element “i” as a point on a real x axis (this is fundamentally a one dimensional space representation) you cannot represent two ith elements. If a specific point is used to represent a specific element, two elements can not be represented by one point. The introduction of a tau axis relieves us of the difficulty. It is that simple and has no import beyond that fact. In particular, it has absolutely nothing to do with the introduction of the expression

[math]

\sum_{i \neq j} \delta(x_i - x_j) \delta(\tau_i - \tau_j) \vec{\psi}(x_1,\tau_1,x_2,\tau_2, \cdots , x_n, \tau_n, t) =0.

[/math]

 

That expression arises from some very different thoughts. You should understand that, objectively, there exists absolutely no way of establishing whether or not a given element is really an objective part of reality or is actually no more than an invented illusionary element required by the proposed explanation. That realization should make it quite clear to you that there exists a trade-off between the two (between what is presumed to exist and the rules which are to be obeyed). The rules are quite dependent upon what is presumed to exist and, likewise, what must exist is quite dependent upon what the rules are. Certainly, if one steps back and looks at the problem of creating explanations, that fact should not be at all unexpected. Actually, it is a rather mundane and obvious observation.

 

This suggests that, if one keeps an open mind as to what rules should be adopted as a “universal rule”, one might find a rule much simpler than the rules currently presumed to be the “laws of physics”. As I presented my thoughts to Anssi, I made it quite clear that I was trying to keep the thing as simple as possible. I created “invalid” ontological elements designed to eliminate problems I found in the issue of finding [imath]\vec{\Psi}[/imath]. (Essentially leaving the idea as to what the laws might be to the last.)

 

 

The first step was to create elements such as to make the number of arguments in [imath]\vec{\Psi}[/imath] the same for all t indices. The second step was to add fictional elements sufficient to make all presents indexed by t different. That step allows t to be recovered from the list of x and tau indices associated with that t: i.e., the function [imath]\vec{\Psi}[/imath] is no longer double-valued and thus, if we know all the relevant x and tau, we also know t.

 

That thought leads to a subtle extension of this endeavor. Suppose we add sufficient fictional elements to make every present unique even if any single arbitrary element is removed. In that case, if we know (n-1) indices associated with a particular t, we also know exactly what the nth index must be. That nth index can be written as a mathematical function of the other (n-1) indices. We may not be able to write this down as an explicit function but, so long as the number of indices under consideration is finite, we can clearly set this down as a table.
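This table idea can be made concrete with a toy sketch (my own construction, with invented label values): once every present stays unique even after any single element is removed, a finite lookup table sends any (n-1) known labels to the one missing label.

```python
# Toy presents: sets of (x, tau) labels indexed by t, built so that removing
# any one label still leaves a combination unique across all presents.
presents = {
    0: {(0, 1), (2, 1), (5, 2)},
    1: {(1, 1), (3, 1), (5, 2)},
}

# Build the table: each (n-1)-element remainder determines the nth label.
table = {}
for t, labels in presents.items():
    for missing in labels:
        rest = frozenset(labels - {missing})
        table[rest] = missing

# Knowing all but one label of a present pins down the remaining label:
print(table[frozenset({(0, 1), (2, 1)})])   # the missing label of present 0
```

Any explicit function agreeing with this table on its entries would serve, which is exactly why infinitely many such functions (and hence flaw-free explanations) exist.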

 

Anyone who has any training in mathematics at all certainly knows that there exists an infinite number of explicit functions which will exactly yield that table. Each one will give a different result for points not actually in the table. This result can be used to construct a very simple F=0 rule which will yield exactly our explicit indices representing all known presents (collections of x, tau indices on which our explanation is based). That is to say, there exists a flaw-free explanation associated with every function which fits that table. Clearly the number of flaw-free explanations is infinite.

 

What we want, using Ockham’s razor, is the simplest such function which will fit any given such table. This is the chain of thought behind my using

[math]

F=\sum_{i \neq j} \delta(x_i - x_j) \delta(\tau_i - \tau_j) =0

[/math]

 

In any finite table, this equation will constrain all labels to be different and any specific collection of labels can be reproduced by the simple act of adding “invalid ontological elements” until all the wrong answers are eliminated. That may sound like an insane suggestion; however, it's really not as insane as it sounds (consider vacuum polarization). Clearly it must represent a flaw-free explanation and it ends up yielding some very surprising results.
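As a discrete sanity check of that rule (my own sketch, with Kronecker deltas standing in for the Dirac deltas and invented label values), F counts coincident (x, tau) pairs over all i ≠ j, so F = 0 exactly when all labels differ:

```python
# Discrete analog of F = sum over i != j of delta(x_i - x_j) delta(tau_i - tau_j):
# each term contributes only when both the x and tau labels coincide, so F = 0
# precisely when no two (x, tau) labels are the same.

def F(labels):
    return sum(
        1
        for i, (xi, ti) in enumerate(labels)
        for j, (xj, tj) in enumerate(labels)
        if i != j and xi == xj and ti == tj
    )

print(F([(0, 1), (2, 1), (5, 2)]))   # all labels distinct, so F = 0
print(F([(0, 1), (0, 1), (5, 2)]))   # a repeated label makes F nonzero
```

In the product F·Psi = 0, a nonzero F at some argument therefore forces Psi (and hence the probability of that set of labels) to vanish there.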

Now, I did a quick search for the Dirac delta function (which is what I think you are using) to get the basic idea of what it does. If I understand this right, the arguments in the summation are the x and tau elements of the coordinates of the ith and jth elements, and the delta function will give the result of 0 as long as these are not the same as one of the elements that we have already set; this should have the effect of ensuring that no two elements have the same coordinates. You then multiply it by the function [imath]\vec{\Psi}[/imath], and I don’t see this as having an effect on ensuring that no two elements have the same coordinates, but I don’t understand what else it is doing.
Actually, what I am saying is that the product [imath]F\vec{\Psi}[/imath] must always equal zero as, if F is not zero (i.e., two of the (x,tau) indices are the same) then [imath]\vec{\Psi}[/imath] must be zero (i.e., the probability of seeing that set must be zero) and, if [imath]\vec{\Psi}[/imath] is not zero (the probability of seeing that set of indices is some real non zero number) then F must be zero (no two indices are the same).

 

As a result, if [imath]\vec{\Psi}[/imath] is indeed the correct function (the function which yields exactly our table of indices) then the statement

[math]

F\vec{\Psi}=\sum_{i \neq j} \delta(x_i - x_j) \delta(\tau_i - \tau_j) \vec{\psi}(x_1,\tau_1,x_2,\tau_2, \cdots , x_n, \tau_n, t) =0

[/math]

 

is an undisputed fact!

After reading through those posts some more it is starting to sound like symmetry is just the lack of an axiom referring to where the origin of the coordinate system goes, or perhaps just an axiom that says that we don’t know where the origin of the coordinate system is.
I personally prefer the statement that every conceivable symmetry can be seen as a statement of some specific instance of ignorance. That is, the fundamental issue of symmetry arguments (“the most powerful arguments which can be made”) is that they are based on “conservation of ignorance”. Essentially, information not available in the statement of the problem can not be found in the solution of the problem. Mathematics is a very complex tautology!

 

To propose an axiom, “that we don’t know where the origin of the coordinate system is”, seems to me to be a rather clumsy statement of the issue. More to the point, the issue is that you just can’t pluck information from nothing.

 

If you can understand this post, I think you are well on the way to understanding what I am presenting.

 

Have fun -- Dick

 

P.S. Learning LaTeX is not trivial. Try the latex editor at LaTeX Equation Editor

Link to comment
Share on other sites

...You should understand that, objectively, there exists absolutely no way of establishing whether or not a given element is really an objective part of reality or is actually no more than an invented illusionary element required by the proposed explanation...
Just so I understand, are you saying there are two distinct types of "given elements" intermingled in your set: (1) elements of objective reality--what can be called the metaphysical given elements, and/or (2) elements that are pure illusionary and invented by the human mind ? So, suppose metaphysical elements are M and illusionary elements are I, then a valid set could be the random elements: {M1, I1, I2, M2, M3, M4, I3, M5,....to some finite number of elements }--but you would have no a priori way to establish if any specific element is real and which is invented ? But, what if this is incorrect. What if there is only one type of given element, but that each element is a dialectic intermingle of objective + illusionary attributes, and it depends how the element is measured which attribute is expressed ? I just wonder what this possibility does to your equation of explanation.
Link to comment
Share on other sites

Are there any important properties of the n dimensional abstract vector space that you are using that are worth pointing out at this point?

Essentially, if we are given a collection of x and tau indices representing the “present” referred to by the index t, then the magnitude of [imath]\vec{\Psi}[/imath] is proportional to the probability of that specific collection of x and tau indices. That is what [imath]\vec{\Psi}[/imath] represents because that is what [imath]\vec{\Psi}[/imath] is defined to be. I do not know how to discover [imath]\vec{\Psi}[/imath]. The only real issue of interest at this point is, “does [imath]\vec{\Psi}[/imath] exist”.

If I understand this right, all that is important is the length of the normalized vector that [imath]\vec{\Psi}[/imath] gives. As I understand it, no matter where the coordinate axis is located, as long as the elements in the argument are the same, the normalized function has to give the same probability (that is, the function is not of the coordinates but of the elements that the coordinates are for). But what about the function before it has been normalized: does it also maintain the same length? It seems to me that it would have to, because if it gave a different length it would have to be a different function.

Now, when the possibilities become infinite in number, normalization runs into some difficulties; however, those difficulties can be handled with a little logic. If you want me to go into that issue now I will; but, for the moment, it really has little bearing on the subject under discussion. If you have an explanation which yields your expectations (probabilities of outcomes) then a method exists for getting from the collection of indices to those probabilities.
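For a finite list of possibilities, the normalization being discussed can be sketched as follows (my own illustration; the amplitudes are made up):

```python
# Sketch: normalization just rescales the squared magnitudes of Psi so they
# sum to 1.  The ratios between probabilities -- the actual expectations --
# are untouched, which is why the unnormalized function carries the same
# information as the normalized one.
import math

psi = [3 + 4j, 1 + 0j, 0 + 2j]            # unnormalized amplitudes
norm = math.sqrt(sum(abs(a) ** 2 for a in psi))
probs = [abs(a / norm) ** 2 for a in psi]

print(sum(probs))                          # 1.0 (up to rounding)
print(abs(psi[0]) ** 2 / abs(psi[1]) ** 2) # 25.0, ratio before normalizing
print(probs[0] / probs[1])                 # 25.0, same ratio afterwards
```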

For now I see little reason to go into it if it has no immediate use, though it sounds like we will have to go into it at some point.

I take a rather different view. As I see it, part of what our explanations require to exist might actually exist while, at the same time, some of the elements required by our explanations might just be illusions created by our subconscious in order to make that explanation work. I just think my perspective is much more objective than the common idea that “it’s all real”. The important factor here is that, if your explanation is to be “flaw-free”, there can be no way of telling the difference (and that would be, no experiment which would display the difference). The moment you came up with an experiment proving some element of your explanation was an illusion, that explanation would be discarded. But that fact cannot be taken as proof that no elements of your explanation are illusion; however, that is the assumption made by every scientist I have ever talked to. (Intellectual lemmings????) They all presume that, if you can’t prove some element of their theory is false, it is real. This totally overlooks the power and value of using illusionary elements.

This makes some sense, but don’t we have to assume that there is more than one possible flaw-free explanation if we are going to have this view? That is, if there is only one flaw-free explanation, then it is the only possible reality, and if we can find it we know that it must be reality. Of course, I think that we have already agreed that there is an infinite number of possible functions that are flaw-free and can describe any list of elements, so it would seem that we have no way to tell what is real and what is an illusion.

The first step was to create elements such as to make the number of arguments in [imath]\vec{\Psi}[/imath] the same for all t indices. The second step was to add fictional elements sufficient to make all presents indexed by t different. That step allows t to be recovered from the list of x and tau indices associated with that t: i.e., the function [imath]\vec{\Psi}[/imath] is no longer double valued and thus, if we know all the relevant x and tau, we also know t.

I see little trouble with doing this, in that we have to conclude that we add invalid elements all the time and can’t tell the difference, so why not add them for this reason. But don’t we have to make sure that all such elements that we add are allowed by the function [imath]\vec{\Psi}[/imath] (that is, they have a probability greater than 0)?

 

[math]

F=\sum_{i \neq j} \delta(x_i - x_j) \delta(\tau_i - \tau_j) =0

[/math]

 

In any finite table, this equation will constrain all labels to be different and any specific collection of labels can be reproduced by the simple act of adding “invalid ontological elements” until all the wrong answers are eliminated. That may sound like an insane suggestion; however, it's really not as insane as it sounds (consider vacuum polarization). Clearly it must represent a flaw-free explanation and it ends up yielding some very surprising results.

If I understand you right, it doesn’t sound so much like an insane suggestion as it sounds impractical to accomplish. Although it sounds more like something whose importance is that it can be done than something that we actually plan to do.

I’m not quite sure what you are doing here, but what I think you are doing is adding elements until all elements that are on any particular list of ontological objects have been added; after doing so, we know that any elements in that list can be recovered simply by finding all objects such that F = 0.

 

[math]

F\vec{\Psi}=\sum_{i \neq j} \delta(x_i - x_j) \delta(\tau_i - \tau_j) \vec{\Psi}(x_1,\tau_1,x_2,\tau_2, \cdots , x_n, \tau_n, t) =0

[/math]

 

Then, this says that all elements that can be in the list of elements are in the list of elements (that is, if the object is in the list then F = 0, and if the object is not in the list then the probability of it being in the list is 0). The way you have this written, shouldn’t the right side of this be a zero vector and not just a zero?

Doing this is fine as long as the list is finite, but what if there are an infinite number of possible elements in that list?

If you can understand this post, I think you are well on the way to understanding what I am presenting.

I don’t know that I understand everything in this post, but I think I have some idea of what you are talking about.

Link to comment
Share on other sites

Okay, survived yet another Christmas :)

 

Hi Anssi, it's nice to hear from you again and I am sorry that we need to cover so much mathematics. I read in a recent article that Finland is number one in the world in mathematics education; too bad your teachers failed to perk your interest in the subject when you were young and picked things up like a sponge. On the other hand, perhaps they are number one for the very same reason your interest was destroyed.

 

Heh, perhaps. I had an exceptionally good math teacher for 8th and 9th grade though. Too little too late? :)

 

I have one question about post #89 still. Earlier I thought I figured it out already, but now I'm not so sure.

 

[math]

\alpha_{qx}\sum_{i=1}^n \vec{\alpha_i}\cdot \vec{\nabla_i} = \left\{\sum_{i=1}^n \vec{\alpha_i}\cdot \vec{\nabla_i} \right\}(-\alpha_{qx}) +\frac{\partial}{\partial x_q}

[/math]

 

So, trying to make sure I understand exactly how that last term [imath]\frac{\partial}{\partial x_q}[/imath] "arises when i=q". I.e. when this occurs in the sum;

[math]\alpha_{qx}\alpha_{qx}\frac{\partial}{\partial x_q}[/math]

 

I first thought the term arises because [math]\alpha_{qx}\alpha_{qx}[/math] would be 1, but then it was aa+aa = 2aa = 1, so it would seem [math]\alpha_{qx}\alpha_{qx} = 0.5[/math]

 

So now I'm not quite sure how to end up with just [imath]\frac{\partial}{\partial x_q}[/imath]

 

Other than that, I think I understand everything of #150 up to the point you "sum both sides of the equation over q", ending up with:

 

[math]

\left\{\sum_i -\vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}-\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\sum_q\alpha_{qx}\vec{\Psi} +\sum_q\frac{\partial}{\partial x_q}\vec{\Psi}= iKm\sum_q\alpha_{qx}\vec{\Psi}.

[/math]

(You had that dot in the end, I suppose that's meaningless and there by accident?)

 

I suppose the entire equation still stands, but I can't figure out what it means to sum over q. I thought q is just one specific index. How does that expand into a sum?

 

I'm guessing that has something to do with how the definitions [imath]\sum_i \vec{\alpha}_i \vec{\Psi} = \sum_{i \neq j}\beta_{ij} \vec{\Psi} = 0[/imath] are used to make most of the terms to vanish. Did these definitions arise from some logical constraint that was deduced earlier, or are they defined just for the purpose of making some terms vanish? I think I need some clarification about their use.

 

Actually I couldn't figure out why those definitions mean the sum over [imath]\alpha_{qx}\vec{\Psi}[/imath] vanishes. Hmm...

 

Other than that, I'm starting to feel like I'm close to figuring out this equation... There's definitely some light at the end of the tunnel...

 

-Anssi

Link to comment
Share on other sites

Just so I understand, are you saying there are two distinct types of "given elements" intermingled in your set: (1) elements of objective reality--what can be called the metaphysical given elements, and/or (2) elements that are pure illusionary and invented by the human mind ? So, suppose metaphysical elements are M and illusionary elements are I, then a valid set could be the random elements: {M1, I1, I2, M2, M3, M4, I3, M5,....to some finite number of elements }--but you would have no a priori way to establish if any specific element is real and which is invented ? But, what if this is incorrect. What if there is only one type of given element, but that each element is a dialectic intermingle of objective + illusionary attributes, and it depends how the element is measured which attribute is expressed ? I just wonder what this possibility does to your equation of explanation.

 

If you look at the concepts that Doctordick is talking about as a set of definitions, then they are exactly the definitions that are required to express the logical constraints and relationships that we are talking about.

 

What you seem to suggest is just different definitions (a different way to see the issue), and certainly you could not use the same exact equations to express whatever logical consequences there might exist.

 

Or perhaps you are going further than that, and when you say "What if there is only one type of given element", you are not just talking about how we might choose to classify reality, but are actually suggesting that perhaps the ontological reality is like that, and asking whether in that case this treatment is invalid?

 

People certainly like to think of their attempts to understand reality as attempts to uncover how reality really breaks into individual pieces (with identity), but that does turn counter-productive fairly quickly.

 

Think about different sorts of "snow". Some snow is wet, some is like powder. We might choose to classify the wet snow and the dry snow as two different materials, or we might not, depending on if that is deemed useful or not. When there's two people arguing about the ontological reality, all I hear is arguments about whether dry and wet snow are both actually "snow" or not. Or about whether snow is actually water or if rather water is actually snow.

 

A good example is that I hear common people regularly making the assertion that "4th dimension of reality is time". They heard somewhere that that is how some physicists describe reality, as a 4-dimensional spacetime block, and thought that means it's the ontological reality and therefore the "4th dimension of reality is time". It's hard to contain myself and not start an argument about whether it's actually the "3rd dimension of reality" that is time! :shrug::lol::)

 

Anyway, where was I... Oh yeah, so when Doctordick is referring to invalid and valid elements, perhaps it's easier to think of that issue as a consequence of our ability to tack identity onto whatever features of reality we see fit (i.e. label things in such way that we consider their identity to persist through time... think of the differences in the identity of things between different QM interpretations).

 

Or perhaps you are in fact wondering what if one had a valid worldview where there would be considered to exist only "one type of given element, but that each element is a dialectic intermingle of objective + illusionary attributes, and it depends how the element is measured which attribute is expressed ?". I'm not entirely sure what that means, but I suppose you could map that worldview onto the x,tau,t-table simply by labeling each differently manifested attribute (at any given moment) with a different label. One might say; you could simply choose to think of them as different sorts of elements when they manifest themselves differently! See the semantics?

 

Make no mistake about this; the treatment would still not tell you if you are really seeing "the same element manifesting itself differently", or if they are actually "different sorts of elements". To think about that question is exactly identical to the sleet vs. snow argument, or which "dimension of reality" is "time" -> completely meaningless ontologically because we invented the definitions that cause us to consider anything as something. We decide whether something is considered the "same" or "different" element through time.

 

-Anssi

Link to comment
Share on other sites

Hi Anssi, glad to see you back. I also think you are getting very close to understanding the mathematics I have presented so far. Don’t get too self confident. The next step is solving that differential equation and that is not a trivial issue. Meanwhile, let me start at the end of your post.

Actually I couldn't figure out why those definitions mean the sum over [imath]\alpha_{qx}\vec{\Psi}[/imath] vanishes. Hmm...
The definitions do not mean the indicated sums vanish. The vanishing of the sum is a further constraint on the definitions of the collections of alpha and beta operators. Note that the earlier definitions (the commutation relationships) define the squares of the elements but not the actual value: even if their application on [imath]\vec{\Psi}[/imath] is defined to be a real number, its value is only defined via its square.
I first thought the term arises because [math]\alpha_{qx}\alpha_{qx}[/math] would be 1, but then it was aa+aa = 2aa = 1, so it would seem [math]\alpha_{qx}\alpha_{qx} = 0.5[/math]
You are correct but that means [imath]\alpha_{qx}\vec{\Psi}= \sqrt{1/2}\vec{\Psi}[/imath]. Clearly, it is possible (under a specific definition of these alpha and beta operators) that the correct answer for [imath]\alpha[/imath] operating on [imath]\vec{\Psi}[/imath] could be either plus or minus [imath]\sqrt{1/2}[/imath]. So the statement that [imath]\sum_k \vec{\alpha}_k \vec{\Psi} = \sum_{k \neq l}\beta_{kl} \vec{\Psi} = 0[/imath] is no more than an additional constraint on the definitions of those alpha and beta operators (and please note, zero is zero and zero times anything is zero). Thus, it is true by definition and we can simply regard it as a fact in our algebra.
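As a concrete illustration (entirely my own, not part of the post): two anticommuting operators that each square to 1/2 can be represented by scaled Pauli matrices, and the two eigenvalues ±√(1/2) show why the squares alone do not fix the sign:

```python
# Sketch: alpha operators with aa + aa = 2aa = 1, i.e. each one squaring
# to 1/2, represented as Pauli matrices divided by sqrt(2).  The eigenvalues
# of each operator are +/- sqrt(1/2), so the action on Psi is only fixed up
# to sign -- which is why an additional constraint can be imposed.
import numpy as np

s = 1 / np.sqrt(2)
a1 = s * np.array([[0, 1], [1, 0]], dtype=complex)     # sigma_x / sqrt(2)
a2 = s * np.array([[0, -1j], [1j, 0]], dtype=complex)  # sigma_y / sqrt(2)

print(np.allclose(a1 @ a1, 0.5 * np.eye(2)))           # True: alpha^2 = 1/2
print(np.allclose(a1 @ a2 + a2 @ a1, np.zeros((2, 2))))# True: they anticommute
print(np.linalg.eigvalsh(a1))                          # +/- 0.7071...
```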

 

With regard to the problem you referred to at the beginning of your post, I apologize. I essentially left out several steps which I thought you would see as obvious. I was multiplying (or operating on the left side of the fundamental equation) by [imath]\alpha_{qx}[/imath]. (Note that I used “kl” just above here and am back to using q here. I did that intentionally in order to help you understand that i, j, q, k, … are just letters implying we have an index here standing for some integer. When we perform a sum, the sum symbol has to include the name of the integer index the sum is referring to: i.e., which index is being used to select the terms to be summed.) But back to the algebra (note that I am only showing the left hand side of the fundamental equation):

[math] \alpha_{qx}\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi}=[/math]

[math] \left\{\sum_i \alpha_{qx}\vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\alpha_{qx}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi}=[/math]

[math] \left\{\sum_i \alpha_{qx}\left\{\alpha_{ix}\frac{\partial}{\partial x_i}+ \alpha_{i\tau}\frac{\partial}{\partial \tau_i}\right\} + \sum_{i \neq j}\alpha_{qx}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi}.[/math]

 

Where I have actually written out the terms implied by the vector dot product [imath]\vec{\alpha}_i \cdot \vec{\nabla}_i [/imath]. Continuing with the algebra, the last expression above can be rewritten:

[math] \left\{\sum_i \left\{\alpha_{qx}\alpha_{ix}\frac{\partial}{\partial x_i}+ \alpha_{qx}\alpha_{i\tau}\frac{\partial}{\partial \tau_i}\right\} + \sum_{i \neq j}\alpha_{qx}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi}[/math]

 

and we can explicitly use the commutation rules defining the alpha and beta operators. The rule, for this particular case, being explicitly, [imath]\alpha_{kx}\alpha_{dx}=-\alpha_{dx}\alpha_{kx}+\delta_{kd}[/imath], [imath]\alpha_{kx}\alpha_{d\tau}= -\alpha_{d\tau}\alpha_{kx}[/imath],

and [imath]\alpha_{kx}\beta_{qj}= -\beta_{qj}\alpha_{kx}[/imath]. (Note again that I have changed the index names in my rule; I am not trying to confuse you but rather to get you used to recognizing exactly what is being said.) We can directly apply the commutation rule to the above expression via the following implied results:

[math]\alpha_{qx}\alpha_{ix}= -\alpha_{ix}\alpha_{qx}+\delta_{qi},[/math]

 

[math]\alpha_{qx}\alpha_{i\tau}= -\alpha_{i\tau}\alpha_{qx}[/math]

 

and

 

[math]\alpha_{qx}\beta_{ij}= -\beta_{ij}\alpha_{qx}.[/math]

 

Substituting into the expression above, we now have the following,

[math] \left\{\sum_i \left\{\left(-\alpha_{ix}\alpha_{qx}+\delta_{iq}\right)\frac{\partial}{\partial x_i} - \alpha_{i\tau}\alpha_{qx}\frac{\partial}{\partial \tau_i}\right\} + \sum_{i \neq j}-\beta_{ij}\alpha_{qx}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi}.[/math]

 

Since [imath]\alpha_{qx}[/imath] commutes with all the other expressions in those products shown above, it may be immediately factored outside the right curly brackets. What is left is exactly the same as what we started with except for the change in sign and that [imath]\delta_{iq}[/imath] term. Since [imath]\delta_{iq}=1[/imath] when i=q and zero otherwise, it adds a single term outside the curly brackets, that term being [math]\frac{\partial}{\partial x_q}[/math] (because it occurs only when i=q). Thus it is that (if we operate on both sides of the fundamental equation: i.e., by algebraic rules, we do not then invalidate the equal sign) we have exactly the result:

[math]

\left\{\sum_i -\vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}-\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\alpha_{qx}\vec{\Psi} +\frac{\partial}{\partial x_q}\vec{\Psi}= iKm\alpha_{qx}\vec{\Psi}.

[/math]
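The commutation push-through just derived can be spot-checked with a small matrix representation of the alphas, treating the derivative operators as ordinary commuting numbers (a sketch of my own, not part of the original derivation):

```python
# My own spot-check: with two indices, represent the alphas by Pauli
# matrices scaled so each squares to 1/2, and treat the derivatives as
# commuting numbers d_i.  Because {alpha_q, alpha_i} = delta_qi, pushing
# alpha_q through M = sum_i alpha_i d_i flips the sign and leaves behind
# the single extra term d_q, exactly as in the derivation above.
import numpy as np

s = 1 / np.sqrt(2)
alpha = [s * np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_x / sqrt(2)
         s * np.array([[0, -1j], [1j, 0]], dtype=complex)]   # sigma_y / sqrt(2)

d = [0.7, -1.3]              # stand-ins for the commuting derivative operators
M = d[0] * alpha[0] + d[1] * alpha[1]

q = 0
lhs = alpha[q] @ M
rhs = M @ (-alpha[q]) + d[q] * np.eye(2)
print(np.allclose(lhs, rhs))   # True
```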

 

The dot is there because the expression is part of an English sentence and it is the period showing the end of the sentence. I was taught that proper punctuation was to be used in mathematical publications. (Sometimes I fail to use the proper punctuation and I apologize for that omission if and when it happens.) At this point, if (using the normal rules of algebra) we sum both sides of that equation over the index q, we will get the expression,

[math]

\left\{\sum_i -\vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}-\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\left\{\sum_q\alpha_{qx}\right\}\vec{\Psi} +\left\{\sum_q\frac{\partial}{\partial x_q}\right\}\vec{\Psi}= iKm\left\{\sum_q\alpha_{qx}\right\}\vec{\Psi},

[/math]

 

as q only appears in the [imath]\alpha_{qx}[/imath] terms plus that single (unsummed on i) partial derivative with respect to [imath]x_q[/imath]. Furthermore, since we have already defined the sum over all k of the expression [imath]\alpha_{kx}[/imath] (note my change from q to k there) to be zero, only that sum over differentials remains and we deduce that, if we have the correct [imath]\vec{\Psi}[/imath], then we know that our constraint,

[math]\sum_i \frac{\partial}{\partial x_i}\vec{\Psi}=0,[/math]

 

is satisfied.
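Numerically, that constraint is just shift symmetry: any [imath]\vec{\Psi}[/imath] that depends only on coordinate differences satisfies it. A quick finite-difference check (my own sketch; the Gaussian is a made-up stand-in for [imath]\vec{\Psi}[/imath]):

```python
# Sketch: a Psi depending only on coordinate *differences* -- carrying no
# information about where the origin is -- automatically satisfies
# sum_i dPsi/dx_i = 0.  Checked here with central finite differences.
import math

def psi(x1, x2, x3):
    # depends only on differences, so a global shift changes nothing
    return math.exp(-((x1 - x2) ** 2 + (x2 - x3) ** 2))

h = 1e-6
x = (0.3, 1.1, -0.4)

def partial(i):
    up = list(x); dn = list(x)
    up[i] += h; dn[i] -= h
    return (psi(*up) - psi(*dn)) / (2 * h)

total = sum(partial(i) for i in range(3))
print(abs(total) < 1e-8)   # True: the shift-symmetry constraint holds
```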

 

Unless you voice another question, I will presume you understand how the deduced constraints are contained in that fundamental equation. Let me know if you feel you understand everything up to the fundamental equation and I will continue by pointing out some subtle issues which arise out of the infinities we have introduced and how those issues can be handled.

 

Meanwhile, I think it would be very useful to read my responses to both Bombadil and Rade just to make sure you and I are on the same wave length so to speak.

Are there any important properties of the n dimensional abstract vector space that you are using that are worth pointing out at this point?
Not really; the only reason I express the explanation as an n dimensional abstract vector is in order to assure that no process of getting from the references to the ontological elements to your expectation for those specific references is omitted: i.e., all procedures of obtaining expectations can be so expressed.
If I understand this right, all that is important is the length of the normalized vector that [imath]\vec{\Psi}[/imath] gives. Now, how I understand this, no matter where the coordinate axis is located, as long as the elements in the argument are the same, the normalized function has to give the same probability (that is, the function is not of the coordinates but of the elements that the coordinates are for).
It isn’t absolutely clear but I do get the impression you do fundamentally understand the relationship implied here.
But, what about the function before it has been normalized does it also maintain the same length? It seems to me that it would have to because if it gave a different length it would have to be a different function.
Before normalization, its length is proportional to the required probabilities; all normalization does is make the proportionality 1:1.
… it would seem that we have no way to tell what is real and what is an illusion.
As has been said many times (on this very forum and in many professional publications), one can not prove solipsism is false and, as solipsism is the philosophic position that everything is illusion, it certainly follows that there exists no way to separate reality from illusion; however, it is just as true to say that you cannot prove there does not exist something real behind our concepts of reality. It seems to me that the only rational position is to assume both real and illusionary ontological elements exist behind our thoughts: i.e., it seems rather ridiculous to presume one knows what is real.
… don’t we have to make sure that all such elements that we add are allowed by the function [imath]\vec{\Psi}[/imath] (that is, they have a probability greater than 0)?
Here you sort of have the cart on the wrong side of the horse. [imath]\vec{\Psi}[/imath] is defined to be the function which yields your expectations. If you have a flaw-free explanation, since that explanation results in your having specific expectations, a method must exist to get from a description of a situation to your expectation for that situation. Certainly your explanation would not include elements not required by it; thus your explanation must yield non-zero probabilities for the occurrences of these elements, and the method by which you generate these expectations is exactly what is to be encompassed in the function [imath]\vec{\Psi}[/imath]: i.e., if [imath]\vec{\Psi}[/imath] does not, then it simply is not the correct function. My fundamental equation has nothing to do with this fact; it simply places specific additional constraints on that explanation required by symmetry and compliance with the rational paradigm I have introduced.
Then, this says that all elements that can be in the list of elements are in the list of elements, (that is if the object is in the list then F = 0 and if the object is not in the list then the probability of it being in the list is 0). How you have this written, shouldn’t the right side of this be a zero vector and not just a zero?
As I said to Anssi above, zero is zero; a zero vector has no components so is it really any different from a simple zero? If you like, you could use the word “Null” to signify the same thing; the null set consists of nothing and is quite equivalent to the concept zero.
Doing this is fine as long as the list is finite but what if there are an infinite number of possible elements of that list?
Nowhere am I actually proposing to do such a thing. All I am saying is that, in the abstract, the constraint is a valid constraint: i.e., it will yield the results we desire. These infinities do indeed generate some problems (problems in our theories but not actually in reality as you cannot have an infinite number of tests of that theory). You simply presume that, if the theory (your explanation) works in a finite (sufficiently large) number of situations, it will always work. You certainly see that such a thing has to be an assumption don’t you?
I don’t know that I understand everything in this post, but I think I have some idea of what you are talking about.
I think you are awfully close to understanding the central thrust of it.
Just so I understand, are you saying there are two distinct types of "given elements" intermingled in your set: (1) elements of objective reality--what can be called the metaphysical given elements, and/or (2) elements that are pure illusionary and invented by the human mind ? So, suppose metaphysical elements are M and illusionary elements are I, then a valid set could be the random elements: {M1, I1, I2, M2, M3, M4, I3, M5,....to some finite number of elements }--but you would have no a priori way to establish if any specific element is real and which is invented ?
I am using the adjective “valid” to identify the elements of objective reality: that is, elements whose existence must be explained in every flaw-free explanation, and are apparently what you mean by M.

 

Another way of visualizing what I am talking about is to use a term common to many discussions which I would associate with specific elements associated with a given t (i.e., a specific present). I think the word “event” very much captures this concept. If you understand what I mean, one can speak of “real events” as opposed to presumed events.

 

In deference to Qfwfq who seems to believe Zeno’s paradox to be a trivial issue, I suspect that Zeno was considering the issue of continuity itself and was trying to get people to comprehend that it is an assumption utterly impossible to prove. The fact that we identify two events to be the same object at two different points, A and B, and then jump to the conclusion that the object moved along a continuous path from A to B, is an unprovable hypothesis because the proof would require us to have specific information about the existence of an infinite number of events. The “fact” that the object moved through those points is illusionary in the sense that it is a presumed fact necessary to our ordinary explanation of the two events. Certainly other explanations exist: it could have magically disappeared at A and then magically reappeared at B.

But, what if this is incorrect. What if there is only one type of given element, but that each element is a dialectic intermingle of objective + illusionary attributes, and it depends how the element is measured which attribute is expressed ? I just wonder what this possibility does to your equation of explanation.
Associating attributes with an element is an aspect of the explanation. Without an explanation, your question is meaningless. My whole intent here is to allow all possibilities and you are describing a specific possibility inherent to some explanation. My position is simply that, in the final analysis, any flaw-free explanation of any collection of events need only explain the “real” aspects of reality. The illusory component, even if you want to attach it to specific “real” events (as I do with the introduction of tau) is a component of the explanation, not at all a component of reality. That explanation is “flaw-free” only so long as you cannot prove these components are illusory.

 

Another subtle point here is the fact that my representation is a new paradigm in the sense that I am not really creating a model of any explanation but rather, I am using the power of creating invalid ontological elements to develop an explanation of any explanation. That is why I keep saying there exists an interpretation which conforms to my formula and I do not say that all explanations themselves conform to that equation. Just as scientists generally put forth interpretations of events which explain a lot of “superstitious” explanations; this fact cannot be taken to imply those “superstitious” explanations obey the scientist’s equations as the “superstitious” explanations often omit many elements presumed real by the scientists. (By the way, I use the word “superstitious” as a substitution for “non-scientific” because that is the interpretation of the circumstances taken by many scientists.)

 

Have fun -- Dick

Link to comment
Share on other sites

Nowhere am I actually proposing to do such a thing. All I am saying is that, in the abstract, the constraint is a valid constraint: i.e., it will yield the results we desire. These infinities do indeed generate some problems (problems in our theories but not actually in reality as you cannot have an infinite number of tests of that theory). You simply presume that, if the theory (your explanation) works in a finite (sufficiently large) number of situations, it will always work. You certainly see that such a thing has to be an assumption don’t you?

Yes, I can see that it is an assumption, although one commonly made without notice; we still can only assume that if something is repeated with the same result, it will always give the same result.

 

Now, if I understand your equations correctly, the equation

 

[math]

\frac{d}{dx_i}P=\sum_i \frac{\partial}{\partial x_i}P = \left\{\sum_i \frac{\partial}{\partial x_i}\vec{\psi}^\dagger \right\}\cdot \vec{\psi} + \vec{\psi}^\dagger \cdot \left\{\sum_i \frac{\partial}{\partial x_i}\vec{\psi}\right\} = 0

[/math]

 

in which [math]\vec{\psi}[/math] is the normalized function and [math]\vec{\psi}^\dagger[/math] is its complex conjugate, will be satisfied by the equation

 

[math]

\sum_i \frac{\partial}{\partial x_i}\vec{\psi}(x_1,\tau_1,x_2,\tau_2, \cdots, x_n,\tau_n,t) = -iK\vec{\psi}

[/math]

 

Now I’m afraid that at this point there are a couple of things I don’t understand about this. The first is, I think, the same problem that Qfwfq had (that being that K could be a function of [math](x_1,\tau_1,x_2,\tau_2, \cdots, x_n,\tau_n,t)[/math]). The second is that I don’t understand where the negative sign or the [math]\sqrt{-1}[/math] comes from. I also see no reason that the constant in the partials to x has to equal the constant in the partials to [math]\tau[/math]. I suspect that all of these come from the same place that you talk about in post 77, that being other symmetries that you have not gone into yet. You say there that it would be best to put these off until you have started to show solutions to the equations. If that is the case and you want to put off this subject until you have started to show some of the solutions, I am willing to pass over the issue for the time being, with the intention that we will come back to it at a later time.

 

I have looked through your proof that your fundamental equation satisfies both of these constraints and it looks like the result is that it satisfies the constraint that

 

[math]

\sum_i \frac{\partial}{\partial x_i}\vec{\Psi}=0,

[/math]

 

When I thought it was supposed to satisfy the constraint that

 

[math]

\sum_i \frac{\partial}{\partial x_i}\vec{\psi}(x_1,\tau_1,x_2,\tau_2, \cdots, x_n,\tau_n,t) = -iK\vec{\psi}

[/math]

 

Now I suspect that the other constraints on the partials are found by picking a different substitution for [math]\alpha[/math] when it is commuted through the equation; is this correct?

Also, I’m wondering where you came up with the rules for the anticommuting elements; are these common rules for anticommuting elements?

 

I would also like to know if there are any other things that I have not gone into that I need to make sure I understand, or anything that I need to go back to and go over again, before I try to follow you in solving the fundamental equation, which it looks like you and Anssih are just about ready to go into.

Link to comment
Share on other sites

Thank you for the clarifications Dr. This bit revealed what I was missing:

 

===quote Doctordick===

What is left is exactly the same as what we started with except for the change in sign and that [imath]\delta_{iq}[/imath] term. The [imath]\delta_{iq}=1[/imath] when i=q and zero otherwise so it adds a single term outside the curly brackets, that term being [math]\frac{\partial}{\partial x_q}[/math]

=================

 

So, what happens when i=q is:

 

[math]

(-\alpha_{qx}\alpha_{qx}+1)\frac{\partial}{\partial x_q} = -\alpha_{qx}\alpha_{qx}\frac{\partial}{\partial x_q} + 1\frac{\partial}{\partial x_q}

[/math]

 

And now the last term can be moved outside of the brackets, correct?

 

(note that I used a “kl” just above here and am back to using a q here. I did that intentionally in order to help you understand that i, j, q, k, … are just letters implying we have an index here standing for some integer. When we perform a sum, the sum symbol has to include the name of the integer index the sum is referring to: i.e., which index is being used to select the terms to be summed.)

 

Yes, I thought I'd understood that but this revealed one false assumption I had made. I thought there was just another small ambiguity in the notation and the index "q" referred to some single, specific alpha (=always the same alpha, unlike "i"), but now I guess not...

Actually before you answer that, let me first ask you about the notation of "sum" in general. Way back in PF, the first time you used it like this:

 

[math]

\sum_{i=1}^{i=n}

[/math]

 

That makes sense as it defines the starting and ending values of i.

 

Later, I saw this:

 

[math]

\sum_{i=1}^n

[/math]

 

I assume that essentially means the same thing(?)

 

Then I saw this:

 

[math]

\sum_{i}

[/math]

 

I suppose that too means the same thing? I suppose you simplify it just to make the equations look cleaner?

 

Isn't it funny that we cannot escape human ambiguity even in mathematical notations? :)

 

So back to those i, j, q indices. Earlier I thought q referred to some arbitrarily chosen specific alpha because that way when you sum over i, at some point i=q. Consequently I didn't understand what [imath]\sum_q\alpha_{qx}\vec{\Psi}[/imath] could possibly mean. But now I suppose it means:

 

[math]

\sum_{q=1}^{q=n}\alpha_{qx}\vec{\Psi}

[/math]

 

?

Except, I suppose it doesn't matter if the starting value is 1 or something else?

 

Sorry about obsessing on these little details. I kinda have to :)

 

The dot is there because the expression is part of an English sentence and it is the period showing the end of the sentence. I was taught that proper punctuation was to be used in mathematical publications. (Sometimes I fail to use the proper punctuation and I apologize for that omission if and when it happens.)

 

Heh, yeah just had to make sure :)

 

Unless you voice another question, I will presume you understand how the deduced constraints are contained in that fundamental equation. Let me know if you feel you understand everything up to the fundamental equation and I will continue by pointing out some subtle issues which arise out of the infinities we have introduced and how those issues can be handled.

 

There's just one more little thing I need to make sure of;

 

When you say sum over [imath]\alpha_{qx}\vec{\Psi}[/imath] is 0 by definition, you are referring to the definition:

 

[math]

\sum_i \vec{\alpha}_i \vec{\Psi} = 0

[/math]

 

?

So even though it's not exactly [imath]\alpha_{qx}[/imath] in the definition, that definition nevertheless implies the sum over [imath]\alpha_{qx}\vec{\Psi}[/imath] to be 0? (which seems to make sense as the tau component can be seen as 0 in every alpha? I think :)

 

If that's correct, yes I think I have (somewhat superficial) understanding of that equation and how it turns into [imath]\sum_i \frac{\partial}{\partial x_i}\vec{\Psi}=0[/imath]

 

Meanwhile, I think it would be very useful to read my responses to both Bombadil and Rade just to make sure you and I are on the same wave length so to speak.

 

From what I've read, we seem to be. I skimmed through the posts fairly quickly though, and skipped most of that one long post (because I supposed we'd pretty much be on the same wave length anyway). But I'll prolly read that one too at some point :)

 

EDIT: Oh you meant your responses on that same post. Yeah we seem to be on the same page from what I can tell. You response to Rade about assuming event A and B are the "same object" is essentially what I meant in my reply to Rade, when I said we "tack identity onto whatever features of reality we see fit". Since there are many valid ways to identify "objects", I often refer to "semantical worldview". Of course I often get accused of claiming "everything is semantics", but what can I do ;)

 

-Anssi

Link to comment
Share on other sites

Well, it’s 2008, happy new year to the crowd. Sorry I have been so slow to respond; we’re still in Denver being entertained by our 18 month old granddaughter. I have been rereading this thread, trying to understand the difficulty people have with my presentation. I edited some minor errors and inserted a PS in post #112 to Qfwfq.

 

Bombadil, I checked your profile and was surprised by how few posts you have made. Being such a small number, I read them all. I might comment that the education level on most “science forums” is quite low and that hypography.com is probably among the best; however, I think, even here, you have to take most of the responses you obtained with a grain of salt so to speak. In particular, many posters' understanding of relativity is rather rudimentary. There is a lot of history in “word of mouth” stuff that just doesn’t show up in books; a formal education by experienced teachers is well worth the cost, especially in theoretical physics, as you need to know what led others to think the way they did.

 

Learning mathematics from a book can be difficult and I sympathize with your circumstance. One thing I would say is, never accept anything without having convinced yourself that it is correct. As Anssi has commented just above, ambiguity exists in mathematics notation as it does in any language; the only difference is that mathematicians have made a great effort to make sure the context sufficient to resolve those ambiguities is small enough to effectively resolve the problems as they occur.

 

You seem to have a misunderstanding of my use of shift symmetry to deduce a differential relationship. But, before we get into that, I will make another attempt to clarify the earlier difficulty with Qfwfq.

 

Just as an aside, let me comment on the character of errors which can exist in any thesis. Any scientific field may be seen as a body of unexamined assumptions (things thought to be so obviously true that they are simply not examined) together with postulated relationships (those things specifically held forth as the basis of the field; including any examined assumptions) and the logical deductions which may be obtained from those relationships. Errors may occur in any of those three areas; however, the character and consequences of those errors vary quite considerably.

 

Errors in unexamined assumptions are the most difficult to uncover for the very simple fact that suspicion of error seldom exists there. Discovery of these errors is often the basis of what most call major scientific breakthroughs. Finding errors in the postulated relationships is a straightforward issue requiring only time and diligence. These are the errors searched for by common scientific research: i.e., checking and comparing experimental results. Finally, errors in deduction, if they exist at all, seldom persist for any length of time. The simple act of communicating a new paradigm contains a tremendous bias against carrying such errors forward. Sooner or later someone will complain about that erroneous deduction.

 

The reason I brought that up is that none of my work has been carefully examined and I have probably made a number of such errors. Qfwfq has definitely uncovered one such error. Not to excuse my stupidity but rather to explain it, I jumped to the relationship

[math]

\sum_i \frac{\partial}{\partial x_i}\vec{\psi}(x_1,\tau_1,x_2,\tau_2, \cdots, x_n,\tau_n,t) = -iK\vec{\psi}

[/math]

because it is essentially a central expression fundamental to conservation of momentum in conventional quantum mechanics. In the presentation of conventional quantum mechanics it is essentially brought forth by postulate and not deduction. If you go back and read the exchange between Qfwfq and myself, you will find that I baulked when he wanted to make K a general function of [imath](x_1,\tau_1,x_2,\tau_2, \cdots, x_n,\tau_n,t)[/imath]. The reason I baulked was because that proposition violated shift symmetry.

 

I soon realized that Qfwfq’s arguments were quite correct; the error in deduction was mine. If one is given the fact that [imath]\vec{\Psi}[/imath] is defined such that the probability of interest is proportional to the magnitude of that vector, one can immediately deduce that if we can prove [imath]\sum_i \frac{\partial}{\partial x_i}\vec{\Psi}[/imath] must vanish, then so must the expression [imath]\sum_i \frac{\partial}{\partial x_i}P[/imath]. On the other hand, if we start with a proof that [imath]\sum_i \frac{\partial}{\partial x_i}P [/imath] must vanish, Qfwfq is perfectly correct, it does not follow that the expression [imath]\sum_i \frac{\partial}{\partial x_i}\vec{\Psi}=-iK\vec{\Psi}[/imath] (where K is a real number, possibly zero) is the only possibility for [imath]\vec{\Psi}[/imath]. As he said, there are a great number of possibilities which will result in [imath]\sum_i \frac{\partial}{\partial x_i}P=0 [/imath].

 

The issue here is that I very definitely want

[math]

\sum_i \frac{\partial}{\partial x_i}\vec{\psi}(x_1,\tau_1,x_2,\tau_2, \cdots, x_n,\tau_n,t) = -iK\vec{\psi}

[/math]

 

to be the only valid extension from the two expressions [imath]\sum_i \frac{\partial}{\partial x_i}\vec{\Psi}=0[/imath] and [imath]\sum_i \frac{\partial}{\partial x_i}P=0[/imath]. The correct way to deduce that result is to realize that the expression

[math]e^{-\frac{i K}{n}(x_1+x_2+\cdots+x_n)}[/math]

 

is an expression which is consistent with shift symmetry. The fact that there are many other functions which yield only a phase shift in [imath]\vec{\Psi}[/imath] is effectively insignificant against the central issue that the steps in the method used to generate P from the references to the valid ontological elements must all conform to shift symmetry.
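The two properties of that phase factor can be checked directly. Below is a SymPy sketch (not part of the original exchange); the concrete [imath]\vec{\psi}[/imath] used is an arbitrary made-up shift-symmetric function of two variables (n = 2), chosen only so that the sum of its partials vanishes:

```python
import sympy as sp

# A sketch checking the phase-factor claim with SymPy.
# Take a concrete shift-symmetric psi (depends only on x1 - x2), so the
# sum of its partials is zero, then multiply by the unit-modulus phase
# factor e^{-i(K/n)(x1 + x2)} with n = 2.
x1, x2, K = sp.symbols('x1 x2 K', real=True)

psi = sp.exp(-(x1 - x2)**2)                      # sum of partials vanishes
phase = sp.exp(-sp.I * (K / 2) * (x1 + x2))      # unit-modulus phase factor
phi = phase * psi

# 1) The phase factor does not change the probability: |phi|^2 = |psi|^2.
assert sp.simplify(sp.Abs(phi)**2 - sp.Abs(psi)**2) == 0

# 2) But it does change the derivative: the sum of partials now returns
#    -iK times phi instead of zero.
total = sp.diff(phi, x1) + sp.diff(phi, x2)
assert sp.simplify(total - (-sp.I * K * phi)) == 0
print("phase factor checks pass")
```

So the phase factor is invisible to P while supplying exactly the [imath]-iK\vec{\psi}[/imath] term in the differential constraint.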

 

Now, having made that rather verbose excuse for the conflict between Qfwfq and me, we can move on to your misinterpretations of what was said.

[math]\frac{d}{dx_i}P=\sum_i \frac{\partial}{\partial x_i}P[/math]

 

is simply not a valid expression. The expression [imath]\frac{d}{dx_i}P[/imath] is only defined if [imath]x_i[/imath] is the only variable in P. Against this, [imath]\frac{\partial}{\partial x_i}P[/imath] is defined to be [imath]\frac{d}{dx_i}P[/imath] when all the other variables are considered to be constants. It follows that putting that sum in there makes no sense. Removing that first term leaves the rest of the expression valid (it is no more than the product rule for differentiation).
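The valid remainder of the expression, the product-rule expansion, can be verified symbolically. This is a sketch using a hypothetical one-component complex ψ of two variables (the thread's ψ is a many-component vector, but the expansion works the same way component by component):

```python
import sympy as sp

# Sketch: verify the product-rule expansion
#   sum_i d/dx_i (psi* psi) = (sum_i d/dx_i psi*) psi + psi* (sum_i d/dx_i psi)
# for a made-up one-component complex psi of two real variables.
x1, x2 = sp.symbols('x1 x2', real=True)
psi = sp.exp(sp.I * x1) * sp.cos(x2) + sp.I * x1 * x2   # arbitrary example
P = sp.conjugate(psi) * psi

lhs = sp.diff(P, x1) + sp.diff(P, x2)
rhs = (sp.diff(sp.conjugate(psi), x1) + sp.diff(sp.conjugate(psi), x2)) * psi \
    + sp.conjugate(psi) * (sp.diff(psi, x1) + sp.diff(psi, x2))

assert sp.simplify(lhs - rhs) == 0
print("product rule expansion verified")
```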

 

Now, with regard to how shift symmetry leads to the differential expression, it is actually very simple. Somehow, your expectations must arise from your explanation (herein represented by [imath]\vec{\Psi}[/imath]). The argument of that function is the description of what you are thinking of which I have chosen to represent via numerical labels thus making [imath]\vec{\Psi}[/imath] a mathematical function. Note that all we have of this function are a finite number of instances represented by a finite number of arguments for each instance: i.e., what we actually know.

 

The issue of shift symmetry arises because the actual numerical labels used cannot affect the result of the method since that result must depend on the things being represented not the numerical labels being used. It follows that

[math]\vec{\Psi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=\vec{\Psi}(x_1+a,\tau_1,x_2+a,\tau_2,\cdots,x_n+a,\tau_n,t).[/math]

 

This expression is essentially in the form of the numerator in the definition of a derivative. That fact can be used to prove that, as the number of possibilities for the indices [imath]x_i[/imath], [imath]\tau_i[/imath] and t go to infinity (which would constitute continuity in the [imath](x,\tau, t)[/imath] space), the consequences of shift symmetry lead directly to the differential relationships I have presented as fundamental constraints.
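That argument can be illustrated concretely. The following SymPy sketch (added for illustration; the particular ψ is a made-up function depending only on differences of its arguments) shows both that such a ψ satisfies the shift-symmetry identity and that the sum of its partials vanishes, which is just the derivative of that identity with respect to a at a = 0:

```python
import sympy as sp

# Sketch: a psi depending only on differences of its arguments satisfies
# psi(x1+a, x2+a, x3+a) = psi(x1, x2, x3), and differentiating that
# identity with respect to a gives sum_i d psi / d x_i = 0.
x1, x2, x3, a = sp.symbols('x1 x2 x3 a', real=True)
psi = sp.sin(x1 - x2) * sp.exp(-(x2 - x3)**2)   # depends only on differences

shifted = psi.subs([(x1, x1 + a), (x2, x2 + a), (x3, x3 + a)],
                   simultaneous=True)
assert sp.simplify(shifted - psi) == 0          # shift symmetry holds

total = sp.diff(psi, x1) + sp.diff(psi, x2) + sp.diff(psi, x3)
assert sp.simplify(total) == 0                  # sum of partials vanishes
print("shift symmetry implies vanishing sum of partials")
```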

I don’t understand where the negative sign or the [math]\sqrt{-1}[/math] comes from.
That comes from the differential of the exponential function, [imath]\frac{d}{dx}e^{-ikx}=-ik e^{-ikx}[/imath]. The point being that all the exponential function does to [imath]\vec{\Psi}[/imath] is to introduce a phase shift in the complex plane (plus a normalization adjustment). In other words, multiplying by such a function does not alter the resultant P obtained from [imath]\vec{\Psi}[/imath] but does change the result obtained from differentiation. For the moment, I wouldn’t worry about the issue; it will become a very useful fact when we get down to solving that “fundamental” equation and I will make it clear as to how and why we use it.
I also see no reason that the constant in the partials to x has to equal the constant in the partials to [math]\tau[/math].
They don’t; exactly what is going on there will become clear later. For the moment, it would perhaps be better to see the fundamental equation in the form

[math]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = K\frac{\partial}{\partial t}\vec{\Psi}.[/math]

 

as, in actual fact, we are concerned with satisfying the original four constraints; one on the x dependence, one on the tau dependence, the F=0 rule and one constraint on the time dependence. The one on the time dependence is essentially [imath]\frac{\partial}{\partial t}\vec{\Psi}=0[/imath]. This is essentially the simplest differential equation known and has a rather trivial solution. Since the partial is defined as the ordinary derivative when all other variables are handled as if they are constants, the solution is simply [imath]\vec{\Psi}[/imath] equals a constant: i.e., the differential of a constant is zero.

 

At this point we can take advantage of that complex phase shifting exponential function I talked about just above here. We know that the general solution [imath]e^{-imt}[/imath] (which itself is consistent with shift symmetry: i.e., it does nothing except generate a phase shift in the “(real, imaginary)” plane) will yield no change in probabilities generated by [imath]\vec{\Psi}[/imath]. We can therefore replace [imath]\frac{\partial}{\partial t}\vec{\Psi}[/imath] with [imath]-im \vec{\Psi}[/imath] (K essentially amounts to nothing more than a scale factor associated with the definition of time).

 

I might also comment that my fundamental equation in many respects is simply zero plus zero equals zero. As it stands, it contains no information to speak of but when one reorders the sums and collects things differently, one obtains some interesting internal relationships.

I suspect that all of these come from the same place that you talk about in post 77, that being other symmetries that you have not gone into yet.
Perhaps post #83 on page 9 of this thread would be worth looking at. It has more detail and is somewhat less confused than #77 if you include my comments above on the logic.
I have looked through your proof that your fundamental equation satisfies both of these constraints and it looks like the result is that it satisfies the constraint that

[math]

\sum_i \frac{\partial}{\partial x_i}\vec{\Psi}=0,

[/math]

 

When I thought it was supposed to satisfy the constraint that

[math]

\sum_i \frac{\partial}{\partial x_i}\vec{\psi}(x_1,\tau_1,x_2,\tau_2, \cdots, x_n,\tau_n,t) = -iK\vec{\psi}

[/math]

That is correct; your thoughts are a result of some of the confusion I was referring to above. Sorry about that.
Now I suspect that the other constraints on the partials are found by picking a different substitution for [math]\alpha[/math] when it is commuted through the equation; is this correct?
Yes this is correct; again, the details can be found in post #83.
Also, I’m wondering where you came up with the rules for the anticommuting elements; are these common rules for anticommuting elements?
These anticommuting elements arise in formal quantum mechanics from the very beginning. The actual definitions sometimes differ by a multiplicative factor but the essential issue is the change in sign under commutation. Setting the sum to zero is not to be found in most general presentations. I use that to abstract out the constraints required by shift symmetry. You should realize that those constraints are only valid when the whole universe is included (here the whole universe means absolutely all information available to create that flaw-free explanation).
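For readers who want a concrete instance of such anticommuting elements from conventional quantum mechanics (a standard textbook fact, not specific to this thread's construction), the Pauli matrices realize exactly this kind of sign change under commutation, up to the multiplicative factor mentioned above:

```python
import numpy as np

# The Pauli matrices are a common concrete realization of anticommuting
# elements, obeying sigma_i sigma_j + sigma_j sigma_i = 2 delta_ij I.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

paulis = [sx, sy, sz]
for i, a in enumerate(paulis):
    for j, b in enumerate(paulis):
        anticomm = a @ b + b @ a
        expected = 2 * I2 if i == j else np.zeros((2, 2), dtype=complex)
        assert np.allclose(anticomm, expected)
print("Pauli anticommutation relations verified")
```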
I would also like to know if there are any other things that I have not gone into that I need to make sure that I understand or anything that I need to go back to and go over again before I try and follow you in solving the fundamental equation, which it looks like you and Anssih are just about ready to go into.
All you need do is inform me anytime you find anything confusing.

 

And finally, Anssi, I essentially have no additional comments to make to you. I think that, for all intents and purposes, you have got everything pretty well straight.

Except, I suppose it doesn't matter if the starting value is 1 or something else?
You are absolutely correct. Essentially, the definition of the terms to be summed and the start point of the sum is given below the sum sign and the limit is expressed above the sum sign. Anytime anything is omitted, the expression presumes the omitted items are understood (already defined elsewhere). A simple sum sign usually indicates the sum is to be taken over all possibilities for the following term (this is rarely done and is usually only seen when what is being done is somewhat abstract). The terms in the sum usually will display some meaningful index; in that case it is usual to specify the index under the sign and, if that is all that is found there, the sum will be over all possibilities. The next most complex possibility is to specify both the starting value and the ending value of the index; this is required if the range to be summed does not include all possible terms. One could say that ambiguity is something to be determined through context and mathematicians generally don’t worry about ambiguity which can be quickly resolved.
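As a sketch (Python, added for illustration; the summand f and the limit n are made up), all three notations discussed above reduce to the same finite sum:

```python
# The three sum notations all denote the same finite sum; f is a made-up
# summand and n an assumed-known upper limit.
def f(i):
    return i * i  # example summand

n = 4

# \sum_{i=1}^{i=n} f(i): start and end of the index written out in full
s1 = sum(f(i) for i in range(1, n + 1))

# \sum_{i=1}^{n} f(i): same sum, upper limit abbreviated
s2 = sum(f(i) for i in range(1, n + 1))

# \sum_i f(i): index only; "all possibilities" understood from context
s3 = sum(f(i) for i in range(1, n + 1))

print(s1, s2, s3)  # all three give 1 + 4 + 9 + 16 = 30
```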

 

You should also have noticed that the Dirac delta function is summed over [imath]i\neq j[/imath]. You should also understand why: anytime i=j, the argument is zero and the Dirac delta function blows up, so we don’t want to see any of those terms there. Just as an aside, I could just as well have specified i>j as it would have ended up covering the same set of terms; however, although [imath]i\neq j[/imath] covers every difference twice, I like it because it seems more symmetric and the simple factor of two has no real consequences.
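The factor-of-two remark is easy to check directly (a made-up numeric example; the symmetric function g stands in for the δ-function term):

```python
# For a term symmetric in (i, j), summing over i != j counts every pair
# twice compared to summing over i > j.
n = 5
x = [0, 2, 3, 7, 11]                    # arbitrary integer example labels

def g(i, j):
    return (x[i] - x[j])**2             # any symmetric function of the pair

s_ne = sum(g(i, j) for i in range(n) for j in range(n) if i != j)
s_gt = sum(g(i, j) for i in range(n) for j in range(n) if i > j)

assert s_ne == 2 * s_gt
print("sum over i != j equals twice the sum over i > j")
```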

There's just one more little thing I need to make sure of;

 

When you say sum over [imath]\alpha_{qx}\vec{\Psi}[/imath] is 0 by definition, you are referring to the definition:

 

[math]

\sum_i \vec{\alpha}_i \vec{\Psi} = 0

[/math]

 

?

So even though it's not exactly [imath]\alpha_{qx}[/imath] in the definition, that definition nevertheless implies the sum over [imath]\alpha_{qx}\vec{\Psi}[/imath] to be 0? (which seems to make sense as the tau component can be seen as 0 in every alpha? I think :)

Again, you are being confused by the vector notation. In order for

[math]

\sum_i \vec{\alpha}_i \vec{\Psi} = 0

[/math]

 

to be true, both components of the alpha vector have to vanish: i.e., both the sum over [imath]\alpha_{qx}\vec{\Psi}[/imath] and the sum over [imath]\alpha_{q\tau}\vec{\Psi}[/imath] must be zero.

 

By the way, this could be seen as another example of mathematical ambiguity. Here we have a vector appearing to multiply a vector without definition of the operation. It could certainly be seen as ambiguous if these vectors were in the same vector space; however, the vector spaces involved are quite different. The multiplication in this case would be essentially multiplying every component of one of those vectors by the other vector ending up with a substantially increased abstract vector space. Ordinarily such things are handled as undefined: i.e., the actual multiplication is never carried out; the thing is just written as shown.

If that's correct, yes I think I have a (somewhat superficial) understanding of that equation and how it turns into [imath]\sum_i \frac{\partial}{\partial x_i}\vec{\Psi}=0[/imath]
I don’t think your understanding is superficial at all. You strike me as being able to follow the procedure given the instructions and that is all mathematics is about. Just to make sure, take a good look at my response to Bombadil above and check out the last part of the detailed post #83 I made for Qfwfq.


I thought I had made a pretty good defense of the requirement of shift symmetry to Buffy back in post #129 in November. If you haven’t read that post, I would appreciate your opinion as to its clarity. Buffy and Qfwfq seem to have stopped commenting on my assertions. I don’t know what that means. Could mean I convinced them; could mean they are sick of arguing with me. :shrug: I hope they are still reading this as a little criticism is a very valuable thing. Either way, I will get to the issue of solution when I get home from Denver. We have to stay here for another week for a number of reasons so don’t expect anything for a week.

 

Have fun – Dick

 

PS Anssi, if you look down on that youtube url I gave for "our granddaughter's dance recital” you will see another with me showing her ZBrush. I was just putting whatever she wanted on the sphere: i.e., nose, eyes, mouth ears and hair. She got a big kick out of the fact that what I created would rotate in three dimensions.

Link to comment
Share on other sites

Bombadil, I checked your profile and was surprised by how few posts you have made. Being such a small number, I read them all. I might comment that the education level on most “science forums” is quite low and that hypography.com is probably among the best; however, I think, even here, you have to take most of the responses you obtained with a grain of salt so to speak. In particular, many posters' understanding of relativity is rather rudimentary. There is a lot of history in “word of mouth” stuff that just doesn’t show up in books; a formal education by experienced teachers is well worth the cost, especially in theoretical physics, as you need to know what led others to think the way they did.

 

I can understand how things can get passed by word of mouth and how it can become a problem to tell just what is word of mouth and what is not without having a good understanding of the subject (which as of yet I don’t have). So far I have tried not to take “word of mouth” too seriously, instead trying to put more effort into learning the math that is used in the physics, rather than trying to understand it by nonmathematical methods.

While I can see that a formal education in theoretical physics has some advantages, for the time being I’m going to continue doing it as I have been.

 

Learning mathematics from a book can be difficult and I sympathize with your circumstance. One thing I would say is, never accept anything without having convinced yourself that it is correct. As Anssi has commented just above, ambiguity exists in mathematics notation as it does in any language; the only difference is that mathematicians have made a great effort to make sure the context sufficient to resolve those ambiguities is small enough to effectively resolve the problems as they occur.

 

For the most part this is what I have been doing normally trying to use texts that also have problems that I can work my way through as well.

 

as, in actual fact, we are concerned with satisfying the original four constraints; one on the x dependence, one on the tau dependence, the F=0 rule and one constraint on the time dependence. The one on the time dependence is essentially [imath]\frac{\partial}{\partial t}\vec{\Psi}=0[/imath]. This is essentially the simplest differential equation known and has a rather trivial solution. Since the partial is defined as the ordinary derivative when all other variables are handled as if they are constants, the solution is simply [imath]\vec{\Psi}[/imath] equals a constant: i.e., the differential of a constant is zero.

 

Then for now, should the right side of the fundamental equation be thought of as

 

[math] K\frac{\partial}{\partial t}\vec{\Psi}=0 [/math]

 

in which [math] \psi [/math] and [math] \Psi [/math] satisfy the equations

 

[math] \psi=\Psi e^{-ikx}[/math]

 

and

 

[math] \frac{d}{dx}\Psi=0 [/math]

 

I might also comment that my fundamental equation in many respects is simply zero plus zero equals zero. As it stands, it contains no information to speak of but when one reorders the sums and collects things differently, one obtains some interesting internal relationships.

 

I had started to notice that this seemed to be the effect, although when the second equals sign is put in there I’m not quite sure how it remains equal.

Link to comment
Share on other sites

Hello, sorry it took me a while to reply again; I'm just a bit busy until the end of February at least.

 

PS Anssi, if you look down on that youtube url I gave for "our granddaughter's dance recital” you will see another with me showing her ZBrush. I was just putting whatever she wanted on the sphere: i.e., nose, eyes, mouth ears and hair. She got a big kick out of the fact that what I created would rotate in three dimensions.

 

Heh, yeah this reminds me of when I was about 10 or whatever and found some simple graphics editor incredibly fascinating. Can't remember what it was but it was essentially identical to "Paint" that comes with Windows. So I'm just thinking Paint -> ZBrush... Kids have better toys these days :D

 

You should also have noticed that the Dirac delta function is summed over [imath]i\neq j[/imath]. You should also understand why: anytime i=j, the argument is zero and the Dirac delta function blows up, so we don’t want to see any of those terms there. Just as an aside, I could just as well have specified i>j as it would have ended up covering the same set of terms; however, although [imath]i\neq j[/imath] covers every difference twice, I like it because it seems more symmetric and the simple factor of two has no real consequences.

 

Yeah seems like I understood the meaning of [imath]i\neq j[/imath] correctly.

 

Again, you are being confused by the vector notation. In order for

[math]\sum_i \vec{\alpha}_i \vec{\Psi} = 0[/math]

to be true, both components of the alpha vector have to vanish: i.e., both the sum over [imath]\alpha_{qx}\vec{\Psi}[/imath] and the sum over [imath]\alpha_{q\tau}\vec{\Psi}[/imath] must be zero.

 

Ah, right.

 

I don’t think your understanding is superficial at all. You strike me as being able to follow the procedure given the instructions and that is all mathematics is about. Just to make sure, take a good look at my response to Bombadil above and check out the last part of the detailed post #83 I made for Qfwfq.

 

Well, there are some things that I am kind of shaky with and it's easy to forget some important details (and consequently make all sorts of mistakes).

 

Some things that have puzzled me a little bit were actually mentioned in the posts to Bombadil and Qfwfq. I see [imath]\vec{\psi}[/imath] and [imath]\vec{\Psi}[/imath] and [imath]\vec{\phi}[/imath] all being used to refer to the same function, at least that's what it looks like to me. Is there a reason a slightly different symbol is used from time to time, or was it just to make the explanation a little bit clearer?

 

Related to that, the exact same thing Bombadil found confusing:

 

[math]\sum_i \frac{\partial}{\partial x_i}\vec{\Psi}=0[/math]

and

[math]\sum_i \frac{\partial}{\partial x_i}\vec{\psi}(x_1,\tau_1,x_2,\tau_2, \cdots, x_n,\tau_n,t) = -iK\vec{\psi}[/math]

 

If I've understood it correctly, the left sides of those equations are essentially identical? And the reason for allowing the [imath]-iK\vec{\psi}[/imath] was - I believe - for mathematical convenience?

 

My understanding becomes very much superficial when it comes to your explanation of the deduction with [imath]e^{-\frac{i K}{n}(x_1+x_2+\cdots+x_n)}[/imath], and similarly in post #83 the explanation with [imath]\vec{\phi}=e^{iK_x * ( x_1+x_2+\cdots+x_n)}e^{iK_\tau * (\tau_1+\tau_2+\cdots+\tau_n)}e^{imt}\vec{\psi}[/imath]

 

I don't know if it's important to understand that deduction. But I can say I understand what you mean by [imath]\vec{\psi}[/imath] having to accommodate shift symmetry just like P. Albeit I haven't really had time to think through the consequences of that little detail in my head. If [imath]\psi[/imath] is seen just as one little step to get to P, then I suppose it wouldn't make any difference. But then it seems [imath]\psi[/imath] has got quite a role in this presentation which - I suppose - is what places that shift symmetry requirement on it...

 

And still related to that, it would be interesting to hear how that sort of differential expression relates to conservation of momentum in quantum mechanics.
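My guess, from what little quantum mechanics I know, is that the connection is that the momentum operator for the [imath]i[/imath]th coordinate is [imath]\hat{p}_i = -i\hbar\frac{\partial}{\partial x_i}[/imath], so the constraint above would read

[math]\sum_i \hat{p}_i\vec{\psi} = -i\hbar\sum_i\frac{\partial}{\partial x_i}\vec{\psi} = -\hbar K\vec{\psi},[/math]

i.e. [imath]\vec{\psi}[/imath] would be an eigenstate of total momentum with a fixed eigenvalue [imath]-\hbar K[/imath], so the shift symmetry amounts to the total momentum being conserved. But I may be putting words in your mouth here.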

 

I thought I had made a pretty good defense of the requirement of shift symmetry to Buffy back in post #129 in November. If you haven’t read that post, I would appreciate your opinion as to its clarity.

 

Well, it seems pretty clear to me, but then it is often very hard to pick up on where the confusion really lies, and after that even harder to figure out what sorts of terms the other party would understand as the explanation. We come to handle the same things in our minds through such different concepts that human communication just becomes a big chore :P

 

When I think about this particular confusion, I'm pretty sure Buffy is thinking: if we have a set of noumena, and we go on to label them, and after that build a probability function based on those labels (like a little computer program), then that function may be perfectly valid for those specific labels, but would break down completely upon a shift of the labels.

 

Perhaps it would be useful to say that in this presentation the probability function is not allowed to depend on the specific labels, and that requirement is justified by the fact that the labeling procedure cannot add any additional information about the noumena, no matter how it is performed.
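Maybe a toy computation makes the point concrete. Here's a little Python sketch (the function names and numbers are my own inventions, nothing from your presentation): a "probability" built only from differences between labels is untouched when every label is shifted by the same amount, and its sum of partial derivatives comes out (numerically) zero:

```python
import math

def P(labels):
    # Toy probability: depends only on differences between labels,
    # so relabeling by a common shift cannot change its value.
    n = len(labels)
    d = sum((labels[i] - labels[j]) ** 2 for i in range(n) for j in range(i))
    return math.exp(-d)

def sum_of_partials(f, x, h=1e-6):
    # Numerical estimate of sum_i df/dx_i via central differences.
    total = 0.0
    for i in range(len(x)):
        up = x[:i] + [x[i] + h] + x[i + 1:]
        dn = x[:i] + [x[i] - h] + x[i + 1:]
        total += (f(up) - f(dn)) / (2 * h)
    return total

x = [0.3, 1.7, 2.2]
shifted = [v + 5.0 for v in x]

print(P(x), P(shifted))       # same value: shift symmetry of the labels
print(sum_of_partials(P, x))  # approximately zero
```

Of course a function that did depend on the raw labels would fail both checks, which I take to be exactly Buffy's worry about a program written for one specific labeling.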

 

Buffy and Qfwfq seem to have stopped commenting on my assertions. I don’t know what that means. Could mean I convinced them; could mean they are sick of arguing with me. I hope they are still reading this as a little criticism is a very valuable thing.

 

Yes I hope so too.

 

-Anssi

Link to comment
Share on other sites

A small side question... would it not hold true in all cases that to explain the outcome of a real event [E] you must always define the real cause(s) of [E]? Thus, since every explanation of any type requires definition of at least some type, can we not hold that such a phenomenon as an "undefined explanation" of reality is impossible?

Link to comment
Share on other sites
