Science Forums

What can we know of reality?



Well, I know what commutativity means, as in being able to change the order of some elements without changing the end result. But I could not figure out what is going on above. I've been trying to figure out what all the math notation means by reading post #42, the end of #77, and now the further clarification at the end of #83. But once again my limited familiarity with math leaves me with far too many shaky assumptions :(

 

I don't even know what would be meaningful questions, so I'll just try to probe everything without even bothering to provide my own guesses:

Don't worry, you have made your difficulties clear. Your only problem is the dearth of mathematical knowledge; a problem easily remedied. Remember when I said that mathematics was the invention and study of self-consistent systems? Well, this is just another one of those systems which have made their way into mathematics.

 

Under the normal understanding of mathematics, ab (meaning the multiplication of b by a) rests on the two central operations, addition and multiplication. Under the original definition of multiplication, ab is identical to ba (the interchangeability of the two notations is called “commutation”). Well, it turns out that you can define a situation where ab = -ba (it's called “anti-commutation”). In the simple case of anti-commutation, it is quite clear that ab+ba must be exactly zero. But that brings up the issue of what happens when a=b; in that case, one has the result aa+aa=2aa=zero (a rather simple proof that all these “anti-commuting” elements are zero). The somewhat subtle way out of the problem (a method of inventing a new mathematical system) is not to define ab+ba to equal zero but rather to define it to be zero only when a and b are different. Following this tack, the common definition for the result when they are the same is that aa+aa=2aa=1. This definition leads to a whole new collection of internally consistent mathematical relationships. One result of rather significant importance is the whole field of “spin” (obtained in analogy to angular momentum) with which I am sure you are familiar. Notice that the common definition which I have just given you yields [imath]a^2=\frac{1}{2}[/imath]. I presume that strikes a bell; I am sure you are familiar with the “spin 1/2” entities in modern physics.
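
(A minimal numerical sketch of the above definition, added purely for illustration and not part of the original post: the Pauli matrices scaled by 1/√2 are one assumed concrete realization of a pair of such anti-commuting elements.)

[code]
import numpy as np

# Illustrative (assumed) realization: Pauli matrices divided by sqrt(2).
a = np.array([[0, 1], [1, 0]], dtype=complex) / np.sqrt(2)    # sigma_x / sqrt(2)
b = np.array([[0, -1j], [1j, 0]], dtype=complex) / np.sqrt(2)  # sigma_y / sqrt(2)

print(np.allclose(a @ b + b @ a, np.zeros((2, 2))))  # True: ab + ba = 0 when a and b differ
print(np.allclose(a @ a + a @ a, np.eye(2)))         # True: aa + aa = 1 (times the identity)
print(np.allclose(a @ a, 0.5 * np.eye(2)))           # True: a^2 = 1/2, the "spin 1/2" value
[/code]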

What does "alpha element" or "beta element" refer to?
These are no more than mathematical concepts similar to the square root of -1 called i. Their magnitude is another variable as anything can be multiplied by an ordinary number. That is, all of the additional properties are encompassed in the definition that ab+ba=zero and aa+aa=1. That symbol [imath]\delta_{ij}[/imath] is nothing more than a symbol for the situation just defined: [imath]\delta_{ij}[/imath] is defined to be one if i=j and zero otherwise. We are talking about nothing more than another "internally consistent mathematical structure".
What does it mean that there is "ix" or "jx" suffix to such an element?
Those are “subscripts”, not suffixes. The subscript “ix” means that the alpha is associated with the x component of the ith element and nothing more. Look at it this way, [imath]\alpha_{ix}[/imath] means that the “anti-commuting element” being referred to has to do with the x component of the ith element; it is no more than a reference notation. [imath]\alpha_{ix}[/imath] means we are referring to the ith (that is a very specific case) alpha associated with the x axis. Clearly, you should understand that the qth alpha refers to a different alpha; however, we are summing over i. It should be obvious to you that, in that sum (which is over all i) there will be one case where i=q. In that case (when i=q) the reference [imath]\alpha_{ix}[/imath] and the reference [imath]\alpha_{qx}[/imath] refer to exactly the same element: we are then confronted with the fact that [imath]\alpha_{ix}\alpha_{qx}[/imath] is not [imath]-\alpha_{qx}\alpha_{ix}[/imath] but rather is [imath]-\alpha_{qx}\alpha_{ix}+1[/imath]. Think about it!

I notice the symbol [math]\delta[/math]; does it refer to the Dirac delta function here also? What does the ij refer to there, and how does it turn into "iq" later (i.e. what does iq mean)?
As I have said, that symbol [imath]\delta_{ij}[/imath] is nothing more than a symbol for the situation just defined: [imath]\delta_{ij}[/imath] is defined to be one if i=j and zero otherwise. The “ij” refers to a particular pair of references [imath](x_i[/imath] and [imath]x_j)[/imath], which are numerical labels for specific ontological elements.
Actually I am so royally lost at this point already that I can't make any sense of the rest of the post yet either. I am probably missing knowledge about some standard mathematical definitions, just I have no idea what those are and how to find material about them :P
We are using these “subscripts” to refer to a specific set of numerical labels (which have been arbitrarily assigned). We are only trying to generate rational constraints on these arbitrary assignments: essentially, the assignments cannot contain information not available in the patterns of those “valid” ontological elements (a pretty loose statement so long as no definition of those ontological elements exists).

I would word it this way: What I sometimes refer to as a "semantical world view" is a world view where such concepts as "speed" can be sensible only by the way they relate to other concepts such as "distance" or "time", where those concepts are sensible only by how they relate to yet other concepts such as "location" or "change", and in the end the concepts only validate each other but not the ontological nature of reality. I.e. where concepts are understood through other concepts.
It is the sensibility of the entire construct which is of interest here.

Referring to the gravity example that HydrogenBond mentioned: if such a world view, where "gravity was due to the repulsion of matter by space", was able to provide us with all the same predictions as, say, GR, then it would be just as true or untrue as GR. We should just say the concepts/elements these views consist of are a handy way to map the behavior of the reality around us.
Once again, you are talking about solutions (epistemological constructs) not the constraints on the underlying ontology.

Why he supposed that some pre-existing view was the correct one, and string theory just a philosophical bastardization of that correct view, is beyond me,
The answer is clearly that he is approaching the problem with a preconceived answer.

I have nothing against models like string theory, but when it goes so far that people actually start claiming that there must be ontologically real strings that vibrate in 11 dimensions (also taken as "ontologically real" dimensions, whatever "dimension" means! Get it?), that is exactly as naive as saying we are conscious because there is a conscious homunculus in our mind. The predictive side of it is pure science, but the ontological mental image of it is pure religion.
All I can say is that you are absolutely correct.

Anyhow, Qfwfq referred to "local shift symmetry" and Doctordick jumped at it, so I thought I'd try to clarify things on my own part as well and say that the shift symmetry is not referring to a shift inside some "semantically defined thing" (like "space"), but it just refers to a shift among the labels used to refer to "ontological elements" (arbitrary features in raw data whose meaning is unknown). A shift symmetry of labels inside one's world view, so to speak.
An exact statement of the difficulty.

Actually I think I understand well enough, aside from details, but what I need isn't a lecture in modern mathematics. I fully expected you would be able to catch on to my use of the terms global and local, just as you are using the notion of symmetry. I simply meant phase-dependent coordinate values versus the same one for all of them. After all, it is the terminology of gauge symmetry, which is somewhat akin to the phase arbitrariness due to going from P to [imath]\psi[/imath].
Qfwfq, I know you are a bright fellow but you are failing to attack the central problem here. Suppose you were given a seriously macroscopic series of symbols (which were totally undefined) and you wanted to come up with an explanation of that series of symbols; what would your attack on the problem be? Until you can take that problem seriously, you miss the entire basis of my approach.
BTW I don't think that Greek guy was confused by the notion of velocity, more that he was working to give the intuitive notion a precise definition for philosophical purposes. Your assumption that he was confused is somewhat like that of the many people who, reading your arguments, suppose you must have no connection with reality.
I did not say the writer was confused, what I said was that the idea of “speed” was not a concept the writer held as obvious, quite a different thing.
The rest of your arguments appear to mean that you are changing your assumption of shift symmetry from: [probability to general epistemological constructs]
And I would say yes to that conclusion. Please, if you would, analyze the problem of explaining an undefined series of symbols, references, data inputs, whatever and coming up with a procedure for predicting the next sequence. It is a simple problem and it deserves careful analysis.

 

Have fun -- Dick


Under the normal understanding of mathematics, ab (meaning the multiplication of b by a) rests on the two central operations, addition and multiplication. Under the original definition of multiplication, ab is identical to ba (the interchangeability of the two notations is called “commutation”). Well, it turns out that you can define a situation where ab = -ba (it's called “anti-commutation”). In the simple case of anti-commutation, it is quite clear that ab+ba must be exactly zero. But that brings up the issue of what happens when a=b; in that case, one has the result aa+aa=2aa=zero (a rather simple proof that all these “anti-commuting” elements are zero). The somewhat subtle way out of the problem (a method of inventing a new mathematical system) is not to define ab+ba to equal zero but rather to define it to be zero only when a and b are different. Following this tack, the common definition for the result when they are the same is that aa+aa=2aa=1. This definition leads to a whole new collection of internally consistent mathematical relationships.

 

Okay, thanks, that was helpful.

 

One result of rather significant importance is the whole field of “spin” (obtained in analogy to angular momentum) with which I am sure you are familiar. Notice that the common definition which I have just given you yields [imath]a^2=\frac{1}{2}[/imath]. I presume that strikes a bell; I am sure you are familiar with the “spin 1/2” entities in modern physics.

 

I know what particle spin refers to and how such a concept was conceived and what it's for, but I am not familiar with the mathematical side of it so I must say [imath]a^2=\frac{1}{2}[/imath] does not strike a bell :)

 

These are no more than mathematical concepts similar to the square root of -1 called i. Their magnitude is another variable as anything can be multiplied by an ordinary number. That is, all of the additional properties are encompassed in the definition that ab+ba=zero and aa+aa=1.

 

Hmm... Are you saying that [math]\alpha[/math] and [math]\beta[/math] are defined by ab+ba=0 and aa+aa=1 ? Are the A's and B's referring to "alpha" and "beta" there? Or are you saying rather that [math]\alpha[/math] and [math]\beta[/math] don't have any standard definitions but in this case they are completely defined by the relationships you gave in post #42?

 

I cannot explain how tricky it is for me to try and figure these things out, especially as the information is spread across about 4 different posts by now :P

 

That symbol [imath]\delta_{ij}[/imath] is nothing more than a symbol for the situation just defined: [imath]\delta_{ij}[/imath] is defined to be one if i=j and zero otherwise. We are talking about nothing more than another "internally consistent mathematical structure".

 

That part I understood.

 

Those are “subscripts”, not suffixes. The subscript “ix” means that the alpha is associated with the x component of the ith element and nothing more. Look at it this way, [imath]\alpha_{ix}[/imath] means that the “anti-commuting element” being referred to has to do with the x component of the ith element; it is no more than a reference notation.

 

I understood that part too. Except for whether [imath]\alpha_{ix}[/imath] gets some real value depending on what that X is. This question popped into my head when I was looking at the definition from post #42: [imath]\vec{\alpha}_i = \alpha_{ix}\hat{x} + \alpha_{i\tau} \hat{\tau}[/imath]

 

I am moving on very thin ice here :P I am 100% certain that I have understood something completely topsy turvy. A standard textbook explanation of the common usage of "alpha" and "beta" would be nice to have :P (any links, anyone?)

 

[imath]\alpha_{ix}[/imath] means we are referring to the ith (that is a very specific case) alpha associated with the x axis. Clearly, you should understand that the qth alpha refers to a different alpha; however, we are summing over i. It should be obvious to you that, in that sum (which is over all i) there will be one case where i=q. In that case (when i=q) the reference [imath]\alpha_{ix}[/imath] and the reference [imath]\alpha_{qx}[/imath] refer to exactly the same element: we are then confronted with the fact that [imath]\alpha_{ix}\alpha_{qx}[/imath] is not [imath]-\alpha_{qx}\alpha_{ix}[/imath] but rather is [imath]-\alpha_{qx}\alpha_{ix}+1[/imath]. Think about it!

 

The above I understand but it doesn't get me far while trying to understand post #42... Actually before that let me get back to:

 

---QUOTE---

[imath][\alpha_{ix} , \alpha_{jx}] \equiv \alpha_{ix} \alpha_{jx} + \alpha_{jx}\alpha_{ix} = \delta_{ij}[/imath]

can be rearranged to show that [imath]\alpha_{ix}\alpha_{jx} = \delta_{ij} -\alpha_{jx}\alpha_{ix}[/imath] which implies

 

[imath]\alpha_{qx}\alpha_{ix} = \delta_{iq} -\alpha_{ix}\alpha_{qx}[/imath] and [imath]\alpha_{qx}\beta_{ij} = -\beta_{ij}\alpha_{qx}[/imath]

(look at the defined commutation of alpha with beta).

Thus all that happens as [imath]\alpha_{qx}[/imath] is commutated through an alpha or a beta is a sign change except when q=i. In that case, the [imath]\delta_{iq}[/imath] picks up one additional term with no alpha or beta.

---END OF QUOTE---

 

I now understand that up to the point the first [imath]\beta[/imath] appears. I don't know where it comes from or why it is there all of a sudden. Since you advise me to "look at the defined commutation of alpha with beta", I take it that's probably [imath]\alpha_{qx}\beta_{ij} = -\beta_{ij}\alpha_{qx}[/imath], but it doesn't cause any switches to be thrown in my head :(

 

Also I don't know what it means to "commutate [imath]\alpha_{qx}[/imath] through an alpha or a beta"...

 

After spending this day trying to figure these things out (and that is why this is such a messy post), I must say I feel like I'm struggling more in this step than I have in any earlier steps. Right now I don't know where to start unraveling all these things. Other people feel free to help also! :)

 

Btw, the first sum over i in that fundamental equation, that refers to the delta sub i also, and not just the alpha?

 

-Anssi


I did not say the writer was confused, what I said was that the idea of “speed” was not a concept the writer held as obvious, quite a different thing.
Alright, let's put it that way, just that you had said:
Today, everyone (except maybe some very primitive peoples out of touch with modern gadgets) understands exactly what one means by speed and it is difficult for us to comprehend that confusion could ever have existed.
In short, I assume the ancient Greek was defining the notion without the use of predefined concepts of direct proportionality and of derivative. Today's students, when beginning kinematics, already know at least the first of those two and usually also the second one. They are however philosophical definitions rather than something obvious from familiarity with modern gadgets.

 

Please, if you would, analyze the problem of explaining an undefined series of symbols, references, data inputs, whatever and coming up with a procedure for predicting the next sequence. It is a simple problem and it deserves careful analysis.
I'm trying to follow your analysis. Don't think I don't get what you're doing, it's only the how that I've been unable to follow in detail. Now I'm not reasoning confined in a groove. Here's a cute illustrative example which you might find amusing especially if you can guess the game:

 

A spy is sent to gather intelligence for the planners of an attack against a castle; they need to know how to fake a patrol returning to the castle and pass ID. The spy manages to hear 3 of the exchanges between the guard corps and the patrol commander. After the halt and the patrol's ID being called out, the guard says a number and the patrol commander replies with another number; the first time the guard calls out 6 and the reply is 3, the second time 18 is answered with 9, the third time 12 is answered with 6. At that point the spy supposes he has got the game, and the planners are also convinced and sort out all other details. When they carry it out, on a very dark night, the guard seems unalarmed until his call of 10 is answered with 5, at which the whole guard corps immediately leaps into action and surrounds them. What should the patrol have replied to 10?


I know what particle spin refers to and how such a concept was conceived and what it's for, but I am not familiar with the mathematical side of it so I must say aa = 1/2 does not strike a bell :)
Sorry about that. I am again being led off subject by questions which really amount to outside baggage. These things are usually introduced in a very different manner; in a manner implied by the physicist's concept of reality and the mathematics he has discovered applicable. That is really counter to the problem I am looking at, as the problem of interest must be looked at from a position of complete ignorance. All we are really concerned with here is the fact that such a counterintuitive thing (anti-commuting entities) can be the basis of an internally consistent system. The only reason I use it is because it allows me to express four different constraints in a form which appears to be a single differential equation. I personally regard it as a convenient mathematical trick; no more than another mathematical operation which can be defined and which provides a valuable service (such as adding or multiplying or taking derivatives are valuable mathematical procedures, this is no more than another).

Hmm... Are you saying that [math]\alpha[/math] and [math]\beta[/math] are defined by ab+ba=0 and aa+aa=1 ?
No, I am not. The issue here is anti-commutation itself and the fact that we can define such things where the elements being defined need not be zero.
Or are you saying rather that alpha and beta don't have any standard definitions but in this case they are completely defined by the relationships you gave in post #42?
That is correct. The definitions given in post #42 completely define them.

I understood that part too. Except for whether [imath]\alpha_{ix}[/imath] gets some real value depending on what that X is. This question popped into my head when I was looking at the definition from post #42: [imath]\vec{\alpha}_i = \alpha_{ix}\hat{x} + \alpha_{i\tau} \hat{\tau}[/imath]
No, they are merely “things” that anti-commute with one another. There is a different alpha for every index [imath]x_i[/imath] or [imath]\tau_i[/imath] and there is a [imath]\beta_{ij}[/imath] for every pair of points in that [imath](x,\tau)[/imath] space used to represent our ontological elements. Think of them as subtle complications inserted into that differential equation; complications which have the power to force the solutions to that equation to obey the original constraints and serve no other purpose. As I said, a mere mathematical trick which allows me to write the constraints in one apparently simple differential equation.
I am moving on very thin ice here :P I am 100% certain that I have understood something completely topsy turvy. A standard textbook explanation of the common usage of "alpha" and "beta" would be nice to have :P (any links, anyone?)
You will find no textbook explanation of “the common usage of 'alpha' and 'beta'” as there is none (common usage, that is). If you insist, you might take a look at “Pauli's spin matrices”. I doubt you will find that presentation any more meaningful than mine. His definitions constitute a matrix representation of anti-commuting entities. If you go any deeper, you will see the physics reasoning behind using them. I really don't care for that presentation because it requires one to believe the physics presentation is a valid representation of reality; a highly presumptive place to start. I think my presentation is much more to the point of the problem we are trying to solve. As I have said many times, I define mathematics as the invention and study of internally consistent systems and these alphas and betas are no more than defined mathematical entities which obey some strange rules (the exact rules are given in post #42). What it is or why it is doesn't really bear on the issue here. All that is important is that it serves the purpose for which it was introduced: it allows me to write my constraints in what appears to be one simple equation.
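
(Again purely as an illustration, and not the representation used in post #42: one standard way to build an explicit matrix realization of arbitrarily many mutually anti-commuting elements obeying these rules is the Jordan–Wigner-style construction sketched below; the [imath]\beta_{ij}[/imath] would simply be further generators of the same kind.)

[code]
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(mats):
    return reduce(np.kron, mats)

def anticommuting_elements(n_pairs):
    """Jordan-Wigner style construction: returns 2*n_pairs matrices a_k with
    a_i a_j + a_j a_i = delta_ij times the identity."""
    elements = []
    for k in range(n_pairs):
        pre, post = [sz] * k, [I2] * (n_pairs - k - 1)
        elements.append(kron_all(pre + [sx] + post) / np.sqrt(2))
        elements.append(kron_all(pre + [sy] + post) / np.sqrt(2))
    return elements

a = anticommuting_elements(2)   # four elements, each a 4x4 matrix
dim = a[0].shape[0]
print(all(np.allclose(a[i] @ a[j] + a[j] @ a[i],
                      (1.0 if i == j else 0.0) * np.eye(dim))
          for i in range(len(a)) for j in range(len(a))))   # True
[/code]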
Also I don't know what it means to "commutate [imath]\alpha_{qx}[/imath] through an alpha or a beta"...
Order of the terms in that differential equation is an important issue. Note that [imath]\frac{\partial}{\partial x_i}\vec{\psi}[/imath] is a meaningful expression (it means “take the derivative of [imath]\vec{\psi}[/imath]”) but that [imath]\vec{\psi}\frac{\partial}{\partial x_i}[/imath] is not; what is the function which is to be differentiated? The difference between those two expressions is the order of the terms themselves. The symbol [math]\sum_i \alpha_{ix}\vec{\psi}[/math] has been defined to be zero so, if we can achieve that symbol set via defined algebraic operations on my fundamental equation, then we can replace it by zero. Now when we multiplied by [imath]\alpha_{qx}[/imath], it was clearly on the wrong side of [imath]\alpha_{ix}[/imath]; in order to get it directly onto [imath]\vec{\psi}[/imath] (in order to be able to replace it with zero, after we sum over q), it had to be commuted with [imath]\alpha_{ix}[/imath] and [imath]\frac{\partial}{\partial x_i}[/imath]. We had to “commute it through those terms” so that, when we summed over q, it would be in the explicit form [math]\alpha_{qx}\vec{\psi}[/math] (that is, nothing is between the alpha sub q and the [imath]\vec{\psi}[/imath], and we can directly use the fact that the sum is defined to be zero).

Btw, the first sum over i in that fundamental equation, that refers to the delta sub i also, and not just the alpha?
There is no “[imath]\delta_i[/imath]” in the fundamental equation. I suspect you are referring to the “Nabla” (which looks like an upside-down capital delta). Remember, [imath]\vec{\nabla_i}[/imath] was defined to be a vector differential

[imath]\frac{\partial}{\partial x_i} \hat{x} +\frac{\partial}{\partial \tau_i} \hat{\tau}[/imath] and [imath]\vec{\alpha_i}[/imath] was defined to be [imath]\alpha_{ix}\hat{x} + \alpha_{i\tau} \hat{\tau}[/imath] thus that first sum becomes:

[math]\sum_{i=1}^n \vec{\alpha_i}\cdot \vec{\nabla_i} = \alpha_{1x}\frac{\partial}{\partial x_1}+\alpha_{1\tau}\frac{\partial}{\partial \tau_1}+\alpha_{2x}\frac{\partial}{\partial x_2}+\alpha_{2\tau}\frac{\partial}{\partial \tau_2}+\cdots+\alpha_{nx}\frac{\partial}{\partial x_n}+\alpha_{n\tau}\frac{\partial}{\partial \tau_n}[/math].

 

If you multiply that expression by [imath]\alpha_{qx}[/imath] you will have:

[math]\alpha_{qx}\sum_{i=1}^n \vec{\alpha_i}\cdot \vec{\nabla_i} = \alpha_{qx}\alpha_{1x}\frac{\partial}{\partial x_1}+\alpha_{qx}\alpha_{1\tau}\frac{\partial}{\partial \tau_1}+\alpha_{qx}\alpha_{2x}\frac{\partial}{\partial x_2}+\alpha_{qx}\alpha_{2\tau}\frac{\partial}{\partial \tau_2}+\cdots+\alpha_{qx}\alpha_{nx}\frac{\partial}{\partial x_n}+\alpha_{qx}\alpha_{n\tau}\frac{\partial}{\partial \tau_n}[/math].

 

Using the definition of its commutation with [imath]\alpha_{ix}[/imath], you may commute it with the explicit alphas there and obtain:

[math]\alpha_{qx}\sum_{i=1}^n \vec{\alpha_i}\cdot \vec{\nabla_i} = -\alpha_{1x}\alpha_{qx}\frac{\partial}{\partial x_1}-\alpha_{1\tau}\alpha_{qx}\frac{\partial}{\partial \tau_1}-\alpha_{2x}\alpha_{qx}\frac{\partial}{\partial x_2}-\alpha_{2\tau}\alpha_{qx}\frac{\partial}{\partial \tau_2}-\cdots-\alpha_{nx}\alpha_{qx}\frac{\partial}{\partial x_n}-\alpha_{n\tau}\alpha_{qx}\frac{\partial}{\partial \tau_n}+\frac{\partial}{\partial x_q}[/math]

 

Since it commutes with the partial derivative, the final result can be written:

[math]\alpha_{qx}\sum_{i=1}^n \vec{\alpha_i}\cdot \vec{\nabla_i} = \left\{\alpha_{1x}\frac{\partial}{\partial x_1}+\alpha_{1\tau}\frac{\partial}{\partial \tau_1}+\alpha_{2x}\frac{\partial}{\partial x_2}+\alpha_{2\tau}\frac{\partial}{\partial \tau_2}+\cdots+\alpha_{nx}\frac{\partial}{\partial x_n}+\alpha_{n\tau}\frac{\partial}{\partial \tau_n}\right\}(-\alpha_{qx}) +\frac{\partial}{\partial x_q}[/math],

 

Which can be written:

[math]\alpha_{qx}\sum_{i=1}^n \vec{\alpha_i}\cdot \vec{\nabla_i} = \left\{\sum_{i=1}^n \vec{\alpha_i}\cdot \vec{\nabla_i} \right\}(-\alpha_{qx}) +\frac{\partial}{\partial x_q}[/math],

 

That last single term, all by itself, arises when i=q, and that event occurs only once.

 

You might be confused by exactly what these i,j,q,p,l,k subscripts are all about. They are no more than letters standing for the appropriate x and tau reference labels to be in the sum. The term being summed is [imath]\vec{\alpha_i}\cdot \vec{\nabla_i}[/imath] and each “i” yields a different term in that sum but every term is operating on the same [imath]\vec{\psi}[/imath].
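
(A numerical cross-check of the manipulation above, offered as an illustrative sketch of my own rather than part of the original derivation: anti-commuting matrices built as in the earlier sketch stand in for the alphas, and ordinary numbers c_i stand in for the derivative operators, since the partials commute with the alphas.)

[code]
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(mats):
    return reduce(np.kron, mats)

# Four mutually anti-commuting "alphas" (a_i a_j + a_j a_i = delta_ij), as before.
alphas = [kron_all([sx, I2]) / np.sqrt(2), kron_all([sy, I2]) / np.sqrt(2),
          kron_all([sz, sx]) / np.sqrt(2), kron_all([sz, sy]) / np.sqrt(2)]
dim = alphas[0].shape[0]

# Commuting scalars c_i play the role of the d/dx_i in the sum over alpha_i . nabla_i.
c = np.random.rand(len(alphas))
S = sum(ci * ai for ci, ai in zip(c, alphas))

q = 2                                        # multiply through by one specific alpha_q
lhs = alphas[q] @ S
rhs = S @ (-alphas[q]) + c[q] * np.eye(dim)  # every term sign-flips; the i=q term adds c_q
print(np.allclose(lhs, rhs))                 # True
[/code]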

Here's a cute illustrative example which you might find amusing especially if you can guess the game:
I don't understand why you are putting that down. Those kinds of questions on intelligence tests always bugged the hell out of me. Not that I couldn't pick up on the series they wanted but rather that it was awfully presumptive of them to think that their answer was “correct”. Anybody with any sense at all knows that there are an infinite number of series with exactly the same first n terms and a totally different n plus first term. Based on the information available to him, five is a fine expectation. Any answer could be wrong; as I have said many times, “the future is what we do not know”!

 

What is significant here is that I have no idea as to why you brought up that little story. It certainly is not an example of the problem I am discussing. I suspect very strongly that it is no more than evidence that you have no idea of what I am doing. It reminds me very much of a post Rade made in February of this year on “physicsforums.com”. You can see my response to that post here. If you can understand that response, you might be a lot closer to understanding what I am doing than I think you are. It might be worth reading. Rade's base of information to be explained was at least presented as undefined. Your example is chock full of assumptions (presumed valid ontological concepts) which you take no trouble to explicitly list (because the size of such a list would probably be beyond accomplishment).

 

Now let's make the problem more analogous to what I am talking about. Let the spy obtain a billion exchanges between the guard corps (sounds, motions, light signals, ... “smells”, etc) but he is dealing with total aliens and has utterly no idea of what meanings the aliens attach to these things. How would you suggest he decide what the billion and first exchange should be? What you don't seem to comprehend is that the answer to that question is the result of an epistemological solution to the series, a subject of no interest to me. All I am concerned with is the constraints I can place on the “interpretation” problem; which happen to be exactly the constraints I have specified. Think of it this way: suppose I give the problem to a million people and, after a thousand years, they individually (without communicating with one another) each come up with a solution which exactly matches the known exchanges and which they all believe is the simplest solution possible. Now amongst all those solutions, let us say that there are a good number which are actually the same solution: i.e., the only difference is the particular symbols they used to refer to the specific exchanges. Think about it, do you seriously think that there exist no constraints on the “simplest interpretation” of those symbols? That is, the interpretation to be placed on the symbols these people use to represent those specific exchanges?

 

If you can not follow what I just said, you do not understand the problem I am talking about.

 

Have fun -- Dick


Those kinds of questions on intelligence tests always bugged the hell out of me. Not that I couldn't pick up on the series they wanted but rather that it was awfully presumptive of them to think that their answer was “correct”. Anybody with any sense at all knows that there are an infinite number of series with exactly the same first n terms and a totally different n plus first term. Based on the information available to him, five is a fine expectation. Any answer could be wrong; as I have said many times, “the future is what we do not know”!
But it's not that kind of question on intelligence tests at all. That is an assumption of yours which is not in line with how I posed the matter and shows how you so often tend to capsize people's meaning when you interpret what they post.

 

What is significant here is that I have no idea as to why you brought up that little story.
To show, as I said, that I know what you mean about assumptions and reasoning in the groove.

 

Your example is chock full of assumptions (presumed valid ontological concepts) which you take no trouble to explicitly list (because the size of such a list would probably be beyond accomplishment).
That depends on what you mean by "assumptions"; in a story (or fictitious example) the truth is what is posited as such. The guard corps and the patrols are ontological elements and thus valid, not presumed, and the convention between them is thus correct. The spy must guess it and the plan's success relies upon his reckoning, which is either right or wrong. The only assumption in my story is made by the spy, which turns out to be wrong.

 

Many a person hearing the riddle, after the exchanges between the real patrols and before the ending, will tend to assume the criterion is "divide by two", but this turns out to be wrong, a hasty leap to a seemingly obvious conclusion. One assumption remains typical, and much more deeply rooted, and I would expect you to know which one I'm alluding to. Can't you see my example is actually an illustration of many of the things you always say? It shows exactly what you so sternly replied to it!

 

By the way the criterion, which the riddle obviously leaves unspecified, isn't easy to guess due to the deep-rooted assumption I alluded to, and yet it is very simple. It's quite amusing to find it out and I would expect you to find it highly entertaining if you knew it. The answer to 10 should have been 3, the answer to 16 would be 7 and the answer to 14 would be 8. Want more clues?

 

Let the spy obtain a billion exchanges between the guard corps (sounds, motions, light signals, ... “smells”, etc) but he is dealing with total aliens and has utterly no idea of what meanings the aliens attach to these things.
Now that comes into my castle example more than you believe! :hyper:

 

Think about it, do you seriously think that there exist no constraints on the “simplest interpretation” of those symbols? That is, the interpretation to be placed on the symbols these people use to represent those specific exchanges?
I don't think that there exist no constraints on the “simplest interpretation” of those symbols, at all. The first problem is in defining a "simplicity function" over the whole space of interpretations and minimizing it, not an easy task. We usually go by an intuitive attribution of "simplicity" to such things.

 

If you can not follow what I just said, you do not understand the problem I am talking about.
I follow what you said but still struggle to follow the details of putting it into practice, including justification of your fundamental equation. I would be somewhat interested in knowing how it could attack the riddle, given an available source of kosher answers to chosen numbers. Give me a number, reasonable for the guards and patrol to work out, and I'll give you the correct reply for it.

No Qfwfq, you clearly misunderstand what I am doing. The fact that you would even consider making the comment,

I would be somewhat interested in knowing how it could attack the riddle, given an available source of kosher answers to chosen numbers.
You are clearly concerned with finding epistemological solutions and not at all concerned with the issue that Anssi and I are talking about. I am talking about an undefined ontology and the problem of interpreting what it means: i.e., defining concepts convenient to understanding it. Essentially this is the central issue of establishing language comprehension; an issue normally left almost entirely to “intuition” (we can do it, so why worry about how it can be done).
That depends on what you mean by "assumptions"; in a story (or fictitious example) the truth is what is posited as such.
The moment you posit anything, you are stepping across the problem I am talking about. Did you ever see that movie “Beetlejuice”? It's a ghost story, and in it someone comments that the living cannot see ghosts because they ignore them. There is a question here which everyone just seems to ignore.

 

You present stories to explain your point; how about letting me present a hypothetical story to explain my point. Suppose you die and go to heaven (for the sake of the story, we will just assume the common man's idea of heaven is valid). Now you are going to be there a long time so, since you have the time, you ask God to explain to you how he created the universe. He agrees to your request but specifies that he does not like wasting his time and there will be a timed test when he finishes and, if you fail the test, you're off to hell. The problem is that God is the greatest in every way; in fact, he is the greatest bore ever conceived and you fall asleep in his first lecture even before he gets through the introduction. You wake up just as he concludes the final lecture. (Have you ever had the dream that the test is tomorrow and you forgot to go to class all year? Well, think of this as the extreme case of such a thing.) Now you have to take the test.

 

The test is as follows: he will create a universe which has never existed before and place you in it. You will remember everything you know but it will be remembered as if it were a dream and he guarantees that none of it will have anything to do with what happens in that universe. The rules of that universe (what he uses to set it up) will be the most complex thing he can conceive of, designed to make the test as difficult as possible. Knowing what you know, what preparation for that test could you make? How would you approach such a problem?

 

I'll even give you a hint. It certainly makes no difference what that universe is or how God made it; after you are placed in it you will be confronted with something (if there is nothing, there is nothing to explain). What it will be you cannot know, but, whatever it is, it could be thought of as undefined entries in a “what is, is what is” table: i.e., you can use the concept of such a table to think about what it is you have to understand without knowing what it is. And you probably won't be aware of everything immediately, as then the solution would be trivial: “it is what it is” and there is nothing more to know.

 

How would you attack that problem?

 

Have fun -- Dick


Which simply begs one of my original questions here: if you refuse to apply your model to anything, how is it going to have any "profound implications?"

 

If anything, in this latest post you seem to be saying that you are modeling a model--yes, understanding that it's a second level of abstraction--that is not only undefined but that cannot in fact *be* defined.

 

I think we get the fact that your math needs to impose no constraints on the ontologies in your truth-probability-vectors, but I don't see how you can refuse to at least entertain us with a practical application in a specific case. Such a specific example could be quite enlightening, as your own example does indeed point to! But it would be nice to get a specific mapping rather than making a game out of it with which to taunt people. (I could say the same for Q's story here too, but he's not the one proposing the model....)

 

I thought that your example of anti-commutative systems being self-consistent was very helpful in enunciating what you're talking about (even though I'd really gotten that point before), and I don't quite understand why you don't exercise the use of example more given the "trouble" most of us are having in "getting" what you're talking about.

 

I share Q's general concern with your tendency to simply say "you just don't understand" to questions asking for clarification: I don't think you're ever going to get anyone to understand it if you continue to do this. Anssi may in fact really get what you're talking about while the rest of us don't, however without any of us being able to follow your model or what it *means* or even come up with a mathematical translation for it, it's just plain hard to tell!

 

Reality is a crutch for people who can't cope with drugs, :shrug:

Buffy


Which simply begs one of my original questions here: if you refuse to apply your model to anything, how is it going to have any "profound implications?"
In a nutshell, it applies only to itself. It is the solutions of that equation which are so fascinating.
If anything, in this latest post you seem to be saying that you are modeling a model--yes, understanding that it's a second level of abstraction--that is not only undefined but that cannot in fact *be* defined.
In many respects, that is quite true. On the other hand, I do define things as I progress but only those things which can be defined without knowing what is being referred to. For example, I have defined time as an index on the ontological elements available to us, essentially what we know of the reality we are trying to explain. As you say, it is a second level of abstraction in that we do not actually have these indices; they will arise only when we have an epistemological solution to model.

 

The problem with Rade, Qfwfq and, I suspect, you is that you all want something to apply the model to: i.e., an actual epistemological solution, whereas the problem is to maintain the generality of the model so that it covers absolutely any epistemological solution to any possible reality (which I have defined to be the same thing as a collection of valid ontological elements: what, as far as I know, exists). The issue is, is there anything I can say about reality before knowing anything about that collection of possible ontological elements. Well, I can define time; the past being the reality available to us to create epistemological solutions (the collection of valid ontological elements available to us), the future being the reality not available to us (the collection of valid ontological elements not available to us) and the present as the boundary between the past and future. Since the present only becomes available to you as it becomes available, the past can be thought of as a closed set (the boundary is part of the set).

I think we get the fact that your math needs to impose no constraints on the ontologies in your truth-probability-vectors, but I don't see how you can refuse to at least entertain us with a practical application in a specific case.
The issue here is absolute objectivity and you all want to examine a case where we both know objectivity is being laid aside. My only conclusion is to presume that you do not understand what I am doing.
I don't quite understand why you don't exercise the use of example more given the "trouble" most of us are having in "getting" what you're talking about.
For the very simple reason that my central purpose is “getting you to understand what I am talking about”: a truly objective approach to the problem. Do you really believe objectivity is an issue to be avoided?
But it would be nice to get a specific mapping rather than making a game out of it with which to taunt people.
Buffy, please believe me that I am not trying to taunt anyone here. I am trying very hard to direct your attention to the issue of being absolutely objective.
Anssi may in fact really get what you're talking about while the rest of us don't, however without any of us being able to follow your model or what it *means* or even come up with a mathematical translation for it, it's just plain hard to tell!
That translation you are talking about is a translation to a nonobjective perspective and that is the real problem here.

 

Have fun -- Dick


The problem with Rade, Qfwfq and, I suspect, you is that you all want something to apply the model to: i.e., an actual epistemological solution, whereas the problem is to maintain the generality of the model so that it covers absolutely any epistemological solution to any possible reality (which I have defined to be the same thing as a collection of valid ontological elements: what, as far as I know, exists). The issue is, is there anything I can say about reality before knowing anything about that collection of possible ontological elements.
I won't speak for the others, but for me, this "application of the model" is an aid to learning what the model does. It *might* assist in validating some limited application of the model, but that is not the *goal*: I definitely realize that any such justification would be an instance of the model and not the model itself, and thus not be relevant to your main goal.
The issue here is absolute objectivity and you all want to examine a case where we both know objectivity is being laid aside. My only conclusion is to presume that you do not understand what I am doing.
I think that's only because you jump to the conclusion that by using an example at all, we don't understand that the application of the example is not what you're after, and that is quite mistaken.

 

Just because we want to try an application to better understand what the thing means does not mean that we have to use that application as an *additional non-ontological assumption*.

 

I argue that it's somewhat akin to saying "in order to *really* understand blue, it must be explained abstractly, and if you insist on seeing an example of the color blue, you will never understand what color means."

 

It's also true of course that we might stumble upon a specific application that contradicted a *specific instance* of reality, although if "we didn't understand what you were doing" the argument that that showed a weakness in your model could be dismissed rather easily, so I can see the motivation for framing the lack of understanding in that way....

 

So when you ask:

Do you really believe objectivity is an issue to be avoided?
I say most definitely no! But my point is that by avoiding examples as a learning mechanism--separate, unrelated, and, most importantly, unusable as a foundational element of the model itself--you are making it much harder to explain what you're trying to do!

 

Slow, but learns a lot faster by example, :phones:

Buffy


I feel like only one simple assumption must be made in order to start the process.

 

That is, if something has happened frequently in the past, it is likely to happen again.

 

In other words, inductive reasoning.

 

I feel like the fact that our minds have evolved to be fundamentally dependent on this concept means that it is useful thus we need not argue for it.

 

I also feel like we cannot reason without it, so our only options are not to think or to assume it is true.

 

Not that we really could decide to reject it anyways, since our subconscious reasons based on it, which is not under our conscious mind's direct control.

 

I feel like deductive reasoning is just something that we see is always the case thus is a subset of inductive reasoning in our mind and requires no specific faculty.

 

And I feel like between the two a simple system could explain all human knowledge.


I won't speak for the others, but for me, this "application of the model" is an aid to learning what the model does.
At this moment, in this conversation, the model is not yet complete. The model is the equation; the issue is that there must always exist an interpretation of any flaw-free explanation which satisfies that equation. The “application of the model” is finding applications of the solutions to that equation. But, before we can start down that road, there are a couple of subtle issues with regard to that equation which must be addressed. Those issues concern the fact that though the number of “valid” ontological elements (what we know of reality) must be finite, there is no such constraint on the “invalid” ontological elements as they are mere figments of our imagination which are required by that epistemological construct (our explanation of reality).
It *might* assist in validating some limited application of the model, but that is not the *goal*: I definitely realize that any such justification would be an instance of the model and not the model itself, and thus not be relevant to your main goal.
Essentially, there is no “limited” application of the model. There is no way to interpret what the terms of the equation represent (i.e., relate it to reality as we perceive reality) without knowing something about the solutions. Looking at the equation from a physicist's perspective, it is essentially a many-body equation and we all should be well aware of the difficulty of solving such things: particularly when the number of variables is sufficient to represent the whole of even the most simple problem you can conceive of. We are talking here about the internal consistency of the whole.

 

As I have said before, I concluded the equation had to be valid when I was still a graduate student (sometime around 1966) and I didn't manage to drag out my first solution until some ten years later. I only kept occasionally trying to solve it because it seemed the solution ought to have an application somewhere. The main thing I am trying to do right now is convince someone that there must exist an interpretation of reality which obeys that equation (i.e., there exists a perspective of the problem of understanding reality, or any collection of ontological elements, within which that equation has to be valid). In my head that is a simple issue as it only has two essentially very trivial parts.

 

First, any representation of reality (the symbols used to express what we know or think we know) consists of free parameters invented by us and thus can be altered in any way without invalidating the epistemological solution we are looking for. I have shown that such a fact implies shift symmetry must be a characteristic of any valid epistemological solution (and no one should find that disturbing). Second, there exists no way to separate valid from invalid ontological elements (if there were, you could prove solipsism to be false). Thus the “invalid” ontological elements are totally free creations invented by us. It is via these entities that the “rule” (that flaw-free epistemological construct we have also invented) is to yield expectations consistent with our past (see my definition of “the past”). There is quite clearly a relationship between what exists and what the rule has to be. You can almost always trade one off against the other. I have used that freedom to show that there will always exist a collection of “invalid” ontological elements which will allow the rule, [imath]\sum_{i \neq j}\delta(x_i-x_j)\delta(\tau_i-\tau_j)=0[/imath], to constrain the entire collection of “valid” ontological elements to whatever they happen to be.
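
(Purely as an illustration of what that rule enforces, and a reading of my own rather than something spelled out this way in the thread: if each Dirac delta is read as an indicator of coincidence, the rule simply says that no two labelled elements may occupy the same point of the [imath](x,\tau)[/imath] plane. A tiny sketch:)

[code]
def rule_satisfied(points, tol=1e-9):
    """Check sum_{i != j} delta(x_i - x_j) * delta(tau_i - tau_j) = 0, reading each
    Dirac delta as an indicator: the sum is zero iff no two labelled elements
    share the same (x, tau) point."""
    for i, (xi, ti) in enumerate(points):
        for j, (xj, tj) in enumerate(points):
            if i != j and abs(xi - xj) < tol and abs(ti - tj) < tol:
                return False
    return True

print(rule_satisfied([(0.0, 1.0), (2.0, 1.0), (0.0, 3.0)]))  # True: all points distinct
print(rule_satisfied([(0.0, 1.0), (0.0, 1.0)]))              # False: two elements coincide
[/code]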

 

I then use a mathematical trick to put those four constraints into a single equation. This procedure is nothing if not trivial and yet absolutely everyone fights me tooth and nail to deny its validity. It reminds me very much of a cartoon I saw in the “American Scientist” one time. It showed a child standing in front of his teacher (I think there was some mathematics on the black board). The caption was “I burned my math book because there was no mention of God in it!” No one wants to think about that equation unless I can relate it to their beliefs.

 

What examples do you want? The differential constraint? For all practical purposes, that is the same constraint arising from shift symmetry in standard quantum mechanics and will generate something analogous to conservation of momentum. Or maybe you want an example of the trade off between what exists and the rule. Well, a very common example would be religion. If what exists includes the Gods, and the rule is “what happens is what they want to happen”, that is a pretty flaw-free explanation of reality. The only problem with it is that it isn't very valuable as a prediction device unless you have someone who “knows the mind of god” and that's a hard thing to be sure of. Another good example of the trade off between what exists and the rules, one which I would really prefer to avoid because I know it will just boil the blood of physics authorities, is electromagnetic theory and photons. Try to prove photons exist without using any aspects of the theory. I am not saying that electromagnetic theory is flawed; what I am saying is that its validity is a belief set.

 

Look at the history of physics; it's chock full of changes in “rules” and “what exists”. I have just proposed a new paradigm and done a pretty good job of defending the ability of that paradigm to be binding on reality. When you tell me it's not applicable to reality you should tell me why not. Now if you were to say "so what, it doesn't seem to say anything of significance", I wouldn't argue with you; it certainly doesn't seem to contain any earth-shaking propositions. Certainly the common interpretation of what those constraints seem to imply is pretty well expected.

I argue that it's somewhat akin to saying "in order to *really* understand blue, it must be explained abstractly, and if you insist on seeing an example of the color blue, you will never understand what color means."
And you will never understand what my equation means until after you have examined some of the solutions. I would like very much to begin showing how I managed to find solutions to the equation and what I think those solutions mean but I certainly will not start down that path until I have some consensus on the objective validity of my deduction. So long as it is held to be a frivolous hypothesis, I have no intention of going forward. I have been there many times and I won't go there again. The fact that the equation is purely deductive (no inductive steps) is a very important issue here.
But my point is that by avoiding examples as a learning mechanism--separate, unrelated, and, most importantly, unusable as a foundational element of the model itself--you are making it much harder to explain what you're trying to do!
I am trying to convince someone that there always exists an interpretation of reality which obeys that equation. That the deduction is valid under the terms I have defined.

 

I am sorry if my attitude puts you off but I am an old man with a lot of bad memories of authoritarian attitudes. Perhaps a little kindness might be effective. I would really like to communicate what I have discovered before I die; but I really do not like authoritarian attitudes.

 

Have fun -- Dick


You are clearly concerned with finding epistemological solutions and not at all concerned with the issue that Anssi and I are talking about.
No, that wasn't my query. Given those pairs of numbers, how would you build the "what is, is what is" table and then proceed to use it for expectations of the most likely correct answer to the next number?

 

I am talking about an undefined ontology and the problem of interpreting what it means: i.e., defining concepts convenient to understanding it.
You seem to mean that your model is good for any ontology. Fine. Doesn't this mean it should be valid for the one of my example?

 

The moment you posit anything, you are stepping across the problem I am talking about.
Not really, it's called examining a specific instance of the problem you are talking about.

 

How would you attack that problem?
I asked YOU how YOU would attack the spy's problem.

 

Essentially, there is no “limited” application of the model.
What does this mean? It's valid in general, but not in any specific case? Is it valid in the specific cases you set, such as God's test, or the flashing light in the cave?

Sounds like he is talking about the plurality of coherentism. Can't be sure since he isn't a minimalist and I don't feel like wading through all his posts...

 

In other words, for any given set of beliefs defined by certain constraints, there are infinite variations where things that do not logically contradict the given constraints vary.

 

If simple arithmetic was a set of beliefs, then talking about it in different languages, different number bases etc would all be such variations.

 

However, to relate it to skepticism, some of those variations may invalidate the original belief set without contradicting them. These would consist of reasons why the original belief set was really an illusion. The number of such invalidating variations is also infinite, and they are not necessarily all ridiculously far fetched.

 

Our only defense is that the beliefs are probably valid for the group of experiences we considered in the past, so if the invalidating variation requires a large change in experiences we can simply define the belief set as referencing the original set of experiences.

 

Example: Simple arithmetic is really false because this is all a VR program for angels and outside it math is much more complicated. Response: Then math is defined for use inside the VR program, or more generally in the world we exist in.

 

Sometimes however invalidating variations are capable of creating undesired results for something we thought the original belief set applied to. Which of course is the whole reason we brought it up, and is related to skepticism.

 

Example: Your brother's friend Mackenzie who is a high level executive in a major company is coming in town and your brother asked you to hang out with "Mackenzie" for one night while he is at a previous engagement. You decide to take the person out to a local sports bar to see a big football game that is coming on, reasoning that most guys like football. However when Mackenzie gets there, she turns out to be a woman.


Qfwfq, what part of “undefined” do you not understand? I get the definite impression that we are back to that “I burned my math book because it had no mention of God” thing.

 

As I said, your example is quite similar to Rade's, except that he at least accommodated the “undefined” issue while you haven't even acquiesced to it. I will try to show you what I am talking about.

Given those pairs of numbers, how would you build the "what is, is what is" table and then proceed to use it for expectations of the most likely correct answer to the next number?
Your comment is that all I know of reality is “three pairs of numbers” or, if one includes post #90 you could add three more pairs. That means the information consists of twelve valid ontological elements. Since you have apparently decided that the order of the pairs and the order of the exchange is known (i.e., the information on which to build your model has increased), there exists but one element in each [imath](x,\tau)_t[/imath] plane. The only rational conclusion is that there will probably be another down the road. My analysis needs to be working with “all of the information” available and the problem you have presented contains such a dearth of information that practically no conclusions of value can be arrived at. There isn't even enough information there to conclude those ontological elements are numbers. This is exactly what I was talking about when I said, “Your example is chock full of assumptions (presumed valid ontological concepts) which you take no trouble to explicitly list (because the size of such a list would probably be beyond accomplishment)”.

 

You actually mentioned “a spy”, “is sent”, “to gather”, “intelligence”, “for”, “planners”, “attack”, “against”, “a castle”, “they”, “need”, “to know”, “how”, “fake”, “patrol”, “guard”, a “patrol commander”... . If you want to include all the ontological elements you mentioned as part of the universe being explained, we still have only about ten or twelve entries for “reality”, most of them in different [imath](x,\tau)_t[/imath] planes. It's still a virtual dearth of information.
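
Purely as a bookkeeping illustration (the values below are placeholders, not a claim about how your pairs should actually be mapped), this is the kind of structure those entries amount to: each index t labels its own [imath](x,\tau)_t[/imath] plane, and with so little data each plane holds essentially one element.

[code]
# Toy sketch of the bookkeeping only: every "present" index t gets its own
# (x, tau) plane, and the handful of known entries leaves each plane nearly
# empty, which is exactly the dearth of information being complained about.
planes = {
    1: [(3.0, 0.0)],   # placeholder element in the plane t = 1
    2: [(7.0, 0.0)],   # placeholder element in the plane t = 2
    3: [(1.0, 0.0)],   # placeholder element in the plane t = 3
}

for t, elements in planes.items():
    print(f"t = {t}: {len(elements)} element(s), far too little to infer a rule")
[/code]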

 

However, you are clearly working with an epistemological construct, as you have given multiple occurrences of the same element: “the spy” for one, “the patrol” for another. And you came to this epistemological construct with no more information than what is explicitly listed in the paragraph? I think not; I think there are massive amounts of information behind your epistemological construct which you simply have failed to list. The only conclusion I can come to is that you have utterly no idea as to what I am talking about.

You seem to mean that your model is good for any ontology. Fine. Doesn't this mean it should be valid for the one in my example?
Your example is not “an example” because it is based on information not present. As I said, you have omitted a vast quantity of information for the very simple reason that the size of a valid list of the information (in the form of undefined ontological elements) would probably be beyond physical accomplishment. That is why we can only talk about these things in the abstract; the quantity of information necessary for even the very simplest example would be far beyond anything you could physically manage to actually list.
What does this mean? It's valid in general, but not in any specific case? Is it valid in the specific cases you set, such as God's test, or the flashing light in the cave?
It's valid when the sum is over the entire universe of information available to you! You omit any information possibly standing behind the epistemological construct that [imath]\vec{\psi}[/imath] is supposed to model; the interpretation of your epistemological construct which I say exists may still exist, but it may very well be a misinterpretation of what you meant.

 

The point you all miss, over and over again, is that all I am saying is that “there always exists an interpretation of any flaw-free explanation of reality which will obey my equation.” And everyone fights me tooth and nail to deny that possibility.

What examples do you want? The differential constraint? For all practical purposes, that is the same constraint arising from shift symmetry in standard quantum mechanics and will generate something analogous to conservation of momentum. Or maybe you want an example of the trade-off between what exists and the rule. Well, a very common example would be religion. If what exists includes the Gods, and the rule is “what happens is what they want to happen”, that is a pretty flaw-free explanation of reality. The only problem with it is that it isn't very valuable as a prediction device unless you have someone who “knows the mind of god”, and that's a hard thing to be sure of. Another good example of the trade-off between what exists and the rules, one which I would really prefer to avoid because I know it will just boil the blood of physics authorities, is electromagnetic theory and photons. Try to prove photons exist without using any aspects of the theory. I am not saying that electromagnetic theory is flawed; what I am saying is that its validity is a belief set.
Or, what is dark matter if it is not something we need to exist so we won't have to change the rules? All I am saying is that, if you include the entire universe, there exists an interpretation (a paradigm if you will) where my equation is valid. What in that do you find so untenable? That shift symmetry is not a characteristic of symbolic representation? Or that the existence of shift symmetry does not imply a conservation law of some kind? Do you seriously hold that conservation of momentum, if you include the whole universe, is a crock?
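
For completeness, here is the standard textbook form of that shift-symmetry argument (nothing unique to my presentation): if the probability assignment is unchanged by a common shift of all its arguments, [imath]P(x_1+a,x_2+a,\cdots,x_n+a)=P(x_1,x_2,\cdots,x_n)[/imath] for every [imath]a[/imath], then differentiating with respect to [imath]a[/imath] and setting [imath]a=0[/imath] yields [imath]\sum_i\frac{\partial}{\partial x_i}P=0[/imath]. In standard quantum mechanics, exactly that kind of differential constraint is what stands behind conservation of total momentum.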

 

Or I suppose you can't believe that the rule [imath]\sum_{i \neq j }\delta(x_i-x_j)\delta(\tau_i-\tau_j)=0[/imath] together with the freedom to create “invalid” (imaginary) ontological elements is capable of yielding exactly what is observed? If that is the case, go read my comments to Anssi.
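
(In case the notation is opaque to anyone reading along: since a Dirac delta is non-zero only when its argument vanishes, demanding [imath]\sum_{i \neq j }\delta(x_i-x_j)\delta(\tau_i-\tau_j)=0[/imath] is simply demanding that no two distinct elements ever occupy the same [imath](x,\tau)[/imath] point; an exclusion-style constraint, nothing more mysterious than that.)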

 

This whole exchange is nothing if not tiresome. You all seem to think that I am saying something that I am not saying; you are all looking for applications of my ideas within your personal paradigm. All I ask is that you admit that it appears to be a true representation of reality under my definitions: i.e., there exists a paradigm where my equation is a valid representation of certain specific constraints on reality.

 

Have fun -- Dick


Haven't had a lot of time in the past days to contribute, but reading the latest posts I thought I'd make a quick comment.

 

I don't think Qfwfq is so much doubting the validity of your attack as he is trying to figure out what it "is", i.e. what does it do and why expect that it works, as a construct. I suspect that's not scepticism as much as it is trying to find out if this topic is worth spending time with. The Internet is full of interesting topics after all :beer-fresh:

 

Even if one recognizes that their worldview is chock-full of "arbitrary" choices (as far as how we defined various "things" to exist and what properties we consequently attach to them, etc.), they do have to use that worldview to understand new ideas, as painful as it may be :hihi: (Yes Buffy, I know what you are saying also)

 

Typically when we try to understand reality, we, in our mind, break it into specific "things" that behave in specific ways (i.e. we define their characteristics), and if our expectations are met by our observations, we decide we hit the nail on the head (for the time being). E.g. relativity was a new way to define "time" (and gravity). Need I mention QM?

 

All the contributors to this thread seem to understand the philosophy that says one always has many equally valid ways to define the reality around us. I.e. given all the observations we have accumulated, there are many possible (internally coherent) worldviews we could adopt, just like you can interpret relativity and QM in many different ways ontologically. Each worldview or model functions by us having first defined its components, without having any idea about what the "real way" to define reality into components is (or rather, without having any reason to believe reality actually is, ontologically speaking, a set of components). Well, this issue goes very, very deep and has to do with any perception we could ever have about anything at all, so let's skip onwards.

 

Now Doctordick has defined the x, tau, t plane and a few other things. Their purpose is not to give you a way to make "absolutely best predictions possible". I.e. it does not give you the correct number to shout at the guards, and it doesn't even give you the probabilities for all the numbers you know.

 

EDIT: Let me reiterate, it doesn't give you the probabilities in the sense that the probability function has not been defined. It is not important what kind of function we might devise or how we chose to break reality into "ontological elements". What is more important is "what requirements must the probability function meet if it is part of a self-coherent worldview?" I.e. what is common to ALL logically viable probability functions. The definitions (x, tau, t, etc.) are given only because with them we have a way to express those constraints. Obviously it would be possible to express the same constraints with different definitions; there is nothing magical about them.
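
(To give the flavor of what such requirements look like, purely as generic illustrations and not as the specific constraints derived in this thread: any candidate function has to behave like a probability at all, e.g. [imath]P\geq 0[/imath] with the probabilities over all mutually exclusive possibilities summing to one, and on top of that it has to respect the fact that labels like x, tau and t were our own arbitrary choices, which is where symmetry constraints such as the shift symmetry mentioned earlier come from.)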

 

All it gives you is a way to express some constraints that need to be met by any internally coherent worldview you could ever make about reality (when that worldview is expressed in the form of the x, tau, t-plane).

 

I.e. it is not an attempt to specify an ontology; it does not claim reality is made of an "x, tau, t" plane. And strictly speaking, it is not even an attempt to give an ontologically ambiguous model of reality, since the issue to be stressed is NOT how one would express reality in the "x, tau, t" plane, but what necessary (though not readily obvious) constraints there are when you do.

 

I did not have much time to write this post so I hope it is not too confusing. Nevertheless I think I understand what the confusion is and I can probably explain what is going on, if this post didn't clarify things already :doh:

 

As an amusing (albeit perhaps irrelevant) side note, I thought this was a fascinating (and just goddamn funny) case of a self-conflicting worldview:

Verizon Wireless Has Trouble With Math video

 

Apparently (and this was news to me) there is a meme where people word something like $0.01 as "point zero 1 cents". The logic being that since the sum is less than a dollar, instead of saying "dollars" at the end they say the sum is in cents. It would be possible to communicate sums that way otherwise, except in our number system it causes an internal conflict. In this case the conflict is readily obvious. Is it always?
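
A trivial sketch of why the wording bites (using the $0.01 example above, in Python just to keep myself honest):

[code]
# "One cent" versus a literal reading of "point zero one cents".
one_cent_in_dollars  = 0.01         # $0.01
point_zero_one_cents = 0.01 / 100   # 0.01 cents, converted to dollars

print(one_cent_in_dollars)    # 0.01
print(point_zero_one_cents)   # 0.0001, smaller by a factor of 100
[/code]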

 

Anyway, I'll try to get back to the math soon... (And be more competent than the Verizon customer support :partycheers: )

 

-Anssi


I feel like minimalism is an important product of epistemology, meaning that in order to be able to understand our surroundings, we must assign to the definitions of ideas the normal context in which they appear.

 

It is a more general form of what makes science, math and even simple logic useful tools.

 

Science deals with objects and finding all properties of those objects so that we can better determine how they interact with other things. Math and simple logic deal with simple ideas that retain a minimalist definition because it is obvious that appending additional information to them would change their essence.

 

Ideas like honesty and even infinity are different because they are complex enough that people can begin to append additional information onto them.

 

For example, I might define honesty as "what people aren't when they knowingly fail to provide correct and useful information that someone needs to act upon".

 

When you collect a large number of such contextual definitions, philosophical matters become trivial and as precise as mathematics.

 

According to this reasoning, I wouldn't spend time reading a thread where philosophical issues were being referred to metaphorically with all kinds of made up formulas etc...

 

I am not writing this to be antagonistic. I don't like it when people give others the idea that philosophy is some necessarily arcane investigation that all but the most intellectual people are barred from. It isn't. It is something that everyone needs to study and understand so that the efficiency of the human race can be increased.


I don't like it when people give others the idea that philosophy is some necessarily arcane investigation that all but the most intellectual people are barred from. It isn't. It is something that everyone needs to study and understand so that the efficiency of the human race can be increased.

 

Yes, definitely.

 

I feel like minimalism is an important product of epistemology, meaning that in order to be able to understand our surroundings, we must assign to the definitions of ideas the normal context in which they appear.

 

It is a more general form of what makes science, math and even simple logic useful tools.

 

Science deals with objects and finding all properties of those objects so that we can better determine how they interact with other things. Math and simple logic deal with simple ideas that retain a minimalist definition because it is obvious that appending additional information to them would change their essence.

 

And in terms of our worldview, whatever math you use to describe reality, that math is describing the behaviour of some elements that you have decided to tag with an identity. The problem being, we did not begin science with a set of "objects" whose behaviour we are just trying to probe. Instead, we decide what constitutes an "object" (or any defined entity like "space" or "time"), the large deciding factor in these definitions being "what makes it simple to understand/predict the behaviour of reality".

 

I mentioned space and time; notice that in most cases that math is describing the behaviour of those "sensible things" in some "space" (whose properties/essence you defined), and in some "time" (which you also defined). As a related note, it should be clear that there's no point in defending some arbitrarily chosen ontological take on spacetime just because it happens to seem particularly aesthetic or elegant together with the rest of one's worldview.

 

Ideas like honesty and even infinity are different because they are complex enough that people can begin to append additional information onto them.

 

For example, I might define honesty as "what people aren't when they knowingly fail to provide correct and useful information that someone needs to act upon".

 

When you collect a large number of such contextual definitions, philosophical matters become trivial and as precise as mathematics.

 

Well, a lot of craziness ensues when people have defined some specific semantics for concepts like "good and evil" or "freedom", and then try to force their view as the true and correct one. Anything you say about such concepts might make perfect sense inside your worldview, just like good and evil make perfect sense if one believes there is a god who decides who goes to heaven and who goes to hell (or some other cosmic justice). But then it is once again plain to see that a lot of "arbitrary" assumptions have been made before we ended up with those definitions for "good" and "evil". Relevant as this may be, it is leading us away from the core of the topic...

 

According to this reasoning, I wouldn't spend time reading a thread where philosophical issues were being referred to metaphorically with all kinds of made up formulas etc...

 

The formulas are there just to extend our logical abilities. A mere tool, like the quadratic formula.
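
And the analogy holds up: [imath]x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}[/imath] says nothing about what a, b and c "really are"; it only extends what you can work out once you have already chosen to describe a situation as [imath]ax^2+bx+c=0[/imath].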

 

-Anssi

