
An explanation of what I am talking about.


Doctordick


There are actually two different geometries used in common physics....

Thank you DD for the response you gave to my question. I want to be sure I have it correct. Here is a summary of what I "think" you explained:

 

1. Newton...... (x,y,z,"t"), where "t" = "which specific three dimensional drawing describes the entity at time t "

 

....with regard to Newton's picture of what is going on, one should always remember that t is actually a parameter used to specify change (t merely defines which particular displacement in x is being talked about)---(your comments about Newton "t").

====

 

2. Einstein......(x,y,z, "ict"), where "i" = the square root of -1, "c" = speed of light, "t" = (the concept of Newton above).

 

"This was Einstein's “space-time continuum”; a four dimensional space (x,y,z,ict)"

 

====

 

3. Doctor Dick....(x,y,z, "tau", "t")---a five dimensional geometry

 

And, here are your comments about your geometry...

 

"Now Gamow explained that “time” (what clocks measure) was another dimension orthogonal to x, y and z. He also commented that clocks actually measured “proper time”, a thing Einstein called “tau”."

 

"At any rate, my mental picture of the world was through that four dimensional picture together with time as an evolution parameter and the tau axis being projected out."

 

"It gave all the correct answers with regard to relativity so I presumed it was what Einstein had in mind."

 

"Some problems were easier in Einstein's picture and some were easier in my picture so I always used the picture most convenient to the problem in front of me (note that my picture was essentially a four dimensional Newtonian picture which allowed Newtonian dynamics to give valid answers not an easy problem in Einstein's picture)."

 

"When I was taught quantum mechanics, it seemed to me that setting mass to be momentum in the tau direction was a simple solution to that projection problem."

 

"So all our experiments are performed with objects made of entities momentum quantized in the tau direction and they are performed in laboratories constructed of entities which are also momentum quantized in the tau direction."

 

====

 

I have a question, DD.

 

From the above, you indicate that your geometry is a 5-dimensional type, but that of Einstein is 4-dimensional.

 

But, it seems to me that one can view the Einstein approach also as being 5-dimensional. That is, the three (x,y,z), then another dimension ("i" x "c"), finally the fifth = "t", which is the Newton "t".

 

So, is it correct to say that your concept of the "tau" projection is the quantized entity with mass moving in a dimension ("i" x "c") ?? ---and that this is what clocks measure ??

 

Clearly, the dimension ("i" x "c") is not the "t" (time) of Newton.

 

====

 

Thanks in advance for helping me understand your thinking on this topic.


I am moved to make a minor correction in your English. In #1 the thing should be, “In that case, one can use the parameter “t” to identify which specific three dimensional drawing describes the entity at time t.” Note the changes in square brackets.

1. Newton...... (x,y,z,"t"), where "t" = "which specific three dimensional drawing [is being described] at time t "

 

....with regard to Newton's picture of what is going on, one should always remember that t is actually a parameter used to specify change (t merely defines which particular displacement in x is being talked about [in a case where x changes])---(your comments about Newton "t").

What Einstein really did was to make Newton’s parameter another dimension of space (essentially the space-time continuum). In essence it is a four dimensional static space (static because Einstein’s physics has no "parameter" for change). The future and the past are merely different places in Einstein's space-time continuum. That is the source of the "many worlds" idea behind quantum mechanics interpretations.
3. Doctor Dick....(x,y,z, "tau", "t")---a five dimensional geometry
No, mine is a four dimensional space together with a parameter to reference change.
But, it seems to me that one can view the Einstein approach also as being 5-dimensional. That is, the three (x,y,z), then another dimension ("i" x "c"), finally the fifth = "t", which is the Newton "t".
No, the factor (“i” x ”c”) is merely a scaling factor being imaginary in order to yield the proper relativistic transformations. In Einstein’s physics, evolution is defined by changes in that fourth (imaginary) dimension.
Clearly, the dimension ("i" x "c") is not the "t" (time) of Newton.
("i" x "c") is a mathematical value, not a dimension!

 

Sorry about being slow to answer but I have been quite busy with other issues at the moment.

 

Have fun -- Dick


...No, mine is a four dimensional space together with a parameter to reference change

 

Thank you for correcting my understanding.

 

So, based on your comments above, as I now understand the issue, both your geometry and that of Einstein are four dimensional approaches. OK, so here is my revised understanding:

 

1. Einstein......(x,y,z, "ict"), where "i" = the square root of -1, "c" = speed of light, "t" = (the concept of Newton above).

 

2. Doctor Dick....(x,y,z, "tau") + "t", which is a "parameter to reference change" (note--clocks measure "tau", and not "t")

 

===

The main difference between the two above being that Einstein has the evolution in the fourth dimension, his "ict"

"In Einstein’s physics, evolution is defined by changes in that fourth (imaginary) dimension."

 

whereas...in your approach above you add a Newtonian "t" parameter to (x,y,z,tau) and let the "t" be a reference to change.

...No, mine is a four dimensional space together with a parameter to reference change

 

Thus, in short, you remove "t" from "space", and you let your "tau" + "x,y,z" = "space" ?? --would this be correct ??

 

==

 

Finally, I need help understanding your logic for this statement..

 

...No, the factor (“i” x ”c”) is merely a scaling factor being imaginary in order to yield the proper relativistic transformations....("i" x "c") is a mathematical value, not a dimension!
OK, but how then can the claim be made that the "ict" of Einstein is the 4th dimension, since clearly the (i and c) factors are part of the Einstein 4th dimension ? Using your logic, the "ict" of Einstein is not a "dimension", but a scaling factor that allows for the proper relativistic transformation ?? I am sure I am missing some important detail.

 

Have a wonderful holiday season Doctor Dick.


Okay maybe I'll make a quick comment here also;

 

Thank you for correcting my understanding.

 

So, based on your comments above, as I now understand the issue, both your geometry and that of Einstein are four dimensional approaches. OK, so here is my revised understanding:

 

1. Einstein......(x,y,z, "ict"), where "i" = the square root of -1, "c" = speed of light, "t" = (the concept of Newton above).

 

2. Doctor Dick....(x,y,z, "tau") + "t", which is a "parameter to reference change" (note--clocks measure "tau", and not "t")

 

===

The main difference between the two above being that Einstein has the evolution in the fourth dimension, his "ict"

 

There are a lot of differences, and it might be helpful if you take a good look at the standard special relativistic transformation (i.e. the Lorentz transformation). You can find a lot of information on the internet.
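For reference, the standard Lorentz boost along the x axis (the ordinary textbook form, quoted here just so you don't have to hunt for it) is

[math]t'=\gamma\left(t-\frac{vx}{c^2}\right),\qquad x'=\gamma(x-vt),\qquad y'=y,\qquad z'=z,\qquad \gamma=\frac{1}{\sqrt{1-v^2/c^2}}.[/math]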

 

But most of all it might be helpful to remind yourself what we were really talking about;

 

The x,y,z,tau space is a way to represent any possible worldview (different worldviews that define different sorts of elements would just map differently moving elements, given the same source data).

 

If you look at the OP at:

http://hypography.com/forums/philosophy-of-science/18861-an-analytical-metaphysical-take-special-relativity.html

 

It starts from the point where it has been established that, given the symmetry arguments, all the defined elements, no matter what the specific definitions are, must be moving at a constant velocity in terms of the evolution parameter "t", inside the x,y,z,tau space.

 

That is, when the defined elements are viewed from a coordinate system where the momentum of the entire universe vanishes.

 

Then it is discussed that if you want to view the same elements from some different coordinate system (i.e. one where the center of mass of the universe is moving), and still have all the elements move at the same constant velocity in all directions, the transformation involved is exactly the Lorentz transformation.

 

Then, having arrived at the standard physics definitions for circumstances that are taken to mean "mass", we are analyzing the expectations for a dynamic system, in a worldview where one has defined a massless oscillator (like a photon) and massive mirrors. In particular, we are interested in how that construction racks up complete cycles.

 

It is shown why and how the expectation is that the cycle count (when compared to "t") will be a function of the chosen coordinate system (inertial frame), and the expectations are exactly the same as in the theory of relativity. Only, in the theory of relativity the cycle count is often equated with metaphysical "time", rather than taken as an expected dynamic behaviour of a carefully defined set of elements.
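For concreteness, the usual transverse light-clock calculation (completely standard relativity, quoted here only as an illustration) gives exactly that frame dependence: with mirrors a distance L apart, one complete cycle takes [imath]\Delta t_0=2L/c[/imath] in the rest frame of the apparatus, while in a frame where the apparatus moves at speed v the photon follows a diagonal path, so

[math]c\,\frac{\Delta t}{2}=\sqrt{L^2+\left(\frac{v\,\Delta t}{2}\right)^2}\quad\Rightarrow\quad\Delta t=\frac{2L/c}{\sqrt{1-v^2/c^2}}=\gamma\,\Delta t_0,[/math]

i.e. the construction racks up fewer complete cycles per unit "t" when viewed from the moving coordinate system.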

 

You have to really think about what that connection between "symmetry arguments to object definitions" and "relativistic time relationships" means. It means relativity is a characteristic that arises - in one form or another - in a self-coherent worldview, due to the way it structures data (i.e. transforms unknown data patterns into clearly defined elements). We are talking about immaterial definitions on top of unknown reality. I.e. relativity is not about the actual behaviour of the underlying data, but about how it must be structured into, let's say, a "meaningful form".

 

"t", at all times, stands for the parameter keeping track of "change", including when you track changes in that construction of mirrors and photons. (Think a bit about how "c" is also parameter having to do with "speed", and underlies our expectations of a clock, in completely standard physics)

 

I think this should be helpful for you also;

 

Post #48:

http://hypography.com/forums/philosophy-of-science/18861-an-analytical-metaphysical-take-special-relativity-5.html#post271521

 

whereas...in your approach above you add a Newtonian "t" parameter to (x,y,z,tau) and let the "t" be a reference to change.

 

Thus, in short, you remove "t" from "space", and you let your "tau" + "x,y,z" = "space" ?? --would this be correct ??

 

It should be clear when it is discussed in that other thread. Just keep in mind that the x,y,z,tau space with motion ("t") is merely a way to represent and analyze any worldview.

 

Finally, I need help understanding your logic for this statement..

 

OK, but how then can the claim be made that the "ict" of Einstein is the 4th dimension, since clearly the (i and c) factors are part of the Einstein 4th dimension ? Using your logic, the "ict" of Einstein is not a "dimension", but a scaling factor that allows for the proper relativistic transformation ?? I am sure I am missing some important detail.

 

That all has to do with the absolutely standard way relativity is usually handled. Minkowski suggested that relativity be handled in terms of "relativistic spacetime", where "t" (time) is conceived as a dimension of its own, and the transformation that gets you from one inertial frame to another is the Lorentz transformation. I.e. the point is that it transforms (scales) not only the spatial dimensions, but the "time" dimension also. That all led to the implication of an ontologically real spacetime where "time" is a dimension, and since it is transformed (in a way that makes the notion of simultaneity different for each inertial frame), people talk about how relativity implies "past and future exist all the time", in a static manner. (Just think about how two different observers have a differently "skewed" spacetime around them, but still they are part of the same reality, and you should get it.)
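One line of math makes that "skew" concrete (ordinary Minkowski bookkeeping, nothing more): in a boost along x,

[math]t'=\gamma\left(t-\frac{vx}{c^2}\right),[/math]

so two events with the same t but different x get different t'; each inertial frame slices the same collection of events into "simultaneous" sets differently.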

 

Anyway, to make a long story short, the analysis of special relativity has a lot to do with expectations that arise from object definitions in terms of dynamic structures we call "clocks". It's best you don't take the Minkowski representation, where "time" is taken to transform in a metaphysical sense due to choosing a different coordinate system, as an ontological representation of reality. Likewise, it's best you don't take DD's definitions as any sort of argument about "alternative ontology"; it is instead an analysis ABOUT relativity.

 

Sorry if I'm slightly sloppy, just writing this all quickly... Merry Christmas all!

-Anssi


OK, but how then can the claim be made that the "ict" of Einstein is the 4th dimension, since clearly the (i and c) factors are part of the Einstein 4th dimension ? Using your logic, the "ict" of Einstein is not a "dimension", but a scaling factor that allows for the proper relativistic transformation ?? I am sure I am missing some important detail.
Your problem is that you do not have much experience in doing analytical geometry. Geometry is concerned with representations of things in a defined space. When one says one is dealing with (x,y,z) space, one means that their picture is a three dimensional space (that would be three orthogonal directions in a conceptual space). The letters x, y and z are a reference to what variations in those three different directions are going to be called. The actual directions are commonly referred to as [imath]\hat{x}[/imath], [imath]\hat{y}[/imath] and [imath]\hat{z}[/imath]. In some papers these directions are simply referenced by bolding the variables, such as (x,y,z); however, that notation, though easy, is often missed.

 

The exact correct notation for a point in Newton’s three dimensional picture would be [imath]\vec{X}(t)=x(t)\hat{x}+y(t)\hat{y}+z(t)\hat{z}[/imath]. A point in Einstein’s four dimensional picture would be represented by [imath]\vec{X}(t)=x\hat{x}+y\hat{y}+z\hat{z}+ct\hat{t}[/imath]. The “c” is nothing more than a scale factor due to the fact that the units with which time is measured are not the same as the units with which x, y and z are measured. The “i” is no more than a notification that the time axis has “imaginary” status as opposed to the other axes: i.e., [imath]\hat{x}\cdot\hat{x}= \hat{y}\cdot\hat{y}=\hat{z}\cdot\hat{z}=unity[/imath] whereas [imath]\hat{t}\cdot\hat{t}=-1[/imath]. Of issue is the fact that time is a direction in Einstein’s space-time continuum. It serves as a parameter of evolution when one examines how things change in time but is certainly not the simple parameter conceived of by Newton.

 

In my four dimensional picture a point is represented by the expression [imath]\vec{X}(t)=x(t)\hat{x}+y(t)\hat{y}+z(t)\hat{z}+\tau(t)\hat{\tau}[/imath]. It is explicitly a four dimensional space and tau is of exactly the same real nature as the other directions and time is, once again, a simple parameter of evolution; however, so long as one’s interest is restricted to entities momentum quantized in the tau direction (i.e., massive entities) that dimension is simply projected out by the uncertainty principle (i.e., if momentum in the tau direction is exactly known, the position in the tau direction is unknowable).
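A minimal way to spell out that projection, using nothing but the ordinary uncertainty relation together with the identification of mass with momentum in the tau direction described above:

[math]\Delta\tau\,\Delta p_\tau\geq\frac{\hbar}{2},[/math]

so for an entity whose tau momentum is exactly quantized ([imath]\Delta p_\tau=0[/imath]) the position along [imath]\hat{\tau}[/imath] is completely undetermined, and the tau direction effectively drops out of any picture built from massive entities.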

 

Have a wonderful holiday season yourself -- Dick


....The x,y,z,tau space is a way to represent any possible worldview (different worldviews that define different sorts of elements would just map differently moving elements, given the same source data)....
Anssi, thank you for a very useful explanation in your last post. We have gone over the statement above before in other posts (which I think is of fundamental importance) and it does make sense. So, suppose you have observers Q and Q' and Q''....the (x,y,z,tau) space +"t" is a way for each of them to "map" moving elements, even if they had different definitions for the elements (e.g., different worldviews), which they all agree are elements to start with.

 

But, does this explanation really show how the approach of Doctor Dick differs from that of Einstein ??--that is, is not the geometry of Einstein, his (x,y,z,"ict"), also a way to represent any possible worldview about undefined elements for observers Q, Q', and Q'' ? That is, does not (x,y,z,"ict") accomplish the same end result as (x,y,z,tau)+"t" ? I believe Doctor Dick indicated he prefers (x,y,z,"ict") approach for some applications in physics (see a previous post of his in this thread).

 

I think perhaps too much is being made about the uniqueness of the approach of Doctor Dick to map "any worldview" (such is done also by Einstein). For me, what I find unique is that Doctor Dick does the above mapping by the separation of "space" and "time", whereas for Einstein it is "spacetime" all the way down to the essence of existence itself.

The reason I like the approach of Doctor Dick is because it follows my definitions of space and time, that is, time is that which is intermediate between moments, space is that which is intermediate between existents. So, for me, space and time are separate concepts by definition, and NOT connected by necessity as "spacetime". They only "may be" connected when any observer Q, Q', Q''... decides to connect them by thinking about that which is intermediate between two moments of existents. That is, only when you attempt to connect moments and existents into your worldview would you follow the approach of Einstein. So, we could say that the Einstein approach is a limited case of the more general approach of Doctor Dick (when the human mind thinks about moments and existence united)--well, this is the way I look at this issue. So, this is why I like the approach of Doctor Dick because it helps with my understanding of the relationship between space and time as separate (in the general) and united (in the specific) concepts dealing with existence, and the human attempt to explain it.

 

But, perhaps I am missing something very basic.


A point in Einstein’s four dimensional picture would be represented by [imath]\vec{X}(t)=x\hat{x}+y\hat{y}+z\hat{z}+ct\hat{t}[/imath]. The “c” is nothing more than a scale factor due to the fact that the units with which time is measured are not the same as the units with which x, y and z are measured. The “i” is no more than a notification that the time axis has “imaginary” status as opposed to the other axes...
Doctor Dick, thanks for the explanation, but should not the Einstein geometry be [imath]\vec{X}(t)=x\hat{x}+y\hat{y}+z\hat{z}+ci\hat{t}[/imath] ?, and not as you stated above, given that we need to include the "i" somewhere in Einstein ? [edit: if t*t="i", then, to get "ict" we would need (t*t, c, "t") --correct?]

 

Also, you show your four dimensional geometry a bit differently here, than as before ? Before it was given as [(x,y,z,tau) + "t"], but now you show [imath]\vec{X}(t)=x(t)\hat{x}+y(t)\hat{y}+z(t)\hat{z}+\tau(t)\hat{\tau}[/imath] and it seems to imply that the "t" and "tau" are united in a 4th dimension, when in your previous explanation of [(x,y,z,tau)+"t"] space and time (the evolution parameter concept of time) are separate. I find this approach much more to my liking, but now the question comes, which is it for you -- or are you saying they are the same ?

 

(1) [(x,y,z,tau)+"t"]

or

(2) [imath]\vec{X}(t)=x(t)\hat{x}+y(t)\hat{y}+z(t)\hat{z}+\tau(t)\hat{\tau}[/imath]

 

Please bear with me, I am getting to a final understanding I hope.


Doctor Dick, thanks for the explanation, but should not the Einstein geometry be [imath]\vec{X}(t)=x\hat{x}+y\hat{y}+z\hat{z}+ci\hat{t}[/imath] ?
First, what you have written is patently wrong. To begin with, in Einstein’s geometry, time is one of the components defining a location in the “space-time continuum” and not a parameter on the vector locating a point in his space: i.e., you should have [imath]\vec{X}[/imath], not [imath]\vec{X}(t)[/imath]. Note further that the other three components of [imath]\vec{X}[/imath] (as you have written them) consist of two parts, the magnitude of the component (indicated by the variables named x, y and z) and the direction of the component (indicated by the unit vectors named [imath]\hat{x}[/imath], [imath]\hat{y}[/imath] and [imath]\hat{z}[/imath]). In your time component, you have a unit vector called [imath]\hat{t}[/imath] but your “magnitude” of that time component is written as “ci”, which is a constant and never changes; therefore your vector [imath]\vec{X}[/imath] does not point to a general point in your space-time continuum but only to the points where t=1 (or perhaps where t=ci depending on one's interpretation).
, and not as you stated above, given that we need to include the "i" somewhere in Einstein ?
The problem here is that people absolutely never denote positions in Einstein’s space-time continuum with old-fashioned vector notation; they simply use the common convention of denoting a point as (x,y,z,t). As a consequence my attempt to do so is subject to a very real lack of an accepted protocol for denoting that imaginary component. Perhaps a much more acceptable notation would be [imath]\vec{X}=x\hat{x}+y\hat{y}+z\hat{z}+ct\hat{t}[/imath] together with the statement that [imath]\hat{x}\cdot\hat{x}=\hat{y}\cdot\hat{y}=\hat{z}\cdot\hat{z}=-\hat{t}\cdot\hat{t}=1[/imath]. That “ct” instead of “t” for the magnitude of the time component is necessary because “t” is not measured in the same units as x, y and z.
[edit: if t*t="i", then, to get "ict" we would need (t*t, c, "t") --correct?]
I don’t think you are familiar with the various meanings of the operator “*”. At any rate what you wrote is not correct in any definition I am aware of.
Also, you show your four dimensional geometry a bit differently here, than as before ? Before it was given as [(x,y,z,tau) + "t"], but now you show [imath]\vec{X}(t)=x(t)\hat{x}+y(t)\hat{y}+z(t)\hat{z}+\tau(t)\hat{\tau}[/imath] and it seems to imply that the "t" and "tau" are united in a 4th dimension, when in your previous explanation of [(x,y,z,tau)+"t"] space and time (the evolution parameter concept of time) are separate.
I think you are misunderstanding something there. My geometry is (and always has been) a four dimensional Euclidean geometry (consisting of the components x, y, z and tau) where t is a parameter of change not a direction in the geometry.
I find this approach much more to my liking, but now the question comes, which is it for you -- or are you saying they are the same ?

 

(1) [(x,y,z,tau)+"t"]

or

(2) [imath]\vec{X}(t)=x(t)\hat{x}+y(t)\hat{y}+z(t)\hat{z}+\tau(t)\hat{\tau}[/imath]

I don’t know where you got that expression [(x,y,z,tau)+”t”]. I may have written something like that down but I suspect you are omitting some details expressed through context. I would interpret what you have quoted to mean a point in that four dimensional space represented by (x,y,z,tau) being observed at time “t”. That would make the two expressions you quote to be exactly the same thing.

 

There are two very different uses of geometry which are often confused with one another. The first is a geometry used for the purpose of displaying observed relationships between independently measured variables. And the second would be a geometry used for the purpose of displaying relationships between interconnected measured variables.

 

The first arises in science because the human mind is a highly visual analyzing machine. That is, it is much easier for most all humans to see how two variables are related if they are given a visual graph of them rather than a simple list of paired numbers. That is, if they are shown a line in a two dimensional space where the two directions are each tied to one of the variables, moving their attention along the line connecting those paired numbers yields a much more comprehensive demonstration than the simple list of those two variables.

 

It is very important in such a graph that lines of fixed values of either of those two variables do not intersect because, if they do, that means that the graph allows no difference in the orthogonal variable at such a point: i.e., at such a non-Euclidean intersection point, the form of the graphic display itself forces a relationship between the two variables and thus does not allow for all possibilities. This is essentially the old “parallel lines do not intersect” axiom of Euclidean geometry.

 

Thus you will never find experimentalists using anything except Euclidean geometry as a mechanism to display their results when they do not know the relationships between those variables.

 

The second case (non-Euclidean geometry) is used to display relationships within the data which are known to be correct. For example, spherical geometry shows the correct relationships between data measurements constrained to the surface of a sphere. If the measurements are not taken from the surface of a sphere, then the relationships between the variables deduced from that geometric analysis will not be correct. It should be noted that the coordinate variables measured on the surface of a sphere are “not” independent variables.

 

Likewise measurements in Einstein’s space-time continuum are not independent variables. That is why it is called a theory: i.e., the theory is that the measurements are indeed consistent with Einstein’s space-time continuum! It seems to be correct because the relationships deduced from that theory (from analysis of his non-Euclidean geometry) are, to date, exactly what is observed.

 

My presentation is not a theory. It is a general way of examining absolutely independent variables. That is why it has to be cast in a Euclidean geometry. It is not the same as Einstein’s theory at all and that is why the fact that it reproduces exactly the same results is so fascinating. My analysis is an absolute tautology and the results are no more than a direct consequence of how the various entities and the measurements are defined.

 

Have fun -- Dick


Anssi, thank you for a very useful explanation in your last post. We have gone over the statement above before in other posts (which I think is of fundamental importance) and it does make sense. So, suppose you have observers Q and Q' and Q''....the (x,y,z,tau) space +"t" is a way for each of them to "map" moving elements, even if they had different definitions for the elements (e.g., different worldviews), which they all agree are elements to start with.

 

But, does this explanation really show how the approach of Doctor Dick differs from that of Einstein ??--that is, is not the geometry of Einstein, his (x,y,z,"ict"), also a way to represent any possible worldview about undefined elements for observers Q, Q', and Q'' ? That is, does not (x,y,z,"ict") accomplish the same end result as (x,y,z,tau)+"t" ?

 

No, the definitions of the geometry do not accomplish the same end results at all.

 

When you say Einstein's geometry is also a way to represent any possible worldview, you must be thinking that any sorts of entities moving in any sorts of funky ways can be drawn into a spacetime diagram. But you are forgetting that the geometry itself is a statement about relativistic time relationships between moving coordinate systems (= moving observers).

 

This is really just standard relativity but maybe it's helpful to see it visualized:

 

Here's a pre-relativity Galilean transformation:

File:Galilean transform of world line.gif - Wikipedia, the free encyclopedia

(Galilean transformation - Wikipedia, the free encyclopedia)

 

I.e. when time is drawn as an axis of its own, the events from the future are approaching you at the same speed even when changing coordinate system (i.e. when the worldline curves).

 

Compare to relativistic transformation:

File:Lorentz transform of world line.gif - Wikipedia, the free encyclopedia

 

You have to imagine the horizontal line there to get to the implied "simultaneity". Notice how the events are closer to you or further away from you and your simultaneity when you change coordinate system. That's all just a logical consequence of treating C as isotropic. The Lorentz transformation is the transformation that you see there, and it is the transformation that preserves C as isotropic when you change from one coordinate system to another. That is sort of the point of Minkowski's spacetime geometry.
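If it helps, here is a quick numeric sketch of that claim (a toy check I am adding purely for illustration; the boost velocity and the event are arbitrary numbers, and the units are chosen so that c = 1):

[code]
import math

def lorentz_boost(t, x, v, c=1.0):
    """Transform an event (t, x) into a frame moving at velocity v along x."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t_prime = gamma * (t - v * x / c ** 2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

c = 1.0
v = 0.6 * c              # arbitrary boost velocity
t, x = 2.0, 2.0 * c      # a light-like event: x = c * t

t2, x2 = lorentz_boost(t, x, v, c)

print(x2 / t2)                                     # -> 1.0, i.e. still c
print(c**2 * t**2 - x**2, c**2 * t2**2 - x2**2)    # interval is 0.0 in both frames
[/code]

The same check on a slower-than-light event shows the coordinate velocity changing while the interval stays put, which is the "skewing" of simultaneity mentioned above.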

 

So, the reason people say Minkowski geometry "unifies space and time" is because it is a statement about the relationship between space and time when you change coordinate systems.

 

As opposed to this, the [imath]x,y,z,\tau[/imath] space is just Euclidean space and the relativistic time relationships are not embedded in the geometry. They arise from the analysis of the impact that various man-made definitions have on each other. "t" is a somewhat fundamental part of a worldview because it is simply the parameter tracking "changes". But whatever clocks do or measure is not considered fundamental; it is a function of the definitions underlying the construction we call "a clock".

 

Also, the [imath]x,y,z,\tau[/imath] space did not arise just to be able to analyze relativity; it arose from the definitions behind "Universal Explanation". "Universal" as in entirely general (like a Universal Turing Machine is a general computer). The constant velocity of the defined elements arose as a logical consequence of the symmetry requirements early on in the analysis, and also had nothing to do with relativity at that point.

 

You could say that relativistic time relationships arise, in part, from the definitions of massless elements, and how those massless elements must be treated in self-coherent ways when changing coordinate system. That should ring a bell to anyone who knows how relativity was conceived, as a logical consequence of Maxwell's equations, and the need to treat them self-coherently when changing coordinate system.

 

The epistemological analysis is an exact demonstration of the connection between epistemological necessities (the symmetries) and the relativistic definitions that result from them.

 

I believe Doctor Dick indicated he prefers (x,y,z,"ict") approach for some applications in physics (see a previous post of his in this thread).

 

He is saying it is very handy for certain types of problems. That should be expected of course, just like different quantum mechanical interpretations are handy for different types of problems. I would also readily agree with him that it's good mental hygiene to keep that sort of "fundamental evolution parameter" (whatever tracks changes, and is part of the definition of speed "C") explicitly separate from the measured clock cycles of electromagnetic constructions.

 

The reason I like the approach of Doctor Dick is because it follows my definitions of space and time, that is, time is that which is intermediate between moments, space is that which is intermediate between existents.

 

Well I wouldn't know anything about that.

 

So, for me, space and time are separate concepts by definition, and NOT connected by necessity as "spacetime". They only "may be" connected when any observer Q, Q', Q''... decides to connect them by thinking about that which is intermediate between two moments of existents. That is, only when you attempt to connect moments and existents into your worldview would you follow the approach of Einstein. So, we could say that the Einstein approach is a limited case of the more general approach of Doctor Dick (when the human mind thinks about moments and existence united)--well, this is the way I look at this issue. So, this is why I like the approach of Doctor Dick because it helps with my understanding of the relationship between space and time as separate (in the general) and united (in the specific) concepts dealing with existence, and the human attempt to explain it.

 

I guess if you can follow the analysis from the beginning to special relativity, i.e. if you can understand how relativistic time relationships arise in epistemological sense, then yes you could get your answer about how space and time are different types of concepts underlying our comprehension of reality, and how it is still valid to "unify" them in the way relativity does.

 

-Anssi

ps, I also could not figure out at all why you said [imath]\vec{X}(t)=x(t)\hat{x}+y(t)\hat{y}+z(t)\hat{z}+\tau(t)\hat{\tau}[/imath] implies that "t" and [imath]\tau[/imath] are united in the 4th dimension.


ps, I also could not figure out at all why you said [imath]\vec{X}(t)=x(t)\hat{x}+y(t)\hat{y}+z(t)\hat{z}+\tau(t)\hat{\tau}[/imath] implies that "t" and [imath]\tau[/imath] are united in the 4th dimension.
This was a misinterpretation on my part, as was clarified by Doctor Dick in his post previous to yours--it is found in this statement:

 

Originally Posted by Rade: I find this approach much more to my liking, but now the question comes, which is it for you -- or are you saying they are the same ?

 

(1) [(x,y,z,tau)+”t”]

or

(2) [imath]\vec{X}(t)=x(t)\hat{x}+y(t)\hat{y}+z(t)\hat{z}+\tau(t)\hat{\tau}[/imath]

 

I don’t know where you got that expression [(x,y,z,tau)+”t”]. I may have written something like that down but I suspect you are omitting some details expressed through context. I would interpret what you have quoted to mean a point in that four dimensional space represented by (x,y,z,tau) being observed at time “t”. That would make the two expressions you quote to be exactly the same thing.

 

So, [imath]\vec{X}(t)=x(t)\hat{x}+y(t)\hat{y}+z(t)\hat{z}+\tau(t)\hat{\tau}[/imath], according to DD, is the same as the expression [(x,y,z,tau)+”t”]. That is, space (x,y,z,tau) and time ("t") are separate--there is no concept of "spacetime" until the human mind decides to connect them by thinking about the relationship of "moments" and "existents" (this is my way of looking at it). The expression [(x,y,z,tau)+”t”] is easier for me to visualize than [imath]\vec{X}(t)=x(t)\hat{x}+y(t)\hat{y}+z(t)\hat{z}+\tau(t)\hat{\tau}[/imath], and since they are "exactly the same" according to DD I will use the first, although I agree it is not in proper mathematical format.

 

I also must keep in mind what the DD Fundamental Equation means to me (which you, Anssi, agreed with some posts ago):

 

The Fundamental Equation (of Doctor Dick) is our interpretation of how nomena are transformed into ontological elements and has absolutely nothing to do with reality (Rade comment).

 

So, nomena have a place in space (x,y,z,tau), and that which is intermediate between nomena we call time ("t"). This is true for any observer Q, Q', Q'' no matter how fast or slow they are moving. We use the Fundamental Equation of Doctor Dick to form a worldview of this process (e.g., we mentally transform nomena into ontological elements). This is how I see it; let me know where I am missing something.


The Fundamental Equation (of Doctor Dick) is our interpretation of how nomena are transformed into ontological elements and has absolutely nothing to do with reality (Rade comment).
Not exactly. All explanations make presumptions as to what nomena transform to what ontological elements (and can be presumed to include ontological elements which are not actually bona fide nomena extant in reality) and are designed to make one’s expectations as close as possible to what actually occurs. My equation merely specifies the dynamic relationships which must exist between those ontological elements (or rather, must exist within an internally consistent interpretation of that explanation).
So, nomena have a place in space (x,y,z,tau), and that which is intermediate between nomena we call time ("t").
Nomena are merely denoted via numerical labels so that they may be identified with the ontological elements of that explanation: i.e., neither space nor time are included in the initial concept. Time arises because the nomena standing behind that explanation may not be fixed: i.e., what is being explained (the nomena) may increase and the explanation must handle that event. What is being explained is what is known (what I have called the past). Prediction is a result of the explanation, not the nomena. The nomena being explained are always finite but the explanation (as it includes prediction) is essentially infinite.

 

It follows from that view that the information to be explained consists of a collection of numerical labels identified with a numerical index t (indicating the order with which the information became available). Thus, what is to be explained is a finite list of collections of numbers. That is not an easy problem to solve (explaining some list of collections of numbers). Because human minds seem to spot correlations more easily in graphical representations than in mere lists of numbers, I take the step of representing those numbers graphically: i.e., plotted in a Euclidean space (a mental construct of convenience). In the same vein, “t” is a mental construct of convenience.
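Just to make that data structure concrete (a toy sketch of what I just described; the labels and index values are made up and mean nothing in themselves):

[code]
# "What is to be explained": a finite list of collections of numerical labels,
# indexed by t -- the order in which the information became available.
past = {
    1: (3, 17, 42),        # the first collection of labels
    2: (3, 17, 42, 8),     # the next collection; what is known may grow
    3: (5, 17, 42, 8),
}

# An explanation must supply expectations for collections that are NOT in
# this finite list -- that is the interpolation referred to below.
for t, labels in sorted(past.items()):
    print(t, labels)
[/code]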

 

It is the explanation which yields predictions, not the information being explained. As such, the explanation essentially amounts to interpolation for points in that Euclidean space (associated with interpolations of the index “t”). That is to say, the explanation essentially tells one what to expect (or presume) outside the finite collection of data on which the explanation is based.

 

There exists an infinite number of ways to make such an interpolation, and analysis of those possibilities yields the relationship defined by my fundamental equation (which essentially must yield a probabilistic result).

This is true for any observer Q, Q', Q'' no matter how fast or slow they are moving.
This is true for any observer Q,Q’,Q” no matter what nomena their personal explanations are based upon. There always exists an interpretation (an identification with internally defined ontological elements) which is 100% consistent with both my fundamental equation and the nomena upon which the explanation is based. This presumes the explanation being interpreted is itself internally consistent. Such an interpretation might be impossible if the explanation itself were not internally consistent.

 

I don’t think this is perfectly consistent with your current interpretation of what I am saying but I could be misinterpreting something you are saying.

 

Have fun -- Dick


Not exactly.... My equation merely specifies the dynamic relationships which must exist between those ontological elements (or rather, must exist within an internally consistent interpretation of that explanation).
Doctor Dick--this is excellent feedback as it provides me a new possibility to understand your FE (Fundamental Equation)--but I need to ask questions to see if I am on the correct path.

 

It is best for me to use some symbols to ask my questions. So, let N = the universe set of all Nomena and OE = the universe set of all Ontological Elements. Here is my current understanding:

 

1. Clearly, your FE says absolutely nothing "directly" about N (the set of all finite nomena).

 

2. Based on your above comment, your FE also says absolutely nothing about the "process of mental transformation" of any specific N to any specific OE. This is why you do not agree with my comment that "your FE is our interpretation of how N are transformed into OE".

 

3. Thus, your FE says nothing about human "perception"--that is, how the human mind grasps the reality of any specific N and then how it transforms that which is perceived (the set of all N experienced) and places numerical labels to form specific ontological elements.

 

4. So, based on your above comment...."[My equation merely specifies the dynamic relationships which must exist between those ontological elements]" your FE deals exclusively with the relationships between the set of all possible OE after the human has placed labels on the N upon which the reality of the OE rests.

 

This is a completely new understanding for me of your FE, and I agree with it completely, because, what it means to me is that your FE deals directly with only one aspect of "concept formation itself".

 

You see, for me, there are two aspects of the human mind "forming a concept". The first we can call the differentiation process, the second the integration process. That these two words (differentiation, integration) also are the foundation of the calculus should lead one to think that it may be possible to put the process of "concept formation itself" in mathematical form. How does this relate to your FE, which clearly is in mathematical form ?

 

Well, based on your answers above, it is my understanding that your FE puts into mathematical form the second part of the general process of concept formation--the integration part. When we say a human mind "forms a new concept from existing concepts", what this means to me is that the mind takes an existing OE (that is, an OE from the process of differentiation of nomena) and "integrates" it with other OE. Thus, your FE provides the "dynamic relationship that must exist between OE" during this aspect of "concept formation itself". That is, your FE puts into mathematical form the integration step of "concept formation itself".

 

But, of equal importance to me, your FE really says nothing at all about the first step in the process of concept formation, the differentiation (e.g., the process of putting numerical labels onto nomena, ---the transformation of a specific N --> specific OE).

 

If the above discussion is correct, it means we have no mathematical understanding of the differentiation process of concept formation--the process of placing the numerical labels onto nomena to form ontological elements. Would this be correct ?

 

Nomena are merely denoted via numerical labels so that they may be identified with the ontological elements of that explanation: i.e., neither space nor time are included in the initial concept.
How can space not be included in the initial concept of the nomena ? They must be located "somewhere" within the geometry of (x,y,z,tau) before they can be transformed into ontological elements by placing a numerical label on them. Or, are you saying that space and time are not included in the initial concept of each ontological element ?--this I would agree with.

 

...Time arises because the nomena standing behind that explanation may not be fixed: i.e., what is being explained (the nomena) may increase and the explanation must handle that event.
Yes, I agree, if the set of all nomena N were always fixed, time would not exist. You relate "time" to an increase in the "number" of nomena experienced between any two moments. So, at moment #1 say we have 10 nomena being explained, at moment #2 we have 25 nomena. Thus, time is that which is intermediate between these two moments--between the two moments of this event in change in number of nomena that must be explained. I agree with this because for me, time is nothing more than a number, the number that is intermediate between two moments. So, my definition of time includes your understanding of time as an "increase in number of nomena to be explained" (a type of motion). My definition of time also includes the more conventional definition dealing with motion (or rest) of nomena between two positions in three dimensional space (x,y,z). So, I have absolutely no problem with your definition of time as an increase in the number of nomena to be explained between two moments; such a definition is a subset of the more general one I use. The key for understanding time (for me) is "motion" (either as increase in number of a set of nomena to be explained) or (as movement of nomena in 3-dimensional space)--for without potential for motion of some thing (a specific nomena), time cannot exist. Clearly, human language needs to use two different words to define these two different conceptions of "time".

 

...What is being explained is what is known (what I have called the past). Prediction is a result of the explanation, not the nomena. The nomena being explained are always finite but the explanation (as it includes prediction) is essentially infinite.
Are there really a finite number of quarks and gluons and photons and electrons in the universe ? It is known that virtual quarks can appear at any time within the proton as a probability event--do we know there is a finite number of them ? Is the prediction of virtual quarks a result of the explanation of the event, or the nomena that stand behind the reality of quarks ? Is it correct to suggest that "explanation" is somehow more complex than that which stands behind what is being explained ?

 

Also, I would say it is more correct for you to claim that prediction is only "indirectly" a result of the nomena. To say that "what is being explained is what is known", means one must define "to know". From above, it seems logical to me that (for you) the concept "to know" is a two step process: (1) perception of nomena that are differentiated from each other and then transformed into specific ontological elements to which are given numerical labels, (2) the dynamic mental process via the Fundamental Equation of integrating specific ontological elements into new concepts. It would appear to be an axiom of logic that prediction via explanation is impossible if there is not first something that exists (nomena) to be explained.

 

...I don’t think this is perfectly consistent with your current interpretation of what I am saying but I could be misinterpreting something you are saying.
I think I better understand your approach--how I can integrate it into my philosophy of concept formation. This thread exchange has been very useful to me. Usually in such exchanges it is the one with more knowledge (you) that has much less to gain from the interaction. Have a very happy and healthy new year.

Rade, I would appreciate it if you would not use FE, N and OE for fundamental equation, nomena and Ontological elements as I am an old man very near to senility. Every time I see those initials I have to go back and find out what they mean. First of all, you must recognize that “the universe set of all nomena” and “the nomena you are explaining” are not at all the same thing. Neither are the “universe set of all ontological elements” and the ontological elements behind your world view the same thing.

 

One very important issue in my presentation is that both the nomena you are explaining and the “valid” ontological elements “behind your world view” are finite.

 

1. Clearly, your FE says absolutely nothing "directly" about N (the set of all finite nomena).
That is true. The nomena are completely undefined. My only point is that, since the number of nomena behind any explanation is finite, they may be labeled. (The fact that I use numerical labels actually says absolutely nothing about these nomena).
2. Based on your above comment, your FE also says absolutely nothing about the "process of mental transformation" of any specific N to any specific OE.
That transformation is very much a part of the explanation. Without an explanation, no such transformation is possible. My point is that the valid ontological elements (those required by all possible explanations of the “known” nomena) can also be labeled (totally independent of what that explanation might be) and that the explanation requires a mapping of those valid ontological elements into the set of nomena your explanation has been designed to explain.

 

In addition, there exists another set of ontological elements which are not “valid” (i.e., do not correspond to any nomena actually being explained) but are rather ontological elements required by your explanation. The number of elements in this set may be infinite as they are no more than figments of your imagination just as your explanation itself is a figment of your imagination.

 

It should be clear from the above that there exists another set of “presumed nomena” which are presumed by your explanation to stand behind those supposed ontological elements. Once again, this set may be infinite and there also exists a mapping between these presumed nomena and those invalid “ontological” elements. If your explanation is internally self-consistent, all those ontological elements must obey the same rules the valid ontological elements obey: i.e., there cannot exist any way of determining whether or not any specific ontological element is actually valid. Notice that this fact requires even a solipsistic explanation (which contains exactly zero valid ontological elements) to obey my equation.

This is why you do not agree with my comment that "your FE is our interpretation of how N are transformed into OE".
At the moment, I have absolutely nothing to say about how nomena are transformed into ontological elements as that is the central issue of artificial intelligence. Once you comprehend the validity of my fundamental equation, there arises a very interesting issue bearing upon that question which I suspect will lead to answers; however, I think that issue should not be brought up until both the validity of my equation and its underlying consequences are fully understood.
3. Thus, your FE says nothing about human "perception"--that is, how the human mind grasps the reality of any specific N and then how it transforms that which is perceived (the set of all N experienced) and places numerical labels to form specific ontological elements.
Again, “perception” is an aspect of your world view: i.e., it is an aspect of your explanation and simply does not exist without “an explanation”. Perception is a figment of your imagination just as the explanation itself is a figment of your imagination. Without an explanation perception is a meaningless concept.
4. So, based on your above comment...."[My equation merely specifies the dynamic relationships which must exist between those ontological elements]" your FE deals exclusively with the relationships between the set of all possible OE after the human has placed labels on the N upon which the reality of the OE rests.
I would agree with that except for one fact which seems to be outside your consideration. Without any labeled ontological elements, there exists no explanation. Without an explanation to examine, “after the human has placed labels on the nomena” has utterly no meaning and, once one has an explanation to examine, some human has already placed labels on those nomena. The issue you bring up just has no place in this discussion.
This is a completely new understanding for me of your FE, and I agree with it completely, because, what it means to me is that your FE deals directly with only one aspect of "concept formation itself".
My equation simply has nothing to do with “concept formation itself”. It has only to do with relationships internal to those concepts after they have been formulated.
You see, for me, there are two aspects of the human mind "forming a concept". The first we can call the differentiation process, the second the integration process. That these two words (differentiation, integration) also are the foundation of the calculus should lead one to think that it may be possible to put the process of "concept formation itself" in mathematical form. How does this relate to your FE, which clearly is in mathematical form ?
Yes I do think that the process you refer to can be put into mathematical form (it's called Artificial Intelligence by the way) and I have a strong suspicion that I know how to do that; however, as I said above, that issue should not be brought up until both the validity of my equation and its underlying consequences are fully understood.
If the above discussion is correct, it means we have no mathematical understanding of the differentiation process of concept formation--the process of placing the numerical labels onto nomena to form ontological elements. Would this be correct ?
I guess so. I would suggest that there may very well exist an infinite number of ways of accomplishing that result and “how it IS done” is really of no significance (how it can be done is another issue). It is the result of such a process which is important, not the process itself.
How can space not be included in the initial concept of the nomena ? They must be located "somewhere" within the geometry of (x,y,z,tau) before they can be transformed into ontological elements by placing a numerical label on them.
So I put the label “it” or perhaps “abadaba” on my first “nomena” of interest. Did that label require the existence of the geometry? Does the fact that I instead label it “24” make a difference?
Or, are you saying that space and time are not included in the initial concept of each ontological element ?--this I would agree with.
Oh, I thought you considered space and time fundamental. I consider them no more than a convenient way of organizing massive amounts of data.
Thus, time is that which is intermediate between these two moments--between the two moments of this event in change in number of nomena that must be explained.
You are presuming time is a continuous field. That means that you are presuming the existence of an opening for a label between any two valid ontological elements in order to insert a presumed element. Think about exactly what that means. That means you have an explanation in mind! As I have said, the explanation is a figment of your imagination; thus the space between any two labeled elements is also a figment of your imagination.
The key for understanding time (for me) is "motion" (either as increase in number of a set of nomena to be explained) or (as movement of nomena in 3-dimensional space)--for without potential for motion of some thing (a specific nomena), time cannot exist.
Motion is also a figment of your imagination. Nomena cannot “move” as motion requires persistent existence. Ontological elements can be persistent (persistence of elements requires definition of those elements; something nomena lack).
Are there really a finite number of quarks and gluons and photons and electrons in the universe ?
You are speaking from the perspective of “an explanation” and there is no limit upon the presumed entities in the universe. There is also no limit upon the possible nomena. The finite nature is a statement about the nomena which stand behind your explanation. That number cannot be infinite for the simple reason that having an explanation violates the definition of infinite. Essentially if the number of nomena upon which your explanation rests is infinite, no matter how many instances you have taken into account, you are not finished (there are more). I like to refer to an instance of a specific nomena as an event. If you know all the events of the universe, you must be “all-knowing”; however, even then, how can you be sure you know them all? It follows that we must presume their number is actually infinite.
It is known that virtual quarks can appear at any time within the proton as a probability event--do we know there is a finite number of them ?
It is known? Isn’t it rather “presumed”? You are working with “an explanation” here. One that does seem to be quite good; however, I am quite sure that you have not examined each and every possible event that explanation requires. Thus it is that you must accept the fact that such a presumption might be in error even if you have found no flaw. Any prediction at all is the consequence of an explanation. Without an explanation, you cannot make predictions. Furthermore, a prediction is an event you have not yet seen so you cannot possibly have seen all the events your explanation puts forth. If you have, the explanation is somewhat worthless.
Also, I would say it is more correct for you to claim that prediction is only "indirectly" a result of the nomena.
How can you possibly expect to predict “the undefined”? If it is “undefined”, what are you predicting? To predict something it has to exist at least twice: i.e., it has to have some kind of persistence and it has to be identified.
To say that "what is being explained is what is known", means one must define "to know".
Absolutely correct. That means “what is known” has been defined: i.e., an explanation must exist.
From above, it seems logical to me that (for you) the concept "to know" is a two step process: (1) perception of nomena that are differentiated from each other and then transformed into specific ontological elements, to which numerical labels are given,
To me, the concept "to know" is undefined in the absence of an explanation; however, once an explanation exists, it seems to me that the explanation can be seen as based upon something, and I would say that "having such a basis defined" amounts to exactly what we commonly call knowing what the explanation is based on. Sure, it's circular, but it seems to me such a relationship must exist. Can you conceive of an explanation which explains nothing?
(2) the dynamic mental process, via the Fundamental Equation, of integrating specific ontological elements into new concepts. It would appear to be an axiom of logic that prediction via explanation is impossible if there is not first something that exists (nomena) to be explained.
"Via the fundamental equation"? The fundamental equation has absolutely nothing to do with the process of inventing an explanation; it is a simple required constraint upon the outcome of that process.
I think I better understand your approach now--and how I can integrate it into my philosophy of concept formation.
I think it does say something about concept formation but I think of it as a rather broad overview which is capable of opening one’s eyes to things not ordinarily thought of as productive. Particularly with regard to artificial intelligence.

 

Have fun -- Dick


Hi Rade. You are getting closer. I thought I could also try and provide some clarification. DD already replied, but perhaps it is helpful if I comment in my own words also.

 

1. Clearly, your FE says absolutely nothing "directly" about N (the set of all finite nomena).

 

2. Based on your above comment, your FE also says absolutely nothing about the "process of mental transformation" of any specific N to any specific OE. This is why you do not agree with my comment that "your FE is our interpretation of how N are transformed into OE".

 

3. Thus, your FE says nothing about human "perception"--that is, how the human mind grasps the reality of any specific N and then how it transforms that which is perceived (the set of all N experienced) and places numerical labels to form specific ontological elements.

 

4. So, based on your above comment--"[My equation merely specifies the dynamic relationships which must exist between those ontological elements]"--your FE deals exclusively with the relationships between the set of all possible OE after the human has placed labels on the N upon which the reality of the OE rests.

 

That's right, it deals with relationships that are generally true for any set of defined ontological elements.

 

The relationships involved are generally true because they are consequences of the transformation process that turns "unknown reality" into a "discrete set of defined persistent objects".

 

That is why the concept of "noumena" is used here; it is important that the reader understands that any supposed behaviour of reality (in the noumena form) does not play any role here.

 

Just to get something to mentally grasp on, I tend to think of the noumena just as arbitrary data points, to which the defined ontological elements will be related in some way (depending on the exact definitions one happens to make).

 

DD tends to communicate in a fashion where those noumena are, prior to definitions, seen in the sense where each occurrence of something is treated as a completely new entity; i.e. nothing persists from one moment to the next. The specific definitions that get you to "defined ontological elements" are essentially saying which noumena at "t=1" are considered to be which noumena at "t=2". I.e. only after some definitions do you have ideas of objects that carry persistent identities.

 

I.e. prior to definitions, there's no sensical "motion" of anything either. Obviously, as no time-wise persistence of any object has been defined. But there are "changes" to the set of those arbitrary data points. "t" is simply the parameter keeping track of those changes analytically.
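Just to make that bookkeeping concrete, here is a toy sketch in Python (the names, data and matching rule are entirely my own illustration, not anything from DD's presentation). Its only point is that persistent identity is something the definitions add on top of the data; the raw points indexed by "t" carry no identity of their own.

    # Toy sketch only: "noumena" as structureless data points indexed by t.
    # Nothing here models reality; it illustrates that persistence is a
    # consequence of a chosen definition, not a property of the raw data.

    noumena_at_t = {
        1: [(0.0, 0.0), (1.0, 2.0)],
        2: [(0.1, 0.0), (1.0, 2.1), (5.0, 5.0)],  # new data simply accumulates
    }

    def assign_identity(points_t1, points_t2, tolerance=0.5):
        """A *definition*: decide which point at t=2 'is' which point at t=1
        (here by nearest-neighbour matching). Only after this step does it
        make sense to speak of an object persisting or 'moving'."""
        identified = {}
        for i, p1 in enumerate(points_t1):
            # squared distance from p1 to every candidate point at t=2
            d2 = [(p2[0] - p1[0]) ** 2 + (p2[1] - p1[1]) ** 2 for p2 in points_t2]
            best = min(range(len(points_t2)), key=lambda j: d2[j])
            if d2[best] <= tolerance ** 2:
                identified[i] = best  # "object i" is defined to persist as this point
        return identified

    # Under this particular definition two "objects" persist and the third
    # data point at t=2 is simply new; a different definition could carve
    # the very same data up differently.
    print(assign_identity(noumena_at_t[1], noumena_at_t[2]))

Of course this is absurdly simplified; its only purpose is to separate "changes in the data" (tracked by t) from "motion of defined objects" (a feature of the definitions).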

 

Whichever way you like to think of the noumena, what is actually important is that after the definitions, reality is treated in terms of defined objects, and the raw data that got you there did not contain explicit knowledge of what objects exist behind the data. The transformation process involved must obey the symmetries, which the fundamental equation is an expression of.

 

Those symmetries ensure that, for instance, the entire set of defined ontological elements must obey conservation of momentum. There will be many ways to define persistent elements, but if your definitions are without self-contradiction, the involved symmetries will make sure you will perceive objects that conserve their momentum (at least when the entire set is considered).

 

Most people tend to start their queries of reality by experimenting with (defined) objects, and then say those objects conserve their momentum. But of course when they do that, they have already defined what constitutes "an object", so they are not even talking about the same subject as the epistemological analysis, and thus they have no chance of understanding the symmetry arguments that force us to conceive/define objects in ways such that their momentum will be conserved.

 

I'm just saying all that because I'm starting to get the feeling that you are getting closer to the subject we are talking about, and perhaps you can see how frustrating it can get when people persistently skip the epistemological side of the subject entirely and go directly to some individual definitions they already have in their mind in an attempt to make some argument... ...that is already beside the point from the get-go.

 

But, of equal importance to me, your FE really says nothing at all about the first step in the process of concept formation, the differentiation (e.g., the process of putting numerical labels onto nomena, ---the transformation of a specific N --> specific OE).

 

Well it doesn't say anything about any specific solution. But the general constraints that are involved with any valid solution are surprisingly "specific" by themselves when you take them all into account, although their consequences can be embedded in a specific solution in many not-so-obvious ways.

 

How can space not be included in the initial concept of the nomena ? They must be located "somewhere" within the geometry of (x,y,z,tau) before they can be transformed into ontological elements by placing a numerical label on them.

 

He means that any sort of "ontological space of actual reality" is not included in the initial concept, in the sense that the x,y,z,tau space is an imaginary space keeping track of the data points, or after definitions, the defined ontological elements. But it is not by itself a feature of reality.

 

Under careful analysis, the x,y,z,tau space, and a self-coherently defined set of ontological elements, can contain many quite unintuitive features--for instance, exactly those features that make quantum mechanics and relativity seem very elusive to us. They seem elusive when the defined elements are believed to be actual objects with actual identity, rather than immaterial references to data points... ...or more properly, immaterial references that obey the symmetry arguments.

 

We have already had a few discussions about what the epistemological analysis tells us about quantum mechanics and the collapse of the wave function. I remember Qfwfq once commented that it's not enough to just argue that the collapse of the wave function is something that occurs in one's mind, because there still exist experiments (e.g. Bell experiments) that go against that idea.

 

That comment is true as long as people persistently want there to exist some set of actually real ontological elements with real persistent identity standing behind your solution to reality (and want those elements to exist inside an ontologically real space in some sense as well). It will turn out that no matter how you try to explain to yourself HOW the information from one observation affects the results of another observation, you are always trying to force in some idea of "actually real elements" that somehow mediate that information, in very non-realist ways (i.e. there always appear to be some odd idealistic features in your idea of reality).

 

Now the point of the epistemological analysis is in the symmetry arguments that give us means for creating sensical definitions, i.e. the immaterial references to arbitrary data points, and the analysis tells us exactly why those immaterial references behave in quantum mechanical ways. They must behave in quantum mechanical ways without any elements mediating any information between some other supposedly real elements. So the reason we can validly say at this point that the collapse of the wave function occurs in our mind due to the accumulation of new information is precisely because that result was not obtained as a function of ANY actual persistent elements at all.

 

You can think of this as first imagining just random noise, i.e. those "arbitrary data points". The symmetry arguments (the fundamental equation) tell us how that noise can be transformed into a set of defined elements in a self-coherent fashion, i.e. what sorts of features of that noise can come to be seen as "being caused by such and such persistent elements in motion". Those defined elements will behave in quantum mechanical fashion for the very reason that they arose from the epistemological symmetries. There is no reason to invent any means for mediating the information from one observation to another, as the quantum mechanical expectations are grounded in the epistemological means for getting to those object definitions, as opposed to some theoretical "nature of reality" standing behind our definitions.

 

That is why it is important that people don't think their specific definitions are also actual ontological elements. They claim they don't, but then they turn around and make an argument in terms of exactly those elements. And that eventually gives you the strange features of quantum mechanics. Note that it is the symmetry arguments that tell us why there's no reason to think our definitions are actual ontological elements, even while our definitions do produce valid predictions.

 

This is not to argue that reality is idealistic. It is just to say that we have created immaterial references to refer to unknown reality. An entirely different argument; I hope everyone understands that by now.

 

Yes, I agree, if the set of all nomena N were always fixed, time would not exist. You relate "time" to an increase in the "number" of nomena experienced between any two moments. So, at moment #1 say we have 10 nomena being explained, at moment #2 we have 25 nomena. Thus, time is that which is intermediate between these two moments--between the two moments of this change in the number of nomena that must be explained.

 

I always get the feeling that you are trying to find some sort of "nature of time" with that sort of definition of time (I don't know what "in between" means, etc.).

 

At any rate, what DD is concerned with is, first of all, "an analytical way to track changes to the noumena (arbitrary data points)"--that's the parameter "t"--and, second of all, to derive the validity of relativistic time relationships from epistemological fundamentals.

 

Neither topic touches the ontological nature of time per se.

 

My definition of time also includes the more conventional definition dealing with motion (or rest) of nomena between two positions in three dimensional space (x,y,z).

 

To have "motion" to noumena would be a misnomer, because in order to say how something moved, would require a definition for what constitutes that something. I.e. it would not be a noumena anymore.

 

You can think of the noumena as a set of elements that always exist for one single instant only. Under that idea, a single defined ontological element will be referring to a large number of noumena. (But be careful with that sort of idea also, as the point is not to argue about the "nature of noumena"; the point is in the requirement of self-coherence of the definitions of discrete/persistent elements.)

 

When DD said "increase of noumena", that simply refers to accumulation of data, i.e. the volume of the "arbitrary data points to be explained" is increasing. (Your explanation must explain all your past, not just the present moment; i.e. even if the ontological rules of reality were changing constantly, your explanation would have to define exactly how those rules have been changing so as to explain all of your "past data" too.)

 

So, I have absolutely no problem with your definition of time as an increase in the number of nomena to be explained between two moments; such a definition is a subset of the more general one I use. The key for understanding time (for me) is "motion" (either as an increase in the number of a set of nomena to be explained, or as movement of nomena in 3-dimensional space)--for without the potential for motion of some thing (a specific nomena), time cannot exist. Clearly, human language needs to use two different words to define these two different conceptions of "time".

 

I think here you are starting to slip a little bit. You can't take motion as fundamental, because then you are already thinking of some defined things. To get on track with the analysis, you should substitute "motion" with the more neutral "changes", as that includes the possibility that nothing ever actually moves, but rather that things come into existence and go out of existence.

 

This is important as long as we are talking about the "arbitrary data points that our world view explains". The data is simply being accumulated, i.e. new data simply "comes into existence". A specific solution can assign "motion" as a part of an explanation for "what that data supposedly means". (Of course a specific explanation rarely holds that the reality behind the data actually came into existence when the data was presented, but the analysis works with the undefined noumena, so in that sense new noumena simply accumulate and need to be explained.)

 

Are there really a finite number of quarks and gluons and photons and electrons in the universe ?

 

When he refers to a finite number of noumena, he means the volume of the accumulated "data to be explained" is always finite.

 

Quarks and gluons and photons are part of a specific explanation. The idea that reality is made of those things is a function of a finite amount of data that we have come to explain in those terms.

 

I'm writing this in a little bit of a hurry, but I hope this is helpful.

 

-Anssi


That's right, it deals with relationships that are generally true for any set of defined ontological elements.
Not generally true; I would say, absolutely true (so long as your world view is internally consistent).
The relationships involved are [absolutely] true because they are consequences of the transformation process that turns "unknown reality" into a "discrete set of defined persistent objects".
[--] is my edit!

 

Rather, it is because they are reflections of the fundamental ignorance lying behind that transformation process. In essence, the solution to a problem cannot contain information which is not available in the statement of the problem to be solved.

That is why the concept of "noumena" is used here; it is important that the reader understands that any supposed behaviour of reality (in the noumena form) does not play any role here.
Yes; they are undefined! Any characteristics we give them are entirely hypothetical: i.e., totally figments of our imagination.

 

Even the scientific community will agree they are hypothetical until experiment demonstrates the hypothesis is consistent with additional information. The problem they omit is the fact that the additional information they are referring to stands under exactly the same umbrella: i.e., it is also entirely hypothetical. The standard scientific position is entirely circular. The procedure used by the scientific community can not be defended and their results can only be defended via a holistic approach (which, by the way, is exactly what they use when they say, “look at the whole thing, it all fits together and makes sense”). The difference is, my approach is holistic from the very beginning. I work directly with the consequences of that fundamental ignorance and thus arrive at a defendable position.

Just to get something to mentally grasp on, I tend to think of the noumena just as arbitrary data points, to which the defined ontological elements will be related in some way (depending on the exact definitions one happens to make).
In my mind, arbitrary data points are still a little too defined as data tends to imply measurement of some kind. I would rather think of the thing in the form, “something happened; I know not what but it was something”. An “event” of some kind, but an event usually implies some “phenomena” and we can’t even define a phenomena; so we remove the “phe” and talk about the “nomena”.
DD tends to communicate in a fashion where those noumena are, prior to definitions, seen in the sense where each occurrence of something is treated as a completely new entity; i.e. nothing persists from one moment to the next. The specific definitions that get you to "defined ontological elements" are essentially saying which noumena at "t=1" are considered to be which noumena at "t=2". I.e. only after some definitions do you have ideas of objects that carry persistent identities.
Anssi is absolutely correct. How can one have identical nomena if one has utterly no idea as to what any specific "nomena" is?
I.e. prior to definitions, there's no sensical "motion" of anything either. Obviously, as no time-wise persistence of any object has been defined. But there are "changes" to the set of those arbitrary data points. "t" is simply the parameter keeping track of those changes analytically.
No matter what your explanation might be, if you have an explanation, you have identified these nomena in your head and thus can use the index t to refer to the specific set of nomena you have become aware of. (You should also realize that, at that moment, these nomena have been identified with ontological elements in your world view and thus the possibility of two being identical arises.)
Those symmetries ensure that, for instance, the entire set of defined ontological elements must obey conservation of momentum. There will be many ways to define persistent elements, but if your definitions are without self-contradiction, the involved symmetries will make sure you will perceive objects that conserve their momentum (at least when the entire set is considered).
Conservation of momentum arises when one chooses to represent the numerical labels as points in a graphical structure. It is then that ignorance of the actual position arises (there is no way of establishing exactly what numerical label should be used). This particular ignorance is called "shift symmetry", and shift symmetry is the symmetry which yields conservation of momentum.
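To make the connection concrete, here is a minimal sketch of the shift-symmetry argument in standard notation (the notation is mine and this is a condensed illustration, not the full derivation). Suppose the explanation assigns a probability P(x_1, ..., x_n) to the complete set of numerical labels, and suppose shift symmetry holds, i.e. the labels are arbitrary up to a common offset a:

P(x_1 + a, x_2 + a, \dots, x_n + a) = P(x_1, x_2, \dots, x_n) \quad \text{for every } a.

Differentiating with respect to a and setting a = 0 gives

\sum_{i=1}^{n} \frac{\partial}{\partial x_i} P(x_1, \dots, x_n) = 0,

and if one attaches a momentum to each argument in the usual quantum mechanical way (p_i proportional to \partial/\partial x_i acting on an amplitude whose squared magnitude is P), that sum rule is precisely the statement that the total momentum of the entire set is a conserved quantity. Nothing about the supposed nature of the elements enters the argument; only the ignorance of where to anchor the labels does.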
Most people tend to start their queries of reality by experimenting with (defined) objects, and then say those objects conserve their momentum.
Starting with defined objects is essentially a state of denial. One is denying the fact that their definition implies they know what they are talking about: i.e., my point is that they simply are not cognizant of the fact that they are working with unproved hypotheses. That is exactly the problem I was having with Qfwfq and others when I told them they were bringing too much baggage to the discussion.
But of course when they do that, they have already defined what constitutes "an object", so they are not even talking about the same subject as the epistemological analysis, and thus they have no chance of understanding the symmetry arguments that force us to conceive/define objects in ways such that their momentum will be conserved.
Exactly on point.
It will turn out that no matter how you try to explain to yourself HOW the information from one observation affects the results of another observation, you are always trying to force in some idea of "actually real elements" that somehow mediate that information, in very non-realist ways (i.e. there always appear to be some odd idealistic features in your idea of reality).
There is a second consequence of my work which says something very serious about the solutions to my fundamental equation. That is the fact that, no matter how you define these undefined nomena, so long as your world view is internally consistent, you will find your world view containing elements which obey exactly the same rules as those fundamental particles of modern physics.

 

I don’t mean that your world view must contain these elements but rather that it will contain the hypothetical objects which can be constructed of those fundamental elements. That is to say, objects which obey Newtonian physics on an anthropomorphic level, mixtures which obey the rules of chemistry and, likewise, entities which obey the rules of bio-chemistry and bio-physics. Now, does this mean that those objects exist? That water, carbon dioxide, dirt and trees exist? Well, that entirely depends upon what you mean by the word “exists”. If you simply mean that they will inevitably be part of your world view then they certainly exist. It does not mean that the nomena behind your world view are correctly defined by your world view; there might exist an alternate set of definitions which yield exactly that same result.

 

That has some profound consequences which are worth looking at; particularly if one is interested in AI.

 

Have fun -- Dick


Not generally true; I would say, absolutely true (so long as your world view is internally consistent).

 

Oops, sorry about the sloppy language. But indeed, by "generally true for any set", I meant "universally true for any set".

 

...

That has some profound consequences which are worth looking at; particularly if one is interested in AI.

 

It would be interesting to discuss that topic as well at some point.

 

-Anssi


you must recognize that “the universe set of all nomena” and “the nomena you are explaining” are not at all the same thing. Neither are the “universe set of all ontological elements” and the ontological elements behind your world view the same thing.
DoctorDick--thank you for your comments and many explanations. Both of your comments above are clear to me. What anyone explains can clearly only be a sub-set of the "universe sets" for both nomena and ontological elements. In fact, I find it completely in line with my thinking that one needs the concepts of "nomena" and "ontological elements" for the process of "concept formation".

 

One very important issue in my presentation is that both the nomena you are explaining and the “valid” ontological elements “behind your world view” are finite.
Yes, they must be, since they are a sub-set of the universal sets. It is the universal sets that are infinite.

 

That is true. The nomena are completely undefined.
It is good to see I have understanding on this point.

 

That transformation is very much a part of the explanation. Without an explanation, no such transformation is possible.
Here the transformation is to put a label on some nomena to transform them into ontological elements. OK, I can see how this process would be the first step of what you define as explanation.

 

In addition, there exists another set of ontological elements which are not “valid” (i.e., do not correspond to any nomena actually being explained) but are rather ontological elements required by your explanation. The number of elements in this set may be infinite as they are no more than figments of your imagination just as your explanation itself is a figment of your imagination.
OK, this makes sense; it is the same as saying your mind has the ability to place labels on nomena that have no relationship to that which exists.

 

It should be clear from the above that there exists another set of “presumed nomena” which are presumed by your explanation to stand behind those supposed ontological elements. Once again, this set may be infinite and there also exists a mapping between these presumed nomena and those invalid “ontological” elements. If your explanation is internally self consistent, all those ontological elements must obey the same rules the valid ontological elements obey: i.e., there can not exist any way of determining whether or not any specific ontological element is actually valid. Notice that this fact requires even a solipsistic explanation (which contains exactly zero valid ontological elements) to obey my equation.
OK, your explanation here requires four "sets": (1) nomena, (2) presumed nomena, (3) valid ontological elements, and (4) non-valid ontological elements. So (3) maps to (1) and (4) maps to (2), and your Fundamental Equation is applied to both types of mapping.
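To make sure I have that bookkeeping straight, here is how I picture it, as a toy sketch in Python (the names and the stand-in check are entirely my own illustration, not your notation; in particular, the constraint below is only a placeholder and does not pretend to be the fundamental equation):

    # Toy bookkeeping of the four sets and the two mappings, purely illustrative.

    nomena           = {"n1", "n2", "n3"}   # (1) undefined "somethings" actually being explained
    presumed_nomena  = {"p1", "p2"}         # (2) "somethings" the explanation merely presumes

    valid_elements   = {"n1": 101, "n2": 102, "n3": 103}  # (3) labels mapped from set (1)
    invalid_elements = {"p1": 201, "p2": 202}             # (4) labels mapped from set (2)

    def obeys_constraint(label_map):
        """Placeholder for 'obeys the required constraint'. Here it merely
        checks that every element received a distinct numerical label; the
        real constraint is the fundamental equation, which this sketch makes
        no attempt to model."""
        labels = list(label_map.values())
        return len(labels) == len(set(labels))

    # The same constraint is applied to both mappings, which seems to be the
    # point: nothing internal to the explanation can tell the two sets apart.
    print(obeys_constraint(valid_elements))    # True
    print(obeys_constraint(invalid_elements))  # True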

 

At the moment, I have absolutely nothing to say about how nomena are transformed into ontological elements as that is the central issue of artificial intelligence.
I think I am ready to move into this line of thinking, "how" nomena are transformed into ontological elements, for I find this process to be the first step in what I would define as "concept formation". That is, one cannot form a concept of any thing until one maps nomena by forming ontological elements--and as is known, the map is not the same as the territory--e.g., the nomena differ fundamentally from the ontological elements. Of course, this discussion would need to include two topics: (1) how nomena are transformed into valid ontological elements and (2) how presumed nomena are transformed into non-valid ontological elements.

 

Again, “perception” is an aspect of your world view: i.e., it is an aspect of your explanation and simply does not exist without “an explanation”. Perception is a figment of your imagination just as the explanation itself is a figment of your imagination.
Well, here I cannot agree, at least not given how I define "perception": "a group of sensations automatically retained and integrated by the brain of a living organism". For me, perception allows a brain (human, animal) to "be aware of" undefined entities (e.g., nomena). That is, perception is prior to any "explanation" of the undefined nomena--it is what must come before you can explain any thing. What would be a "figment of imagination" would be the process of mapping presumed nomena into non-valid ontological elements.

 

So, clearly, I have a road block in understanding--and it will require you to define for me exactly what you mean by the term "perception". I do not recall that you have presented such a definition.

 

Also, you appear to have a concept of perception that is in conflict with what you stated above---that:

At the moment, I have absolutely nothing to say about how nomena are transformed into ontological elements as that is the central issue of artificial intelligence.
For me, perception is exactly this process that you have nothing to say about--the "how" nomena are transformed into ontological elements. I do hope you can see my thinking on this issue and why it is important for my understanding of what you are talking about that you define what you think the word "perception" means and how it differs from the definition I have given above.

 

Without an explanation perception is a meaningless concept.
Here I cannot agree and clearly have no understanding what you claim, not as I defined perception above. I need you to define "perception" so I can see where we disagree.

 

My equation simply has nothing to do with “concept formation itself”. It has only to do with relationships internal to those concepts after they have been formulated.
Well, as you wish, but I think your fundamental equation has an important link to concept formation itself--for me, it is concept formation itself (minus the transformation process) put into mathematical language. I am just confused as to why you do not think this is so.

 

So I put the label “it” or perhaps “abadaba” on my first “nomena” of interest. Did that label require the existence of the geometry? Does the fact that I instead label it “24” make a difference?
My point was not that any "specific" label makes a difference, only that in general, before one can "label" anything, there must first be an (x,y,z,tau) geometry present.

 

 

Oh, I thought you considered space and time fundamental. I consider them no more than a convenient way of organizing massive amounts of data.
I would agree: first must be "data", which implies that first must be undefined nomena. I consider undefined nomena to be fundamental, not space or time.

 

You are presuming time is a continuous field.
? No, not at all. Time is not a "field" as the term is used in physics. Time is a type of number, and yes, it is continuous, but not because a field is continuous--rather because the "potential for undefined motion" is continuous. You do agree that there is undefined motion that is prior to time--correct?

 

That means that you are presuming the existence of an opening for a label between any two valid ontological elements in order to insert a presumed element.
No, not at all. Time is that which is intermediate in the transformation of nomena into valid ontological elements. A transformation requires a "before" the transformation and an "after" the transformation. Time is what is intermediate between these two moments. It is not a "field" into which one inserts ontological elements. What I find interesting here is that I can clearly see how my definition of time relates to your fundamental equation but I do not appear to be doing a good job explaining myself.

 

Nomena cannot “move” as motion requires persistent existence. Ontological elements can be persistent (persistence of elements requires definition of those elements; something nomena lack).
Well, I think I have a major roadblock here. You claim that ontological elements "have potential of motion" because they have "persistent existence"--which I would agree with. But I would disagree with the claim that "undefined nomena" lack persistent existence and hence motion. It is not enough that you claim that persistence "requires" definition--this is the worldview you hold, not a logical premise of the definition of the term persistence. It is completely logical that one could hold a valid worldview in which undefined nomena have persistence and motion; there is just no way one would "know" it.

 

How can you possibly expect to predict "the undefined"? If it is "undefined", what are you predicting?
? It was my understanding that one predicts the undefined nomena by transformation of them into valid ontological elements--am I missing something here?

 

That means “what is known” has been defined: i.e., an explanation must exist.
OK, this makes sense. So I can come to "know" undefined nomena by their transformation into valid ontological elements--the transformation process is the process of placing a definition onto the undefined.

 

To me, the concept "to know" is undefined in the absence of an explanation; however, once an explanation exists, it seems to me that the explanation can be seen as based upon something, and I would say that "having such a basis defined" amounts to exactly what we commonly call knowing what the explanation is based on.
OK--but is this not the same as saying that to claim you "know" something is to say you have a mental grasp of the existence of some undefined nomena upon which you then place a label and transform into an ontological element? And then, once this explanation of the undefined nomena exists (e.g., the ontological element), we say we "know what the explanation is based on"--that is, we know that some undefined nomena exist which form the "basis" of the explanation.

 

The fundamental equation has absolutely nothing to do with the process of inventing an explanation; it is a simple required constraint upon the outcome of that process.
I guess all I was trying to say is that where such a constraint exists, advantage can be taken of it, and clearly the fundamental equation is used to advantage in the process of forming any explanation--since it is the equation of explanation itself--correct?