# Some Subtle Aspects Of Relativity.

48 replies to this topic

### #18 Doctordick

Doctordick

Explaining

• Members
• 1092 posts

Posted 30 August 2008 - 10:21 AM

Sorry Will, I had no intention of insulting you; I was just trying to get you to examine the thought experiment I had set up. Since you have chosen not to pursue that issue, I will present an analysis that I think most of the people here have the insight to understand. I am of course talking about the two children ten feet apart playing catch with that magic time shifting ball.

The question was: if the ball contained a time travel device which moved it into the future one second for every foot it moved in the rest frame of the observer, how would it appear to behave? I will draw upon the old-fashioned Newtonian x,t diagram of the actual motion of the ball (the path it would follow if the time machine did not exist) at velocities of 10'/sec, 50'/sec and 1000'/sec. Ten feet per second is reasonable for two children playing catch, fifty feet per second is a little slow for a baseball pitcher but was easier to draw, and one thousand feet per second is pretty reasonable for a cannon shot. These are the blue, green and black lines seen at the bottom of the chart.[attachment=1664:1/8/9/2/2449.attach]
(Apparently you have to click on the chart to see it, and magnify it at least once to see the detail.)

Now, because of the time machine in the ball which will move it into the future, these will not be the observed paths. The time machine is set to advance the ball into the future at a rate of one second for each foot it travels in the rest frame of the observer. All one need do is add one second to the time specified for the ball's position for each foot it has moved since it started on its path. Since the distance between the two points of interest is ten feet, the ball will have advanced an extra ten seconds into the future during its travel over the specified ten-foot path. The lines for these apparent paths are also shown in the figure. The blue path is transformed into the aqua path, where the ball appears to cover the ten feet in eleven seconds, yielding an apparent velocity of about .9 feet per second. The green path is transformed into the mint green path, where the ball appears to cover the ten feet in ten point two seconds, an apparent velocity of about .98 feet per second. And, finally, the black path of the cannon shot is transformed into the gray path, where the ball appears to cover the same ten feet in 10.01 seconds, which corresponds to an apparent velocity of about .999 feet per second.
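The apparent velocities quoted above follow from simple arithmetic, and can be checked with a few lines of code. This is only a sketch of the stated rule (the names are mine): one extra second per foot traveled, over a ten-foot path.

```python
# Apparent velocity of the "time-shifting ball" over a 10-foot path.
# Rule from the thought experiment: the ball jumps one second into the
# future for every foot it travels in the observer's rest frame.

DISTANCE = 10.0  # feet between the two players

def apparent_velocity(true_velocity):
    """Apparent velocity (ft/s) for a given true velocity (ft/s)."""
    flight_time = DISTANCE / true_velocity        # Newtonian travel time
    time_shift = DISTANCE * 1.0                   # one second per foot moved
    return DISTANCE / (flight_time + time_shift)  # observed velocity

for v in (10.0, 50.0, 1000.0):
    print(f"true {v:7.1f} ft/s -> apparent {apparent_velocity(v):.3f} ft/s")
```

Since the time shift alone contributes ten seconds over the ten feet, no true velocity, however large, can push the apparent velocity past one foot per second.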

It should be quite obvious to the reader that this ball appears to live in a universe where the maximum allowed velocity is one foot per second. Let us consider the dynamics of this circumstance. If you think about the kinetic energy the children, the pitcher and the cannon have applied to the ball, you should also comprehend that, to the observers playing with the ball, its apparent mass is rising precipitously (though its apparent velocity has changed by very little, the momentum to be transferred goes out of sight quickly). In fact the behavior of the ball is quite analogous to common relativistic behavior in almost every way. I find it to be a very interesting phenomenon, though no physicist I have ever talked to has shown even the slightest interest. Forty-five years ago, when I first raised this issue with a professor while I was a graduate student, I was told, “well of course you are right, but don't show it to any of the other students because it will just confuse them.” Being an idiot at the time, I did indeed keep it to myself.

One question which comes up with the above analysis is, “if we advance into the future because we are moving in that rest frame, why do we still move into the future when we aren't moving in that rest frame?” Well, suppose we are moving in a fourth dimension, which we are unaware of, at some fixed velocity. In fact, suppose we are moving at a fixed rate through that four dimensional universe and that, when we are moving in the rest frame, that motion is only a component of our actual motion: i.e., the path length which yields that advance in time is the actual four dimensional path in that four dimensional Euclidean universe. It turns out that the correction required by that simple hypothesis yields consequences exactly (and I mean exactly) the same as those required by “special relativity”.
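A minimal numerical check of that hypothesis (my own sketch, in units where the fixed four-dimensional speed is c = 1): if an entity's total speed through the Euclidean (x, tau) space is fixed, then its rate of advance along tau for a given x-velocity reproduces the special-relativistic time-dilation factor exactly.

```python
import math

C = 1.0  # the fixed speed through the four-dimensional Euclidean space (units of c)

def tau_rate(v):
    """Advance along tau per unit coordinate time for x-velocity v:
    if the total 4D speed is fixed at C, the tau component is sqrt(C^2 - v^2)."""
    return math.sqrt(C**2 - v**2)

def sr_proper_time_rate(v):
    """Special relativity's time-dilation factor: d(tau)/dt = sqrt(1 - v^2/c^2)."""
    return math.sqrt(1.0 - (v / C)**2)

for v in (0.1, 0.5, 0.9, 0.99):
    assert math.isclose(tau_rate(v), sr_proper_time_rate(v))
    print(f"v = {v:4.2f} c -> dtau/dt = {tau_rate(v):.4f}")
```

The two expressions are algebraically identical when C = c, which is the sense in which the fixed-speed Euclidean picture and the special-relativistic time-dilation formula coincide.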

Now this is a rather funny circumstance. Einstein has set up a four dimensional representation of reality where entities (and that includes us observers) move along paths in that space. A space which is rather strange in that its four components do not have the same qualities. One has this quality that it is “imaginary”. He has to do this in order to make the measurements we perform come out consistent with the experimental results. (It's a tough world and sometimes one must go to extremes to get results which agree with experiment.) You should note that any rational analysis of the circumstance still requires a parameter to specify exactly where we are on that path (something the physicists don't like to talk about). Here I have added a simple dimension orthogonal to x, y and z but otherwise identical in nature and found that the consequences are exactly the same as Einstein's rather more complex scheme. I used to think simplicity was of value in physics.

We do, nonetheless, have a difficulty with this picture. Why can't we detect this fourth dimension? Here quantum mechanics comes to the rescue. Every experiment performed by every scientist who has ever lived has been done with equipment constructed from mass quantized entities. Not only that, they all work in laboratories constructed entirely from mass quantized entities. Suppose the kinetic energy of an entity due to its motion in this fourth dimension is what we call “mass”? If that is the case, then, since all our equipment is built from entities with quantized mass (including the laboratories themselves), momentum in the tau direction is almost universally quantized. The dimension canonically conjugate to that momentum would be the dimension within which that momentum is defined and, by virtue of the uncertainty principle, that dimension would become absolutely undetectable. Gee guys, that seems to me to be a rather obvious solution to the difficulty inherent in this picture.

Doesn't the way this all fits together bother you at all? I am afraid that professor I spoke to forty years ago was right; “it will only confuse them!” Doubt in one's beliefs often leads to confusion.

There is another rather important issue embedded in this perspective. The perspective (except for the addition of this fourth axis) is totally in accordance with the old Newtonian view of reality. This is very interesting because of another problem which arose after Newton proposed his theory of dynamics. In Newton's picture, one was dealing with what he defined as inertial frames (essentially defined by the fact that F=ma was to be valid; if that equation is not valid, you're not in an inertial frame). Of particular interest is what in the good old days used to be called pseudo forces. These are forces which actually don't exist but are in fact mere consequences of the fact that you are not in an inertial frame: for example, the force which tips over your coffee sitting on the dash when you take your car through a sharp turn. There is no real force there; it is no more than the fact that your coffee would continue in a straight line if the friction with the dash weren't there.

There are all kinds of pseudo forces which can be generated by the simple fact of working with “the wrong frame of reference”. The one fact which is almost a universal indicator that one is dealing with a pseudo force is that, in all pseudo forces, the acceleration is exactly the same for all masses: i.e., the apparent force is always exactly proportional to the mass of the object. That occurs for the very simple reason that no acceleration is actually taking place; all apparent acceleration is due entirely to the acceleration of the frame of reference. Two of the most common pseudo forces well known to any physicist are the centrifugal and Coriolis forces, both of which are entirely due to doing one's calculations in a rotating coordinate system.
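The mass-proportionality signature is easy to make concrete. A toy sketch (names and numbers are mine) of the centrifugal force in a rotating frame, F = m·omega²·r: the force scales with mass, so the apparent acceleration is identical for every object.

```python
OMEGA = 2.0   # angular velocity of the rotating frame (rad/s)
RADIUS = 1.5  # distance from the rotation axis (m)

def centrifugal_force(mass):
    """Apparent centrifugal force in the rotating frame: F = m * omega^2 * r."""
    return mass * OMEGA**2 * RADIUS

for m in (0.1, 1.0, 10.0):
    a = centrifugal_force(m) / m  # apparent acceleration experienced by the object
    print(f"mass {m:5.1f} kg -> apparent acceleration {a:.2f} m/s^2")
# Every mass shows the same apparent acceleration (omega^2 * r):
# the hallmark of a pseudo force, since the "acceleration" belongs to
# the frame, not to the objects.
```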

Now gravity has exactly the same quality: the apparent force (the force causing the acceleration) is always exactly proportional to mass. This thought leads any thinking scientist to the idea that gravity is a pseudo force; that gravity exists for the simple reason that the geometry used to calculate the paths of entities under the influence of gravity simply is not the proper inertial frame. Much work involving subtle changes in the geometric representation of physics problems was applied in an attempt to find a geometric transformation which would yield gravity as a pseudo force. The search was an utter failure and, in the mid-1700s, a French mathematician, Pierre-Louis Moreau de Maupertuis, proved that no such transformation existed. Physicists gave up on the prospect of finding that geometry, but their work did not go unappreciated; many very important relationships had been discovered while that range of possible transformations was studied.

The search had led down paths which have become central to almost all of modern physics. Out of this work we get many of the mathematical relationships accepted as fundamental to modern classical mechanics (Lagrangians, Jacobians, Hamiltonian mechanics, just to name a few). In addition this work led directly to the early formulation of quantum mechanics, so it was certainly not a waste of time. You may ask, why does Dick bring up this esoteric garbage? Well, the answer is actually quite simple: if you examine Maupertuis' proof, you will discover that one of the central issues was the fact that objects with different velocities followed different paths. Now, if you look at what I have just presented, you will discover that, since I have identified mass with momentum in that unobservable tau direction, mass is no longer a quality of our entity but rather a statement of its dynamics (as determined by its total energy and its kinetic energy, [imath]E_k = E-mc^2[/imath]). Everything in my picture travels at exactly the same velocity of 1/K (K being a free parameter which sets the scale of the dynamic evolution, a scale established entirely by the analyst performing the analysis). Gee whiz; there are no “different velocities” and Maupertuis' proof entirely fails.

Add to this a rather common position held by the physics community when I was a graduate student (a position which, I believe, is taken as fact today), and a somewhat important issue is raised. According to established authority (see Adler, Bazin and Schiffer, "Introduction to General Relativity", McGraw-Hill Co., New York, 1965, p. 7), Einstein proved that "a reduction of gravitational theory to geodesic motion in an appropriate geometry could be carried out only in the four-dimensional space-time continuum of [Einstein's] relativity theory". If that statement is true then Einstein certainly has strong support that his picture is worth the effort; but the real question is: is it true? I think I have certainly reopened the question and the issue should certainly be reexamined, especially in view of the problems Einstein's GR has with quantum mechanics.

I have no such problems and, in spite of Erasmus00's conclusion that my picture (though correct) is useless, I would suggest that it is a very valuable attack and well worth the effort to understand. General relativity is a rather straightforward issue in my picture and I will present it to anyone who has the fortitude to follow my exposition. The approach is not at all per the current Einsteinian catechism but is rather quite solidly based on the classical physics approach to the analysis of phenomena.

Thanks to you all -- Dick

### #19 AnssiH

AnssiH

Understanding

• Members
• 828 posts

Posted 31 August 2008 - 02:13 PM

From what I can tell, that seems like a very reasonable/useful way to plot information. Especially since it ties "relativistic time evolution" and quantum mechanics under a single paradigm rather nicely. I do not understand why people are so hesitant to look at it. Could it be they are a little bit emotionally attached to some personal ontological view, which they see as contradictory to this treatment? (which it may or may not be)

I don't claim to understand it completely but at least I'm working on it. It is a little bit frustrating that my math skills are not up to par to work on it faster (especially with having limited time to spend on the issue). It'd be nice to see other people, more competent in math, seriously taking a look at it. It's nice to see Bombadil doing it, and I'm sorry to see Erasmus getting frustrated and leaving.

Erasmus, if you are still reading this, you need to excuse DD a little bit; he's frustrated by always receiving objections that are based on some feature found in some specific worldview/model. If you can appreciate the fact that any information can be mapped/modeled in many different ways, you can probably also understand that a feature that is implied by one of all those possible mapping methods cannot really be considered "the way reality is".

About the scale invariance that was causing some confusion: it refers to a scale invariance that exists in your model of reality as a whole. In a sense, you could say that if the whole of reality - absolutely everything in it - were to scale up ten times suddenly, it would be undetectable. I.e. while your model of reality does assign numerical values to describe the relationships between things, scaling absolutely everything changes absolutely nothing.

So, in an ontological sense we don't know the "size of reality" (it is not even a sensible notion), which is the specific ignorance that makes any worldview scale symmetrical (as long as it does not contain an indefensible assumption about some specific scale being ontologically true).

Do you guys see how this all relates to completely general properties that arise from those forced symmetries (of which "scale" is just one) in our models of reality? I.e. properties that are not really true "because reality is like that" but rather because "we are mapping information in such and such ways"?

-Anssi

### #20 ughaibu

ughaibu

Creating

• Members
• 1471 posts

Posted 31 August 2008 - 03:35 PM

Euclidean hyperspace is incoherent by definition, what advantage is accrued by one incoherent model over another?

### #21 Erasmus00

Erasmus00

Creating

• Members
• 1561 posts

Posted 01 September 2008 - 02:53 PM

If that statement is true then Einstein certainly has strong support that his picture is worth the effort; but, the real question is: is it true? I think I have certainly reopened the question and the issue should certainly be reexamined; especially in view of the problems Einstein's GR has with quantum mechanics.

But your picture is the same picture as Einstein's, simply rearranged! You haven't reopened the question, you've rephrased the solution.

And you've never given any convincing reason that reparameterizing GR in your way does anything to help with quantum mechanics! It's not even obvious to me that there is a simple way to recast Einstein's field equations, because in your reparameterization the stress energy tensor isn't a tensor but transforms non-trivially observer to observer.

Also, you still haven't responded to my question- if the universe were governed by scale invariant equations, then it should be scale invariant- how come none of the fundamental forces demonstrate this feature? Why isn't the CMB background truly scale invariant instead of merely approximately scale invariant? etc.

If you can appreciate the fact that any information can be mapped/modeled in many different ways, you can probably also understand that a feature that is implied by one of all those possible mapping methods, cannot really be considered "the way reality is".

I agree- which is why Einstein's relativity parameterization is the one I prefer. Einstein/Minkowski SR can be set in a coordinate invariant way- the "maps" as it were can be completely abstracted out. In Dick's formalism, this is simply not so- everything is phrased in terms of things that depend on the coordinates.

It refers to scale invariance that exists to your model of reality as a whole. In a sense, you could say that if the whole reality - absolutely everything in it - was to scale up ten times suddenly, it would be undetectable. I.e. while your model of reality does assign numerical values to describe the relationships between things, scaling absolutely everything changes absolutely nothing.

I'm aware of what scale invariance means. The thing is, scale invariance implies testable predictions that we simply don't see around us. None of the physics we've already nailed down is scale invariant.
-Will

### #22 Doctordick

Doctordick

Explaining

• Members
• 1092 posts

Posted 02 September 2008 - 08:31 PM

But your picture is the same picture as Einstein's, simply rearranged!

That is simply a false statement. The experimental results verifying the transformation equations of special relativity may be identical to Einstein's predictions, but the “picture” which yields this as a rational attack is quite violently different from his. Try to find a representation of “tachyons” in my picture. As I say, you just have that brain clamp on too tight and just cannot see my picture.

None of the physics we've already nailed down is scale invariant.

That is a very simple issue: none of the physics which has been “nailed down” takes into account the entire universe. It is all compartmentalized; cast in a form which requires one's understanding of the rest of the universe to be correct.

Have fun -- Dick

### #23 Doctordick

Doctordick

Explaining

• Members
• 1092 posts

Posted 02 September 2008 - 09:30 PM

Sorry Anssi, I never got a notification that you had posted. But you come through as usual and I appreciate it.

Euclidean hyperspace is incoherent by definition, what advantage is accrued by one incoherent model over another?

I have no idea as to what you have in mind. I believe the term “hyperspace” is a science fiction term. I think you will have to define what you mean by the term. As far as I am concerned, I am using a simple four dimensional Euclidean space and there is nothing incoherent about it at all.

Have fun -- Dick

### #24 Erasmus00

Erasmus00

Creating

• Members
• 1561 posts

Posted 02 September 2008 - 11:36 PM

That is simply a false statement. The experimental results verifying the transformation equations of special relativistic results may be identical to Einstein's predictions but the “picture” which yields this as a rational attack is quite violently different from his. Try and find a representation of “tachyons” in my picture. As I say, you just have that brain clamp on too tight and just can not see my picture.

Compare your statement that everything moves at c to Einstein/Minkowski's statement that everything has a four velocity with magnitude c. The idea that mass is the conjugate to tau can be thought of as a rearrangement of

$d\tau^2 = dt^2-dx^2$
$m^2 = E^2 - p^2$

Traditionally, energy is conjugate to time and momentum to x, in your picture you rearrange both sides, but it amounts to the same identities!

One could take mass as the operator conjugate to tau and write something like

$m\psi = i\hbar\frac{d\psi}{d\tau}$

And I'm pretty sure that this doesn't yield much by way of physics; though I've only played around with it for a few minutes, it doesn't seem to yield anything useful (mostly because mass is a central charge and so doesn't traditionally have a nice operator associated with it, unlike energy, which has the Hamiltonian).
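The rearrangement itself is easy to verify numerically. A sketch of my own (units with c = 1): the energy-momentum identity holds for any velocity in exactly the way the proper-time identity holds for any displacement.

```python
import math

def check_identities(m, v):
    """In units with c = 1: E = gamma*m, p = gamma*m*v, and m^2 = E^2 - p^2,
    mirroring dtau^2 = dt^2 - dx^2 with dx = v*dt."""
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    E, p = gamma * m, gamma * m * v
    dt = 1.0
    dx = v * dt
    dtau = math.sqrt(dt**2 - dx**2)        # proper-time identity
    assert math.isclose(m**2, E**2 - p**2)  # energy-momentum identity
    assert math.isclose(dtau, dt / gamma)   # dtau/dt = 1/gamma, same content
    return E, p, dtau

for v in (0.1, 0.6, 0.99):
    E, p, dtau = check_identities(1.0, v)
    print(f"v={v}: E={E:.3f}, p={p:.3f}, dtau={dtau:.3f}")
```

Both identities are the same Minkowski quadratic form applied to different four-vectors, which is the sense in which the "rearrangement" adds no new physical content.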

Also, you cannot have tachyons in Einstein's picture either, without violating the rule forbidding particle world-lines outside the light cone. You can't put tachyons into your picture without violating your everything-moves-at-c rule.

Also, tachyons are, in field theory, now believed to be a sign that you've done a calculation wrong (you've picked the wrong vacuum). Hence, there may be no need to fit them into either theory.

That is very simple issue, none of the physics which has been “nailed down” takes into account the entire universe. It is all compartmentalized; cast in a form which requires ones understanding of the rest of the universe to be correct.

So you can dismiss out of hand any experimental evidence that disagrees with you? Please, name a single piece of evidence that the universe is scale invariant. It's a bold prediction of your theory that seems to fly in the face of everything that has ever been measured, and saying "it's because the measurement is compartmentalized" seems a cop-out. If the universe is scale invariant shouldn't we have some observable evidence? Shouldn't the CMB be scale invariant? What measurement COULD be done to demonstrate this scale invariance?
-Will

### #25 AnssiH

AnssiH

Understanding

• Members
• 828 posts

Posted 04 September 2008 - 10:35 AM

if the universe were governed by scale invariant equations, then it should be scale invariant- how come none of the fundamental forces demonstrate this feature? Why isn't the CMB background truly scale invariant instead of merely approximately scale invariant? etc.

You have misunderstood what exactly is claimed to be scale invariant. Also your comment implies you are looking at this as if it told you what really governs the universe. You should not.

What DD's epistemological analysis (which is the root of this relativity conundrum) tells you is rather what sorts of laws are common to all reasonable predictive models of any datastream whose meaning is fundamentally unknown. The interesting bit is that those (entirely general) laws appear to be almost exactly what our best physical models take as the laws of nature.

Let me expand on that before I get to scale invariance. Consider a newborn baby, or any sort of mechanical learning system, that does not have any information about what reality is like or what to expect from reality. That means, while the learning system is receiving information from its sensory system, it has absolutely no idea how to interpret any of it in any meaningful way. Nothing can even be "perceived" without being able to interpret that data stream. I.e., it is receiving a datastream whose meaning is fundamentally unknown.

Furthermore, its survival depends on it being able to predict that datastream. You could say it needs to be able to model reality - or how it believes reality is behind that data stream - so as to be able to anticipate dangers that lie in the future. In simple terms, you could say that it tacks identity onto certain patterns and comprehends them as "objects", supposing they are governed by whatever laws explain its perception of how they move.

Now, since the meaning of the datastream is fundamentally unknown, there exist many valid ways to tack identity onto those patterns and to make up appropriate laws that explain why such "objects" do what they do. At the end of the day, many features of such a worldview will be defined through circular logic, which is completely unproblematic as far as the predictive powers go. Each worldview simply handles the same reality with different sorts of terms. (In fact this can be seen as a rather useful feature, as it yields semantics, but let's not get into that now.)

This much was rather obvious to me before I talked to DD, but I myself had no idea how to even begin to figure out what sorts of mechanisms allow us to start building a worldview from raw data. I just knew it must be possible one way or another since I know nothing about reality for certain, but still I can interpret my sensory data in useful ways.

Now, the important bit with DD's analysis is that there also exists certain features that are common to any possible "identity tacking" scheme. These are the symmetries that DD refers to, and the epistemological analysis merely investigates the logical consequences of those symmetries, with rather surprising results. At least initially surprising; it actually does make a lot of sense once you wrestle it in.

One of those symmetries is scale symmetry, and don't fail to notice that it does indeed refer to scale symmetry in the assignment of labels to ontological elements in the x,tau,t-space. I.e. if the raw data is mapped onto the x,tau,t-table, and your problem is then to come up with an explanation as to what that data means, that problem is completely unchanged if you scale the mapped data one way or another. (If you are wondering, any specific mapping method is a function of your explanation and vice versa, but whatever they are, the scale symmetry exists.)

So the scale symmetry does NOT refer to some specific feature of some specific worldview being scale symmetric. It is instead analogous to the entire universe being scaled one way or another. And obviously that would not be observable, as you and all your measuring devices are also part of the universe.

I agree- which is why Einstein's relativity parameterization is the one I prefer. Einstein/Minkowski SR can be set in a coordinate invariant way- the "maps" as it were can be completely abstracted out. In Dick's formalism, this is simply not so- everything is phrased in terms of things that depend on the coordinates.

Well, a different mapping implies a different reality, and some ontological interpretations make certain logical consequences more obvious than others. People certainly tend to base their arguments on how they believe reality exists. For example, if you took relativistic spacetime as ontologically true, you may be compelled to investigate whether it can curve into itself in such a sense that one could travel backwards in time. Otherwise you probably would not bother with such ideas.

-Anssi

### #26 Erasmus00

Erasmus00

Creating

• Members
• 1561 posts

Posted 04 September 2008 - 11:39 AM

So the scale symmetry does NOT refer to some specific feature of some specific worldview being scale symmetric. It is instead analogous to the entire universe being scaled one way or another. And obviously that would not be observable, as you and all your measuring devices are also part of universe.

It would be observable! All you have to do is make measurements at different scales! As you have said, saying the universe is scale invariant is akin to saying it has no scale- which in turn implies measurements at different scales should be the same. They are not. Imagine using a small telescope to look out at the sky, and then to build another telescope exactly the same but twice as large and using that to look out at the sky. What should the data look like?

Further, one can check whether the EXPLANATIONS we have are scale invariant (which is fairly straightforward). Despite the fact that Dick insists his master equation is the backbone of any flaw-free explanation, none of the scientific theories we have are scale invariant. This leaves us with two choices:

1. Dick is right, and the other theories wrong.

2. the "standard" theories are right, and Dick has erred somewhere.

To decide between 1 and 2, the simplest way should be to figure out predictions made by the two theories and, where they differ, do some experiments. I contend scale invariance is one place.

In regards to Einstein's spacetime, I fear you are once more missing or talking around my point. Assume the following:

Something exists in need of describing

Now, there are two ways we can describe this object- the first is to drop down a bunch of arbitrary labels all around it and come up with relationships between the arbitrary labels.

The second is to (using the miracle of mathematics) figure out properties of your object that are INDEPENDENT of the labels. i.e. no matter how we change the labels (which are arbitrary), these properties are fixed.

Which is better? Dick is doing the first, Einstein the second. I contend the second is better.
-Will

### #27 AnssiH

AnssiH

Understanding

• Members
• 828 posts

Posted 04 September 2008 - 12:20 PM

It would be observable! All you have to do is make measurements at different scales! As you have said, saying the universe is scale invariant is akin to saying it has no scale- which in turn implies measurements at different scales should be the same. They are not. Imagine using a small telescope to look out at the sky, and then to build another telescope exactly the same but twice as large and using that to look out at the sky. What should the data look like?

That has got nothing to do with scale invariance in the reference labels to ontological elements in the x,tau,t-space. If you have a model of reality mapped onto an x,tau,t-space, then scaling the whole thing means your measurement stick gets scaled as well, i.e. it still measures everything at exactly the same lengths it measured before.

To scale a telescope but nothing else, is just like scaling only that measurement stick but nothing else. Of course it would measure things differently, but that is only one very specific thing inside your worldview being scaled, not the "whole worldview".

Further, one can check if the EXPLANATIONS we have are scale invariant (which is fairly straightforward). Despite the fact that Dick insists his master equation is the backbone of any flaw free explanation, none of the scientific theories we have are scale invariant. This leaves us with two choices:

1. Dick is right, and the other theories wrong.

2. the "standard" theories are right, and Dick has erred somewhere.

Or 3. the scale invariance doesn't refer to any single relationship being scale invariant, but rather to your whole self-coherent set of explanations being scale invariant if you just managed to scale them all the same way (to put it like that).

It's much like having your self-coherent set of scientific theories in an algebraic equation. They have certain relationships to each other which make it impossible to just take one term and change it willy-nilly. Instead you'd need to carefully change other terms in the equation accordingly too, to keep the whole thing valid. Or there can be certain operations that don't change the relationships in any way, such as multiplying each and every term by some amount X.
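That algebraic analogy can be made concrete with a toy example of my own (not part of DD's formalism): scale every quantity in a toy "universe" by the same factor, and every internal ratio - the only kind of measurement available from inside - is unchanged.

```python
def ratios(lengths):
    """All lengths expressed relative to the first one -
    the only 'measurements' available from inside the universe."""
    base = lengths[0]
    return [x / base for x in lengths]

universe = [1.0, 3.5, 42.0, 1e9]           # sizes of everything, rulers included
scaled = [10.0 * x for x in universe]      # scale absolutely everything by 10

assert ratios(universe) == ratios(scaled)  # no internal measurement can tell
print("internal ratios identical:", ratios(universe))
```

Scaling only one item (a lone telescope, say) while leaving the rest fixed would of course change the ratios, which is the distinction being argued over here.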

In regards to Einstein's spacetime, I fear you are once more missing or talking around my point. Assume the following:

Something exists in need of describing

Now, there are two ways we can describe this object- the first is to drop down a bunch of arbitrary labels all around it and come up with relationships between the arbitrary labels.

The second is to (using the miracle of mathematics) figure out properties of your object that are INDEPENDENT of the labels. i.e. no matter how we change the labels (which are arbitrary), these properties are fixed.

Which is better? Dick is doing the first, Einstein the second. I contend the second is better.

I'm afraid I don't really understand what you are saying... Hmmm, or perhaps I do. The thing is that "what constitutes an object" is part and parcel of our worldview. The latter (Einstein) view is essentially assuming that how we perceive reality is how the ontological reality really is (which is not unproblematic, since you can make a multitude of assumptions regarding what happens beyond your observations).

The former (DD) view is investigating properties that are found in our model of reality, but not necessarily in reality itself (since we probe reality according to an interpretation that is a function of what we believe reality is like). It could probably be deemed useless if it didn't yield any interesting results. But I do think it is rather interesting that it yields relativistic time evolution for entities when they are just defined in a specific way (which is forced upon us if we are to remain objective). That by itself is an explanation of how relativistic time evolution arises as a feature of a world model, without any knowledge about whether or not it is a feature of reality itself. Without any requirement of it being a feature of ontological reality as is. How's that for surprising?

-Anssi

### #28 Erasmus00

Erasmus00

Creating

• Members
• 1561 posts

Posted 04 September 2008 - 11:08 PM

AnssiH, to avoid talking past each other, please let me know which of the following steps you disagree with

1. The universe is scale invariant
2. This implies the universe has no scale
3. This implies that any scale in a measurement comes from the device
4. Therefore, scale dependence in a measurement should be a function of the scale of the device
-Will

### #29 AnssiH

AnssiH

Understanding

• Members
• 828 posts

Posted 06 September 2008 - 02:50 PM

AnssiH, to avoid talking past each other, please let me know which of the following steps you disagree with

1. The universe is scale invariant
2. This implies the universe has no scale
3. This implies that any scale in a measurement comes from the device
4. Therefore, scale dependence in a measurement should be a function of the scale of the device
-Will

I have a lot of trouble interpreting you unambiguously

I mean, I could agree with #1 if it meant "the universe - when taken as a whole - is scale invariant", i.e. when not referring to the idea that "everything in the universe looks the same at all distances", which seems to be what you are thinking of (judging from your earlier posts). I believe here's exactly where the miscommunication lies.

I understand that something being scale invariant would, in the usual physics context, mean that it would behave exactly the same way when built in different sizes (measured against lightspeed or whatever we'd want to use to define size).

That is a somewhat different issue than the x,tau,t-mapping being scale invariant. Since the x,tau,t-mapping contains your entire (modelled) universe, scaling the entire thing changes nothing in it, much like scaling an entire "spacetime block" (representing the whole universe) would change nothing in it; nothing inside that spacetime block could determine whether that spacetime was scaled or not.

I.e. scaling your entire universe in its x,tau,t-mapping doesn't mean its size would go from 10 billion lightyears to 100 billion lightyears, since you'd be scaling that light as well, so to speak.
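Anssi's point can be illustrated with a small numerical sketch (an editorial illustration, not part of the original thread; the "universe", the positions, and the ruler below are all hypothetical). Every report an internal observer can make is a dimensionless ratio against their own measuring device, so scaling the entire description at once changes nothing observable:

```python
import numpy as np

# Hypothetical "universe": a handful of labelled positions plus the length
# of the ruler used to measure them. All names here are illustrative.
rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 10.0, size=5)
ruler_length = 1.0

def measured(positions, ruler_length):
    # Everything an internal observer can report is a position expressed
    # in units of the ruler, i.e. a dimensionless ratio.
    return positions / ruler_length

# Scale the *entire* description -- positions and ruler alike.
scale = 7.3
scaled = measured(positions * scale, ruler_length * scale)

# The internal, dimensionless description is unchanged.
assert np.allclose(measured(positions, ruler_length), scaled)
print("internal description unchanged under a global rescaling")
```

This is exactly the sense in which "scaling the whole x,tau,t-mapping" is unobservable from inside: the scale only reappears if something outside the description (an unscaled ruler) is smuggled in.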

-Anssi

### #30 Erasmus00

Erasmus00

Creating

• Members
• 1561 posts

Posted 06 September 2008 - 11:25 PM

I.e. scaling your entire universe in its x,tau,t-mapping doesn't mean its size would go from 10 billion lightyears to 100 billion lightyears, since you'd be scaling that light as well, so to speak.

You are scaling the light's speed; you can't scale particles, as they are points. But as you are defining it, "invariance" is tautological (defined in such a way as to always be true).

Any statement that has real content makes a prediction. So, let's try this: does the universe have an associated length scale in DD's model?
-Will

### #31 Doctordick

Doctordick

Explaining

• Members
• 1092 posts

Posted 07 September 2008 - 04:40 AM

Hi Anssi, you are without a doubt a unique person. As far as I am aware, you are the only person who actually comprehends the problem I have attacked. If there is anyone else out there who believes they understand that problem, I wish you would chime in. For others who are reading this, if you look back at the last few posts by Erasmus00 you will find him using the word “theory” in reference to my work. This is clear evidence that he has utterly no idea of what Anssi and I are talking about. Nothing I put forth constitutes a “theory”; my fundamental equation is a direct deduction from definitions which impose no constraint whatsoever on the data referred to by the numerical labels [imath]x_i[/imath]. The result is a fact, not a theory.

I had deduced this equation in the middle sixties but, as Anssi has commented, “it could probably be deemed useless if it didn't yield any interesting results”. It was roughly ten years before I was able to pull down anything of any value. Since then a lot more stuff has managed to drip out of that relationship. The fact is, the equation is true; the problem is, how to interpret our world view such as to make it in compliance with the equation (what I have proved is that such an interpretation always exists, not that it is easy to find that interpretation). That fact brings up another interesting observation. I am sure many of you have heard that old philosophic question, “how do I know you are experiencing the same thing when you say you see 'green' as I experience when I look at something green?” The answer lies in the fact of the existence of that interpretation I am talking about. The question itself actually becomes immaterial.

Since there always exists an interpretation of any flaw free explanation which satisfies my equation, it follows that no matter what internal world view you have developed to explain your experiences or what world view I have developed to explain my experiences, there always exists a mapping of your view into my view (so long as both views are flaw free). It seems to me that it is exactly those aspects of our world view which are sufficiently flaw free to be identified with solutions to that equation which we seem to find almost universal agreement: i.e., our minds have managed to perform that mapping.

If we can map our experiences into a world view, we can certainly map communication efforts into that world view, relating another's experience with our own. That mapping (which amounts to understanding a language) would be most accurate with regard to truly flaw free explanations; this translates into science: physics, chemistry, biology and so on down the line. Issues commonly referred to as “reality” as opposed to “delusions”.

But back to Erasmus00 and his complaints. I will begin with the one which seems to bother him so much.

AnssiH, to avoid talking past each other, please let me know which of the following steps you disagree with

1. The universe is scale invariant
2. This implies the universe has no scale
3. This implies that any scale in a measurement comes from the device
4. Therefore, scale dependence in a measurement should be a function of the scale of the device

He seems to have left out a very important step: #5, the fact that the scale of the device is a function of the scale of the universe. I think his problem is that he cannot conceive of a circumstance where nothing he knows is of value: i.e., he cannot even comprehend the problem of starting with nothing. As I have said many times, we all have a complete world view years before we are confronted with logical analysis; we have clearly made millions upon millions of assumptions long before we get the first inkling of what it is we are working with. It is the very height of arrogance to hold that world view as “well thought out”.

I have deduced this equation and proved that any and all explanations can be interpreted as solutions to this equation. That being the case, perhaps the most important aspect of understanding this thing is to conceive of the simplest physical system which is represented by such an equation as, once we understand that system, we essentially understand all systems. First, I will point out that the geometry of the system is entirely Euclidean by construction. That is quite nice as even grade school students have a certain understanding of Euclidean systems.

The “interaction” terms in the equation are twofold: first, there is the Dirac delta function, which can only have an impact in the limit where the difference in position of two reference points goes to zero; and second, there is the requirement of antisymmetric exchange of the solution [imath]\vec{\Psi}[/imath] insofar as the valid elements are concerned (the fictional elements required by the explanation can be of either symmetry). Both of these factors can be neglected if we look at the equation for sufficiently small density of these reference points: i.e., sufficiently small scale analysis. If we do that, the equation becomes exactly what a competent physicist would write down as a quantum mechanical representation of a gas consisting of massless infinitesimal quantized entities interacting only through contact interactions which arise through the quantum mechanical exchange of those fictional elements.

And finally, since the tau axis is totally fictional, the mechanism for calculating the expectations must include a mechanism for eliminating tau from the final result (no valid expectations can depend upon the tau index). The simplest rule for eliminating tau is to require momentum in the tau direction to be quantized thus yielding infinite uncertainty in the actual value of tau. The final system is a rather simple system and yet it fulfills the need that it requires the fundamental equation. This yields us a physical system into which any and all explanations may be mapped. Not a theory; but a fact. All that is left is to analyze what our expectations should be for such a system. I am afraid that is not a trivial issue as my fundamental equation is a many body equation and thus quite difficult to solve in general (one might say impossible); however, I have discovered a number of significant solutions for specific circumstances.

Any statement that has real content makes a prediction.

And my work makes a number of predictions. The first one is laid out in detail for anyone who chooses to examine it.

it turns out that the equation of interest (without the introduction of a single free parameter: please note that no parameters not defined in the derivation of the equation have been introduced) is exactly one of the most fundamental equations of modern physics.

$\left\{-\left(\frac{\hbar^2}{2m}\right)\frac{\partial^2}{\partial x^2}+ V(x)\right\}\vec{\phi}(x,t)=i\hbar\frac{\partial}{\partial t}\vec{\phi}(x,t)$

This is, in fact, exactly Schroedinger's equation in one dimension.

So I have made a prediction, Schroedinger's equation is a useful way of obtaining approximate solutions to the behavior of any single element of any possible universe; all you need is the proper interaction function which yields your expectations for the rest of the universe. Which, by the way, is exactly the assumption made by any scientist who goes to use Schroedinger's equation.
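As an editorial aside (not part of DD's derivation), the one-dimensional equation quoted above is easy to check numerically in the free case: a plane wave [imath]\phi=e^{i(kx-\omega t)}[/imath] with [imath]\omega=\hbar k^2/2m[/imath] satisfies it with V(x) = 0. The constants and grid below are arbitrary choices for the illustration:

```python
import numpy as np

# Illustrative check: a free plane wave phi = exp(i(k x - w t)) with
# w = hbar k^2 / (2 m) satisfies the one-dimensional Schroedinger
# equation with V(x) = 0, verified by finite differences.
hbar, m, k = 1.0, 1.0, 2.0
w = hbar * k**2 / (2 * m)

def phi(x, t):
    return np.exp(1j * (k * x - w * t))

x = np.linspace(-5, 5, 2001)
t, dt = 0.3, 1e-6
dx = x[1] - x[0]

# Second spatial derivative by central differences.
d2phi = (phi(x + dx, t) - 2 * phi(x, t) + phi(x - dx, t)) / dx**2
lhs = -(hbar**2 / (2 * m)) * d2phi                      # V(x) = 0 here
rhs = 1j * hbar * (phi(x, t + dt) - phi(x, t - dt)) / (2 * dt)

assert np.allclose(lhs, rhs, atol=1e-4)
print("plane wave satisfies the free Schroedinger equation")
```

Of course this only exercises the standard equation itself; whether it follows from DD's fundamental equation without free parameters is the thread's point of contention.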

So, let's try this: does the universe have an associated length scale in DD's model?

Just more indication that Erasmus00 has utterly no concept of what I am doing. He clearly thinks I am putting forward a theory of some kind. I am not! Any “associated length scale” must arise in the explanation of the universe and it is the explanation I am modeling, not the universe! At this moment, what is astounding is the number of relationships taken as fundamental statements about reality that are actually no more than relationships embedded in my fundamental equation: i.e., things that could not possibly be otherwise. To date, I have found no solution to my equation which does not have a counterpart in scientific conclusions already accepted as true: i.e., it turns out that these things are inevitable consequences of defined concepts and not statements about reality at all.

Have fun -- Dick

### #32 AnssiH

AnssiH

Understanding

• Members
• 828 posts

Posted 07 September 2008 - 08:50 AM

Any statement that has real content makes a prediction.

DD already replied, but I would just like to quickly reiterate - since some confusion seems to exist - that there are no direct statements made about ontological reality at all.

Perhaps you are wondering, what point is it to talk about mere logical construction?

Well, the point is that those aforementioned symmetries (of our worldview mapping) have surprising logical consequences, completely independent of the "content" (or "true meaning" or "source") of the raw sensory data. The "predictions" that DD refers to are exactly those consequences, and indeed they have been found to be true so far...

...and more to the point, since we are talking about mere logical construction, it should be possible to prove unequivocally that those consequences will always be found to be true, since their source does not lie in the content of that raw data, but in the symmetries in the mapping of that data.

I think you may have misunderstood what was meant by scale invariance in this context, so perhaps you can go back to wherever that unfortunate misinterpretation caused the discussion to go haywire.

Thanks,
-Anssi

### #33 Bombadil

Bombadil

Questioning

• Members
• 180 posts

Posted 07 September 2008 - 02:28 PM

There have been so many posts since I last posted that there is more here I could respond to, but since I have some questions that I haven't had a chance to post, I'm just responding to the answers that you gave to my last questions.

That is one of my major complaints about Einstein's picture. He also sees objects as following paths through his four dimensional “space-time” geometry and uses the concept of time to talk about evolution of structures along those paths. In essence he simultaneously uses time as a coordinate of his geometry and as a concept of position along those paths (essentially he is confusing two very different issues).

Then in a way you have separated these into two axes instead of one. That is, your tau axis is a coordinate of the geometry while your t axis is just a position on the paths that the elements take?

Perhaps I should not have put that issue forward in this thread as I am afraid it is confusing you (that is why I changed the title of the post to “Answers to Bombadil's questions!”). The central issue here is that my equation “is just not valid” if the total sum of all momentum of all entities being described by that equation does not vanish. However, in spite of that fact, if I do have a specific solution [imath]\vec{\Psi}[/imath] to that equation, quantum mechanics does provide me with a mathematical mechanism for transforming that solution to a solution where that sum is not zero (this is subtly a very different issue). Please note my definition of momentum; there is no alpha term in the definition. A secondary issue here is that, in deriving Schroedinger's equation, I am clearly stating that Schroedinger's equation is an approximation to my fundamental equation and is thus only valid when those approximations are valid. It is a well known fact that Schroedinger's equation is not in conformance with special relativity, so, technically speaking, any shift in reference frame can be seen as invalidating Schroedinger's equation: i.e., the mathematical mechanism discussed above does not technically give the correct answer; it only yields an approximately correct answer.

I’m not sure of how you are saying that the transformation to a system where the differentials do not vanish is performed. How I understand it is, we start with the fundamental equation:

[imath]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) \right\}\vec{\Psi} = K\frac{\partial}{\partial t}\vec{\Psi}[/imath]

We then do the substitution: [imath]\Psi (x) = \psi (x)Ae^{i\frac{ Kx}{\hbar}}[/imath]
and substitute this into the fundamental equation. After substituting this in the fundamental equation, and some rearrangement, we transform the fundamental equation into the equation:

[imath] \left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j}\beta_{ij}\delta(x_i -x_j)\delta(\tau_i - \tau_j) + \sum _i \frac{iK}{\hbar}\alpha_{ix} \right\}\vec{\Psi} = K\frac{\partial}{\partial t}\vec{\Psi} [/imath]

Which would be the modified version of the fundamental equation for when momentum does not vanish.
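Whether or not the full many-body substitution above is done correctly, the one-dimensional mechanism behind it can be checked symbolically (an editorial sketch, assuming the momentum operator is [imath]-i\hbar\,\partial/\partial x[/imath], consistent with DD's definition): multiplying a solution by the phase [imath]e^{iKx/\hbar}[/imath] shifts its momentum by the constant K.

```python
import sympy as sp

# Sketch: the phase substitution Phi = psi * exp(i K x / hbar) shifts the
# momentum operator -i hbar d/dx by a constant K. One dimension only.
x, K, hbar = sp.symbols('x K hbar', real=True)
psi = sp.Function('psi')(x)

Phi = psi * sp.exp(sp.I * K * x / hbar)
p_Phi = sp.expand(-sp.I * hbar * sp.diff(Phi, x))

# -i hbar d/dx acting on Phi gives exp(iKx/hbar) * (-i hbar psi' + K psi):
# the original momentum operator acting on psi, plus an extra K * psi.
expected = sp.exp(sp.I * K * x / hbar) * (-sp.I * hbar * sp.diff(psi, x) + K * psi)
assert sp.simplify(p_Phi - expected) == 0
print("momentum shifted by K under the phase substitution")
```

This is the sense in which the transformed function has "the same expectations except that the momentum does not sum to zero", as DD puts it in his reply below the quote.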

The scaling factor occurs directly as a consequence of the fact that the “form of the equation” must be exactly the same in all three relevant frames even when each is moving with respect to the others. The Dirac delta functions are of no consequence in that analysis as they are not influenced at all by a scale change. Omitting them, the remainder of the equation has the time evolution of an expanding sphere. Actual events described by that equation must conform to that self-same expanding sphere. The only way that can be true is if observers who go to use that equation in those different frames use different coordinate systems. The solution to that problem is exactly the same solution required to make Maxwell's equations valid in everyone's frame (they must all see a flash bulb as producing an expanding sphere). This problem was solved years before Einstein published his theory, and these are exactly the changes which his theory was concocted to explain. The scaling solution is simple high school algebra and I will show it to you if you wish. The problem is exactly the same in four dimensions as it was in three dimensions.

I am wondering just how have you defined the movement of a frame of reference.

I am interested in the derivation of the necessary transformation although I’m still not sure of how the problem comes about. Perhaps understanding how the derivation is performed would be helpful. If you think that it would be too confusing or not useful until the reason that it is needed is understood you can wait to put it up, otherwise it may be helpful to see the derivation.

I don't understand your question at all. I have already defined the mass and energy operators in terms of the fundamental equation itself. They are not added to the equation, they are already there; I have done no more than define what aspects of the equation I am referring to when “I” use the terms “momentum”, “mass” and “energy”.

How I’m understanding it is that, due to the possibility that some frames exist that have a nonzero sum for the momentum operator, the Lorentz transformation becomes necessary. So what I’m asking is: are we going to consider the possibility of frames where the mass or energy operator doesn’t vanish?

That falls directly out of my fundamental equation and is essentially a differential description of that expanding sphere I just mentioned above.

I suspect that this is where part of the problem may be coming from. How I understand this is that it is the description of an expanding sphere, and that the problem is that the derivation of this equation is unchanged no matter whether we start with nonzero momentum or with a frame where the momentum vanishes, so what we need is a transformation that leaves the equation with a constant expansion rate when moving from a frame without momentum to one with momentum.

If you have any questions, post to that thread, with a quote of the particular post you are referring to, and I will do my best to make the issues clear (there are a lot of things I could put quite differently). That is one of the reasons I think I am right; there are six ways from Sunday to attack all these things.

I haven’t gotten a chance to read through much of those although much of this I have read before, so I don’t think that I will run into any problems. If I do I’ll post to that thread.

### #34 Doctordick

Doctordick

Explaining

• Members
• 1092 posts

Posted 08 September 2008 - 10:14 AM

Then in a way you have separated these into two axes instead of one. That is, your tau axis is a coordinate of the geometry while your t axis is just a position on the paths that the elements take?

I would agree with that except for one point; there is no “t axis” there as t is nothing more than an evolution parameter. The parameter t is only used as an axis in that (x,t) plot which displays how x changes with t (a side issue having to do with motion analysis); it is not an axis in the basic data display.

I’m not sure of how you are saying that the transformation to a system where the differentials do not vanish is performed.

What you are referring to is not a “transformation to a system where the differentials do not vanish”. It is a transformation on the solution $\vec{\Psi}$ which yields a new function $\vec{\Phi}$ with exactly the same expectations except for the fact that the momentum does not sum to zero. Actually, this has utterly nothing to do with “transformation to a system where the differentials do not vanish”. As I said originally, these are two subtly different issues. Now, what you are proposing (changing the form of the equation) is a possibility; however, I don't think you have done it correctly and, even if done correctly, such a thing would be of little value as we are talking about a many body equation here and probably the only way to crank out a solution would be to first transform it back to the original form (you want to keep things as simple as possible here).

This new solution is still a solution in the frame of reference where the momentum sums to zero; it is not the correct solution in a frame of reference where the momentum does not sum to zero. It is merely a new representation of the original solution and is not, in a straightforward way, the same solution one would obtain were the fundamental equation solved in that other frame. That is a question of the symmetries involved, as the elements involved in the two solutions would have to be different. Remember, the equation is actually valid only for the entire universe. Sorry for the confusion.

I am wondering just how have you defined the movement of a frame of reference.

Remember, we are talking about our expectations; we are not talking about reality. This fact leads to the different frames of reference. The issue is that the observer may not be aware of the entire universe and/or its impact upon his observations: i.e., if distant entities are to be ignored as insignificant, it is entirely possible that two observers may use different coordinate systems. More important, in comparing their coordinate systems, they may not agree that the sum of all momentums vanishes in the other's coordinate system. At the same time, in order to obtain valid results, they must both operate with the fundamental equation under the constraint that the sum of the momentum of the universe vanishes. It turns out that this means their coordinate systems will not be the same.

I am interested in the derivation of the necessary transformation although I’m still not sure of how the problem comes about.

The issue is the form of the fundamental equation. If you can ignore the Dirac delta functions (and, since they are point interactions it is always possible to analyze the situation at a small enough scale to make them dynamically irrelevant) then the fundamental equation reduces to a collection of independent wave equations each propagating with a velocity of 1/K. That makes that propagation an expanding sphere. Everyone using that equation must see that same result. This is exactly the same conundrum presented by the Michelson-Morley results and the solution is exactly the same: their coordinate systems must display exactly the same transformations Lorentz and Fitzgerald hypothesized.

Now the Michelson-Morley results had to do with light, and they showed that the Maxwell equations required exactly that same contraction due to the electromagnetic interactions essential to the stable structure of measuring instruments. However, they lost to Einstein because their theory allowed the possibility that other (non-electromagnetic) interactions were not required to yield this contraction. This led Einstein to the hypothesis that space itself was the source of this apparent contraction (the velocity of light IS a constant; certainly something which cannot be proved). My equation, which is true by construction, requires that all interactions yield this exact contraction: i.e., all observers observing any solution must see that unconstrained expansion of $\vec{\Psi}$ as a sphere expanding at a velocity of 1/K. It is just a simple high school algebra problem.
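The "high school algebra" being alluded to can be checked numerically (an editorial sketch, in units where the propagation speed 1/K is 1; the boost velocity and sample points are arbitrary): a Lorentz boost carries every event on an expanding sphere to an event on an expanding sphere in the new frame.

```python
import numpy as np

# Sketch: under a Lorentz boost along x with velocity v, events on the
# expanding sphere x^2 + y^2 + z^2 = t^2 (propagation speed = 1) land on
# an expanding sphere r'^2 = t'^2 in the boosted frame.
v = 0.6
gamma = 1.0 / np.sqrt(1.0 - v**2)

rng = np.random.default_rng(1)
t = 2.0
# Random points on the sphere of radius t at time t.
d = rng.normal(size=(100, 3))
xyz = t * d / np.linalg.norm(d, axis=1, keepdims=True)

# Boost along x: t' = gamma (t - v x), x' = gamma (x - v t); y, z unchanged.
tp = gamma * (t - v * xyz[:, 0])
xp = gamma * (xyz[:, 0] - v * t)
r2p = xp**2 + xyz[:, 1]**2 + xyz[:, 2]**2

# In the boosted frame the events still satisfy r'^2 = t'^2.
assert np.allclose(r2p, tp**2)
print("expanding sphere is an expanding sphere in every boosted frame")
```

The algebraic content is just the invariance of [imath]t^2 - x^2 - y^2 - z^2[/imath] under the boost, which is the same fact that resolves the Michelson-Morley conundrum.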

How I’m understanding it is that due to the possibility that some frames exist that have a nonzero sum for the momentum operator the Lorenz transformation becomes necessary. So what I’m asking is, are we going to consider the possibility of frames where the mass or energy operator doesn’t vanish.

Again, the issue is that the fundamental equation is only valid when these things vanish; however, once we have a solution, it is quite easy to transform that solution to a solution where these things do not vanish. For convenience, we can set our zero energy levels to different values; physicists do it all the time.

I suspect that this is where part of the problem may be coming from. How I understand this is that it is the description of an expanding sphere, and that the problem is that the derivation of this equation is unchanged no matter whether we start with nonzero momentum or with a frame where the momentum vanishes, so what we need is a transformation that leaves the equation with a constant expansion rate when moving from a frame without momentum to one with momentum.

The derivation of the equation presumes the entire universe is included and the requirement that those differentials must vanish is required by symmetry issues. The problem is that our expectations may be based on less than the entire universe. If we presume we are including the entire universe then two people who are making the same assumption with different basic information must also be using different measures (it must be part of their different basic information). The measures must obey the Lorentz-Fitzgerald contraction.

Good luck; I hope I have cleared some things up.

Have fun -- Dick

Edited by Doctordick, 01 February 2016 - 03:57 AM.