
Redshift z



Modest, I think your discrepancy with cc arises from the fact that you interpret the data after the tweaking has been done, and therefore you are already biased towards a certain result.

 

Look at what I said:

 

If the lines represent the proper distance then two spaceships at the red galaxy traveling the same speed in opposite directions (one toward green and the other toward blue) would both reach their destinations in the same amount of time. This is the kind of distance cosmologists mean when talking about the shape of the universe and in this case the universe appears flat. Standard cosmology in proper cosmological distance would be option B.

 

If the lines represent the light travel time distance which is to say that light took the same amount of time to get from green to red as it did red to blue (where red to blue happened later than green to red) then I believe C would be the correct rendition.

 

Depending on what the lines are supposed to represent, or you might say, depending on how one defines distance, either option B or C would be appropriate. This is exactly the opposite of being "biased towards a certain result".

 

By the way, if someone can support an assertion with evidence then the assertion is not bias. "Bias" is an opinion that is not based on evidence. I have explained my reasoning and I can provide evidence to support it.

 

Try looking at the graph as if you knew nothing about LCDM or DM and DE. I know it must be hard, but those are the rules of the game as I see it.

 

It is not possible to pick A, B, or C without using some type of model to reach the conclusion. The lines in CC's diagram represent distances which cannot be directly measured with our current level of technology. One must, therefore, interpret data such as redshift, angular size, and luminosity and convert that data into a proper distance. This cannot be done without a model.

 

Try explaining what observations fit diagram A. Why is that the correct diagram?

 

I can give you direct observations that indicate space is flat,

 

This begs the question, how do we know the universe is flat? So much of our inventory of the universe depends on satisfying this condition.

 

It’s actually pretty clever. As with any geometry project, we need triangles. If we take an enormous triangle and add up all the angles, we should get 180 degrees if the universe is flat, less than that if it’s negatively curved (middle surface in diagram above), and more than 180 degrees if it’s positively curved (the sphere at top).
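For a geodesic triangle on a surface of constant curvature K, this test can be written as one standard angle-excess relation (quoted here just for reference, with A the area of the triangle):

 

[math]\alpha + \beta + \gamma = \pi + K A[/math]

 

so K = 0 gives exactly 180 degrees, K < 0 gives less, and K > 0 gives more.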

 

Now, all we need is a triangle. A big one.

 

It turns out that the Cosmic Microwave Background provides a great triangle. Using the Earth as one apex, and measuring the largest fluctuations in the CMB, if the universe is flat, those fluctuations should be 1 degree across. A negatively curved universe would give smaller angular sizes, and a positively curved universe would yield larger ones.

 

It turns out that the largest fluctuations measured in the CMB from the WMAP satellite are, in fact, 1 degree across.
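To make the geometry of that test concrete, here is a minimal numerical sketch. The ruler length, distance, and curvature radius below are arbitrary illustrative numbers (not WMAP values); the point is only the ordering of the three results.

 

[code]
import math

def subtended_angle(length, distance, geometry, R=10.0):
    """Angle (radians) subtended by a ruler of known length at a known distance.

    geometry: 'flat', 'spherical', or 'hyperbolic'; R is the curvature radius.
    All lengths are in the same (arbitrary) units.
    """
    if geometry == 'flat':
        return length / distance
    if geometry == 'spherical':
        return length / (R * math.sin(distance / R))   # sin(x) < x  -> larger angle
    if geometry == 'hyperbolic':
        return length / (R * math.sinh(distance / R))  # sinh(x) > x -> smaller angle
    raise ValueError(geometry)

# Arbitrary numbers: a ruler of length 1 seen from distance 10,
# with curvature radius 10 in the curved cases.
for geom in ('hyperbolic', 'flat', 'spherical'):
    theta = subtended_angle(1.0, 10.0, geom)
    print(f"{geom:11s}: {math.degrees(theta):5.2f} degrees")
[/code]

 

A known physical scale subtends a smaller angle than expected in a negatively curved space and a larger one in a positively curved space; that ordering is the entire content of the test.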

 

I welcome you to do the same for option A.

 

~modest


Ok, but linearity is being misapplied here. :)

 

Touche :)

 

 

Interesting, isn't it, that such a remarkable fine-tuning would exist in nature

 

Indeed :agree:

 

I'm sure you've seen supporters of intelligent design use the critical density as an example of god fine tuning the universe. While there is, of course, no evidence of that it does indicate the problem. Why is the density so very near the critical density? I hope one day we figure it out, because "it just is" I find unsatisfactory.

 

And that the manifold within which that transpires is Euclidean, despite the universe expanding out of control. Convenient too I might add.

 

It's actually not convenient. Most physicists would have preferred a closed universe. If you've ever read Hawking's A Brief History of Time, for example, he hopes that it ends up being closed. That would have avoided problems of infinity with an open universe.

 

The point to make is that without DE and CDM, and considering expansion according to the pre-1998 critical Friedmann model, non-linearity was clearly manifest.

 

I don't follow. The SN-1a tests measure the Omegas.

 

In other words Hubble's constant was no longer constant, it was no longer a law.

 

In what model specifically is the Hubble parameter constant over time?

 

I don't think that has ever been the case, certainly not something that happened post-1998. The Hubble parameter is:

 

[math]H \equiv \frac{\dot{a}}{a}[/math]

 

Where [math]a[/math] is the scale factor, a function of time, and [math]\dot{a}[/math] is its time derivative, so H generally changes with time in an expanding or contracting universe.
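As a concrete check, in a flat, matter-dominated Friedmann model the scale factor grows as [math]a \propto t^{2/3}[/math], so

 

[math]H = \frac{\dot{a}}{a} = \frac{2}{3t}[/math]

 

which falls with time even though the universe keeps expanding.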

 

There was no longer a one-to-one relation between the velocity at which various galaxies were receding from the Earth proportional to their distance from the observer.

 

This, again, would depend on how you define distance. The "current" velocity *is* proportional to the "current" distance. By "current distance" I mean the distance between the earth now and the galaxy now. We don't see the galaxy as it exists now and it doesn't make much sense to talk about the distance or velocity between earth now and a galaxy a billion years ago. The current velocity is:

[math]v=H_0D_{now}[/math]

as given halfway down this page, which means the velocity *is* proportional to distance. This didn't change post-1998. Neither did redshift become non-linear post-1998. It is non-linear when Lambda = 0. So, I don't agree with your characterization.
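Here is a minimal numerical sketch of that last point (the Hubble constant and Omega values are illustrative placeholders, not fits to data): the comoving distance [math]D_C(z)[/math] is not proportional to z whether Lambda is zero or not, while [math]v=H_0D_{now}[/math] is linear by construction.

 

[code]
import math

C_KM_S = 299792.458  # speed of light, km/s
H0 = 70.0            # illustrative Hubble constant, km/s/Mpc

def comoving_distance(z, omega_m, omega_l, steps=2000):
    """D_C(z) in Mpc by simple midpoint integration (radiation neglected)."""
    omega_k = 1.0 - omega_m - omega_l
    total = 0.0
    dz = z / steps
    for i in range(steps):
        zi = (i + 0.5) * dz
        E = math.sqrt(omega_m * (1 + zi)**3 + omega_k * (1 + zi)**2 + omega_l)
        total += dz / E
    return (C_KM_S / H0) * total

for z in (0.1, 0.5, 1.0, 2.0):
    d_eds  = comoving_distance(z, 1.0, 0.0)   # Einstein-de Sitter, Lambda = 0
    d_lcdm = comoving_distance(z, 0.3, 0.7)   # illustrative Lambda-CDM values
    print(f"z={z:4.1f}  D_C(EdS)={d_eds:7.1f} Mpc  D_C(LCDM)={d_lcdm:7.1f} Mpc")
[/code]

 

In both columns D_C/z drifts as z grows, i.e. neither the Lambda = 0 model nor the Lambda-CDM model gives a linear redshift-distance relation.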

 

The Hubble Diagram, in which the velocity of an object (assumed approximately proportional to its redshift) is plotted with respect to its distance from the observer, was no longer a straight line.

 

I don't believe it was ever a straight line. [edit: I should clarify, I don't believe the redshift/distance relationship was ever a straight line]

 

-source

 

The ultimate fate of the universe looked entirely different than predicted.

 

Before the Omegas were well-constrained any prediction would not have been well-founded. The point of the SNe-Ia study was to measure the omegas. Measuring something is good science.

 

At that point, there were two options: (1) to consider the expanding universe hyperbolic, with a zero value for the cosmological term, and accelerating; or (2) to consider the expanding universe flat with a non-zero, non-negative value for the cosmological constant, and accelerating. Basically, that would change our manifold from diagram A to B (the Euclidean spacetime).

 

Are you suggesting [math]\Omega_M[/math], [math]\Omega_{\Lambda}[/math] = 0.26, 0 would fit the SNe-Ia data? I don't believe so, and neither would it work with WMAP's observations.

 

But let's be clear: Lambda-CDM has no explicit physical theory for the origin or physical nature of dark energy or cold dark matter.

 

It doesn't try to explain the physical nature of anything.

 

Indeed, if the expansion rate of the universe has actually been increasing, this should be concurrent with an open, hyperbolically curved universe.

 

If space is hyperbolic then a cosmic triangle has less than 180 degrees. If space is expanding at an accelerating rate then the velocity between two points will grow non-linearly with time. They are different things.

 

I would love to agree with you and say that space is significantly hyperbolically curved, but the data does not indicate that. Having no dog in this fight, I'd rather just accept the best interpretation of the data.

 

~modest


Everyone agrees that redshift z is the fractional amount by which features in the spectra of astronomical objects are shifted to longer wavelengths.

 

Today, questioning validity of the expansion interpretation is symptomatic of the fact that huge components of dark energy and dark matter are required by a theory (Lambda-CDM) in order to agree with observations.

 

So the question could be posed: If cosmological redshifts displayed by distant celestial objects are not caused by the expansion of the universe, what is the cause of z?

 

At the outset of this thread it was suggested that there are only two possible interpretations for cosmological redshift z that show wavelength independence over 19 octaves of the spectrum.

 

The only viable interpretation for redshift z (aside from a change in the scale factor to the metric) is a general relativistic curved spacetime interpretation (implying a stationary yet dynamic and evolving universe).

 

 

If redshift results as light propagates through a curved spacetime continuum, what causes the curvature?

 

And what would be the geometric structure of such a continuum?

 

Would it be hyperbolic or would it be spherical?

 

 

As mentioned above, Lobachevsky wrote his first major work, Geometriya, in 1823. The geometrical ideas it contained led to his most important discovery—non-Euclidean geometry (called Lobachevskian geometry), thereby reducing Euclid’s geometry to a special case of a more general system. Lobachevsky proved that for all triangles in the Lobachevskian plane the sum of the angles is less than 180 degrees. This led him to the idea of a space geometry in which the radius of a sphere is considered purely imaginary (consisting of coordinates x, y and the imaginary coordinate z), and appears as a hyperboloid. Though z need not be imaginary, as was shown later (by Minkowski, if I recall).

 

 

The construction of this hypothesis proves to be non-contradictory in nature.

 

 

In 1908, Minkowski considered this complex space in connection with Einstein’s special principle of relativity (now called pseudo-Euclidean space). As we remarked in the Introduction, Lobachevsky wanted to prove the universal character of his new geometry by measuring the angles of triangles of astronomical scales, but never carried out the experiment because of the fundamental problems associated with the inherent motion of stars, the perturbations of the Earth’s orbit, and the margin of error being of the same magnitude as the deviations from linearity.

 

His attempts to elucidate experimentally what type of geometry transpires in the real world are not dissimilar to the problems we face today in accomplishing the same goal on the cosmological scale. The 1998 supernovae results clearly demonstrate that the sum of angles of a cosmic triangle measures less than 180 degrees, and that the circumference of the visible universe is larger than 2πr. Remarkably, the redshifts themselves could be considered evidence of the non-Euclidean nature of the universe. And now, with the SNe Ia data, we find the manifold to reflect explicitly a non-Euclidean form (with a negative signature).

 

 

Why would the continuum be non-Euclidean rather than Euclidean?

 

 

As long as mass-energy is present in the universe, gravitation is everywhere present. Locally and globally that fact is well known. What is not so well known is the possibility that light traversing an isotropic and homogenous gravitational field (a universe with a nonnegative and nonzero value of curvature) will be affected continuously, and that from the perspective of an observer, that effect (the continuous energy loss) will manifest itself as redshift z.

 

Einstein’s general principle of relativity characterizes the metric properties of space by the gravitational field (curvature). We are familiar with the idea that local gravitational fields are analogous to hills and valleys (in two dimensions), which are obviously not flat. The average quantity of curvature in a homogenous field is not zero. Light rays follow ‘curved’ (geodesic) paths that depend on the variations in the fields they traverse. The actual departure from linearity is derived from the propagation of light as it crosses every hump and wrinkle in the global metric. Globally, all the matter and energy in the universe introduces a deviation from flat space. It would no doubt be ludicrous to envisage a Euclidean universe where local non-Euclidean features merely cancel other curved protuberances in the spacetime metric.

 

General relativity impels us to reassess the geometrical configuration of the large-scale spacetime manifold as it does with the properties of the local environment. This being the central issue, let us reformulate the predicament with greater precision. The section of the universe that is accessible to observation suggests that the mean mass-energy density throughout space is nonzero and positive, and that the global density emerges as virtually consistent throughout. The substantial meaning of this is that the geometry of spacetime relates to its physical content, and that although curvature is negligible at small distances, it becomes increasingly important (and noticeable) at larger distances. To revert back to a Euclidean universe would be to accept special relativity universally while relegating general relativity to a special case; an obvious absurdity. The inception of GR did away with Euclidean space in favor of non-Euclidean spacetime.

 

 

Why hyperbolic geometry as opposed to spherical geometry?

 

 

The answer to that question, non-intuitive as it may seem, is straightforward. In an isotropic and homogenous universe the energy loss associated with a photon as it passes through spacetime is continuous, but there is no reason why it should be linear. (We'll come back to this.) The further the photon has to travel, the greater the energy loss, and the higher the redshift. So redshift increases with distance. The redshift increases further because of the additional time dilation factor of (1 + z), also nonlinearly. The deviation from linearity will manifest itself hyperbolically (as viewed by an observer) because spatiotemporal increments appear kinematically to increase continually with distance (i.e., distances will appear greater as one ponders objects further removed, and the time increments will be measured to slow down with distance, as compared to what would be expected in a Euclidean manifold).

 

It follows that spherical geometry is untenable. In addition to being inconsistent with observation, it would mean that spatiotemporal increments would become smaller with distance, i.e., if the universe were somehow geometrically spherical it would lead to the peculiarity of time contraction with distance, as opposed to time dilation, and spatial increments would appear increasingly smaller with distance. It would mean that the photon energy-loss in spherical space is less than the energy loss of a photon in Euclidean space: a logical impossibility since, in principle, there would be no energy loss if spacetime were Euclidean.

 

 

So, to be consistent with the curved spacetime interpretation for redshift z, and in accord with observations, the solution A in the above geometric spacetime manifold illustration would be the correct choice. Both the Euclidean and the spherical solutions (B and C of the above diagram) are ruled out, both by general relativity and observations.

 

 

 

I would like to stress (and perhaps elaborate in the next post) that this interpretation for redshift z does not reflect an actual curvature in the sense that we would be centered on a Pringles potato chip shaped manifold. The universe doesn't actually have that shape (in reduced dimensions). Any observer at any location in the universe would see the universe as if they were centered at the origin of a hyperbolic paraboloid, since all points on the manifold are equal, in accord with the cosmological principle.

 

 

 

CC


It's actually not convenient. Most physicists would have preferred a closed universe. If you've ever read Hawking's A brief history of time, for example, he hopes that it ends up being closed. That would have avoided problems of infinity with an open universe.

 

To save inflation, and the things inflation resolved (flatness problem, horizon problem, etc), the universe had to be flat. That is why the critical model was favored.

 

 

 

I don't follow. The SN-1a tests measure the Omegas.

 

SNe Ia tests measured redshift and light curves (luminosity as a function of time after the explosion).

 

 

It is non-linear when Lambda = 0. So, I don't agree with your characterization.

 

This is what I refer to. It is non-linear when Lambda equals zero. To agree with theory, lambda had to be employed. But at what cost?

 

 

Measuring something is good science.

 

Can't argue with that. :idea:

 

 

It doesn't try to explain the physical nature of anything.

 

I think it should. How else to get rid of the bunk? The physical nature of DE and CDM needs to be explained.

 

 

 

If space is hyperbolic then a cosmic triangle has less than 180 degrees. If space is expanding at an accelerating rate then the velocity between two points will grow non-linearly with time. They are different things.

 

I would argue that the CMB is not a good way to measure cosmic triangles. The best way is to use two SNe Ia at significantly different distances (the third point is earth). The result is that the sum of the angles equals less than 180 degrees, consistent with illustration A (not B or C).

 

 

 

I would love to agree with you and say that space is significantly hyperbolicly curved, but the data does not indicate that. Having no dog in this fight, I'd rather just accept the best interpretation of the data.

 

The data is consistent with hyperbolicity, depending on the interpretation of redshift. Both interpretations can be perceived as equally valid, though one seems not to require a large DE component, making it better.

 

 

 

CC


The 1998 supernovae results clearly demonstrate that the sum of angles of a cosmic triangle measures less than 180 degrees, and that the circumference of the visible universe is larger than 2πr.

 

The SNe-Ia results do not demonstrate that space is hyperbolic.

 

To save inflation, and the things inflation resolved (flatness problem, horizon problem, etc), the universe had to be flat.

 

That's backwards. Inflation was introduced to explain flatness. Flatness was not introduced to explain inflation.

 

This is what I refer to. It is non-linear when Lambda equals zero. To agree with theory, lambda had to be employed.

 

The redshift/distance relationship has never been linear (still is not). The velocity(now)/distance(now) relationship is linear (always has been). The statements you made about Lambda changing those properties were mistaken.

 

I would argue that the CMB is not a good way to measure cosmic triangles. The best way is to use two SNe Ia at significantly different distances (the third point is earth). The result is that the sum of the angles equals less than 180 degrees, consistent with illustration A (not B or C).

 

That doesn't work. We don't have people at those supernova to measure and tell us angles. We know the angular distance between two supernova, but we do not know the actual metric distance that would allow us to solve side-angle-side. The CMB triangle works because we know the actual distance between peak temp fluctuations as if there were a tape measure connecting them. We can compare that to the angular distance. This was a confirmed prediction of flatness and the standard model.

 

There is no way to solve the three angles between us and two supernova.

 

The data is consistent with hyperbolicity, depending on the interpretation of redshift. Both interpretations can be perceived as equally valid; though on seems not to require a large DE component, making it better.

 

It's an interesting subject to explore, but I wish you wouldn't make such definitive claims.

 

What should be the relative brightness of a SNe-Ia at redshift z = 0.5, 0.75, 1.0, 1.25? What is the distance to z=0.5... z=1.25? What is the circumference of a circle at that distance, and by what factor is that circumference greater than it would be in a Euclidean universe? What are the angular diameter distance and luminosity distance at those redshifts? With those derived distances, can you prove the relationship [math]D_L = D_A(1+z)^2[/math] so that the photon count is conserved? How much is a supernova time dilated at z=1?
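For reference, here is a minimal sketch of how those quantities hang together in a spatially flat FRW model. The H0 and Omega values are illustrative placeholders rather than measured parameters, and the final check (the Etherington relation) holds by construction in any FRW metric:

 

[code]
import math

C_KM_S = 299792.458
H0 = 70.0                     # illustrative value, km/s/Mpc
OMEGA_M, OMEGA_L = 0.3, 0.7   # illustrative flat-universe parameters

def E(z):
    return math.sqrt(OMEGA_M * (1 + z)**3 + OMEGA_L)

def comoving_distance(z, steps=4000):
    """Comoving distance in Mpc by simple midpoint integration."""
    dz = z / steps
    return (C_KM_S / H0) * sum(dz / E((i + 0.5) * dz) for i in range(steps))

for z in (0.5, 0.75, 1.0, 1.25):
    d_c = comoving_distance(z)
    d_a = d_c / (1 + z)       # angular diameter distance (flat space)
    d_l = d_c * (1 + z)       # luminosity distance (flat space)
    # Etherington relation: D_L = D_A * (1+z)^2, true by construction here.
    print(f"z={z:4.2f}  D_A={d_a:7.1f} Mpc  D_L={d_l:7.1f} Mpc  "
          f"D_L/D_A={d_l / d_a:6.3f}  (1+z)^2={(1 + z)**2:6.3f}")
[/code]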

 

We would need these answers to not only be consistent and make sense, but to perfectly match observation. If that were the case and they were derived from a valid theory of gravity then "both interpretations could be perceived as equally valid". That would be awesome, but we're not there.

 

~modest


By the way, if someone can support an assertion with evidence then the assertion is not bias.

To turn something that (from the moment we have been part of this thread's conversation) is at least open to criticism, debatable, or provisional - whatever you wanna call it - into something definitive ("evidence," as you like to call it) verges on dogmatism.

Honestly, I am not an argumentative person, and am not interested in convincing you of anything. I know most of the scientific community endorses your view, but if you are really willing to explore alternatives you might wanna use the word "evidence" more cautiously.

 

Try explaining what observations fit diagram A. Why is that the correct diagram?

I welcome you to do the same for option A.

I refer you to coldcreation. He indeed does a much better job at doing it than I do.

 

quantumtopology-


The SNe-Ia results do not demonstrate that space is hyperbolic.

 

I believe the SNe Ia data demonstrate that spacetime is hyperbolic, when, and only when, redshift z is interpreted as an effect due to energy loss as photons propagate through a non-Euclidean spacetime continuum (where gravity is everywhere present), appearing to be a curved general relativistic manifold from the perspective of any observer.

 

According to the field equations of Einstein's general theory of relativity, the structure of spacetime is affected by the presence of both matter and energy. On small scales (say, compatible with that of the Local Group, or within distances of a few Mpc from the Local Group) spacetime appears quasi-Euclidean—as does the surface of the Earth if one looks at a small section. On large scales however, space is 'curved' by the gravitational effect of matter and energy.

 

Because general relativity postulates that matter and energy are equivalent, this apparent curvature effect is also produced, in addition to matter, by the presence of energy (e.g., light and other electromagnetic radiation). The amount of curvature (or bending) of the manifold depends on the total density of matter/energy present. (Not to mention the actual curvature of the manifold without mass or energy, an 'empty' universe, which according to de Sitter is curved hyperbolically). That would add to the total effect.

 

 

 

To save inflation, and the things inflation resolved (flatness problem, horizon problem, etc), the universe had to be flat.

 

That's backwards. Inflation was introduced to explain flatness. Flatness was not introduced to explain inflation.

 

Actually, what I wrote was the same thing as what you wrote. Flatness, or fine-tuning, was thought to be observed (resulting in the flatness problem); inflation predicted flatness and explained how flatness occurs naturally. Well, that is, if you think a false vacuum is natural. I don't think there's anything natural about it.

 

 

The redshift/distance relationship has never been linear (still is not). The velocity(now)/distance(now) relationship is linear (always has been). The statements you made about Lambda changing those properties were mistaken.

 

I wrote that very quickly and surely with a lack of rigor. What I meant to say was: since the universe is thought to be accelerating, the velocity/distance relation is no longer linear. Also, to reconcile SNe Ia observations with the pre-1998 big bang theory, lambda had to be extracted from the dustbin of relativity. :) That is, lambda, Einstein's greatest blunder, became his greatest discovery.

 

 

That doesn't work. We don't have people at those supernova to measure and tell us angles. We know the angular distance between two supernova, but we do not know the actual metric distance that would allow us to solve side-angle-side. The CMB triangle works because we know the actual distance between peak temp fluctuations as if there were a tape measure connecting them. We can compare that to the angular distance. This was a confirmed prediction of flatness and the standard model.

 

There is no way to solve the three angles between us and two supernova.

 

I don't think we "know" the actual distance, even approximately, between peak temp fluctuations "as if there were a tape measure connecting them." We certainly do not "know" the distance from Earth to the peaks. The CMB is everywhere present, not projected on to a background sphere from which we can judge distance, let alone perform a triangulation measurement that means anything.

 

That would especially not be the case if the CMB were interpreted in the way Hoyle and Burbidge do in this seminal publication: The Astrophysical Journal, 509:L1–L3, 1998 December 10. Recall that in the case of a stationary universe, the CMB has an entirely different source than what is thought by most cosmologists. It would not be a relic radiation from a hot dense phase of an early universe. Measuring differences in peak temperatures would provide no information as to the geometry of the universe.

 

And I do think there is a way to solve the three angles between us and two supernova. In other words, there should be a way of testing the geometry of the universe via these 'standard candles' (using the technique or method of triangulation). The prediction here is that the universe would appear hyperbolic.

 

 

It's an interesting subject to explore, but I wish you wouldn't make such definitive claims.

 

Sorry. It's easier to write in the affirmative than to continually write shoulda, woulda, coulda all the time. Obviously the subject still needs to be explored, and a full model developed and tested empirically.

 

Remember though, distant SNe Ia appear to be further than expected, as if spacetime were 'stretched' out further and further with increasing distance. That is a hyperbolic signature with little doubt (at least from the perspective of an observer in a static globally curved spacetime scenario). Whether this is a sign of hyperbolicity in an expanding universe depends on how one adjusts the parameters to account for observational data.

 

 

What should be the relative brightness of a SNe-Ia at redshift z = 0.5, 0.75, 1.0, 1.25? What is the distance to z=0.5... z=1.25? What is the circumference of a circle at that distance and by what factor is that circumference greater than it would be in an Euclidean universe? What is the angular diameter distance and luminosity distance at those redshift. With those derived distances, can you prove the relationship [math]D_L = D_A(1+z)^2[/math] so that the photon count is conserved? How much is a supernova time dilated at z=1?

 

You won't find the answers below, yet. It will take a while to figure all that out, if it hasn't already been calculated by others.

 

It is true that, in order to determine a reasonably accurate measure of distance, a model is required. But it is not essential, since an estimate, even a rough one, would be applied consistently throughout the regime, leading to an answer about the geometry of the universe.

 

Even the distances associated with the actual standard model could be used, since in a stationary universe the position (distance) of those SNe Ia has not changed drastically, if at all (unlike the case where expansion takes them to a new location 'now' with the change in the scale factor of the metric).

 

The advantage of using SNe Ia over, say, the distance between peak temp fluctuations of the CMB (which heavily depends on the interpretation of the origin of the CMB) is that SNe Ia produce consistent peak luminosity, due to the uniform mass of white dwarfs that explode via a particular accretion mechanism. It is because of the stability of this value that this category of supernovae can be used as standard candles. An accurate determination of distance to their host galaxies can be derived, because the visual magnitude of these supernovae depends predominantly on the distance.

 

No matter what model is used, SNe Ia should (would, could :)) be one of the best ways to determine extragalactic distances. See for example Cosmic distance ladder

 

So we don't need, for now, to revamp the entire edifice on which we currently calculate distances. Fortunately there are many ways to determine approximate distances.

 

 

We would need these answers to not only be consistent and make sense, but to perfectly match observation. If that were the case and they were derived from a valid theory of gravity then "both interpretations could be perceived as equally valid". That would be awesome, but we're not there.

 

Unfortunately you are correct, we are not there yet. Several things, though, can be said for sure, for either interpretation of redshift z: the gravitation theory of choice is general relativity. The difference between the redshift of photons traveling through expanding space, or through a geometrically curved spacetime, is practically indistinguishable. Both scenarios are testable and both are falsifiable, in principle and in practice (pending a full-fledged model for the latter). The important point to make is that the similarities and differences between the two interpretations can be (should be, could be, would be) disentangled.

 

I believe the SNe Ia observations have already done that, in favor of the latter.

 

 

CC


That doesn't work. We don't have people at those supernova to measure and tell us angles. We know the angular distance between two supernova, but we do not know the actual metric distance that would allow us to solve side-angle-side.

 

We don't need people at those supernova to measure and tell us angles. We can figure them out ourselves. We need just to estimate the metric distance, not "know" the actual metric distance.

 

 

The CMB triangle works because we know the actual distance between peak temp fluctuations as if there were a tape measure connecting them. We can compare that to the angular distance. This was a confirmed prediction of flatness and the standard model.

 

This paper seems to provide a glimpse into the problem of the CMB: Is the low-l microwave background cosmic?

 

Here's the PDF file

 

If indeed the l = 2 and 3 CMB fluctuations are inconsistent with the predictions of standard cosmology, then one must reconsider all CMB results within the standard paradigm...

 

The implications of this finding are vast. It casts doubts on the cosmological interpretation of the lowest-l multipoles from the temperature-polarization, and from the temperature-temperature correlation, and in turn on the many claims associated with the CMB (e.g., that the first stars formed very early in the history of the universe), including yours above.

 

Moreover, a strong correlation has been found with the orientation of the solar system (the ecliptic plane) and with its motion (measured as the CMB dipole).

 

The observed quadrupole and octopole are inconsistent with a Gaussian random, statistically isotropic sky (the generic prediction of inflation). See the press release here.

 

There is strong evidence of either (a) some systematic error in the WMAP pipeline (and there are similar features in COBE maps), or (b) the largest scales of the microwave sky are dominated by a local foreground.

 

I am banking on the latter.

 

 

CC


According to the field equations of Einstein's general theory of relativity, the structure of spacetime is affected by the presence of both matter and energy.

 

I agree :agree:

 

In the Friedmann equations, the matter-energy content enters through the density [math]\rho[/math] and the pressure [math]p[/math]. In the omega form,

 

[math]\frac{H^2}{H_0^2} = \Omega_R a^{-4} + \Omega_M a^{-3} + \Omega_k a^{-2} + \Omega_{\Lambda}[/math]

 

Omega-M being matter and Omega-R radiation. So, yeah, certainly the key ingredients so to speak.
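One minimal way to read that equation numerically (the Omega values below are placeholders, not measured ones): each term is just a power of the scale factor, and Omega-k is fixed by requiring H = H0 at a = 1.

 

[code]
import math

# Illustrative density parameters (not a fit to any data set)
OMEGA_R, OMEGA_M, OMEGA_L = 8e-5, 0.3, 0.7
OMEGA_K = 1.0 - OMEGA_R - OMEGA_M - OMEGA_L   # fixed so that H(a=1) = H0

def hubble_ratio(a):
    """H(a)/H0 from the Friedmann equation in its Omega form."""
    return math.sqrt(OMEGA_R * a**-4 + OMEGA_M * a**-3
                     + OMEGA_K * a**-2 + OMEGA_L)

for a in (0.25, 0.5, 1.0, 2.0):
    print(f"a={a:4.2f}  H/H0={hubble_ratio(a):7.3f}")
[/code]

 

For these placeholder values H/H0 falls steadily as a grows, which is the point made earlier about the Hubble parameter changing with time in an expanding universe.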

 

I'm also glad you say, and I agree,

 

the gravitation theory of choice is general relativity

 

But, in the confines of GR I'm afraid there is very little wiggle room for alternative interpretations. The Friedmann equation above has 4 density terms. If the universe is homogeneous and isotropic (the Friedmann conditions) then I believe we are stuck setting those 4 terms and comparing the results to observation.

 

Anything more and we start to wander outside the confines of GR, which I'd be willing to entertain, but then we are quickly left without a theoretical foundation, if you know what I mean.

 

On large scales however, space is 'curved' by the gravitational effect of matter and energy.

 

Yes, and I'll point out that matter and radiation both push space toward positive curvature.

 

I don't think we "know" the actual distance, even approximately, between peak temp fluctuations "as if there were a tape measure connecting them." We certainly do not "know" the distance from Earth to the peaks.

 

It is because cosmologists had those two distances constrained so well that they were able to correctly predict the angular diameter of the peak temp fluctuations.

 

The CMB is everywhere present, not projected on to a background sphere from which we can judge distance

 

The CMB is everywhere, but our observation of it, our observation of the surface of last scattering, is of a surface—a spherical surface. When the CMBR was emitted, that surface was 36 million lightyears from 'us' (before there was an us, obviously). The surface is now 1292 times further than that because the universe has expanded 1292 times since the CMBR was emitted.
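Taking the figures quoted in this paragraph at face value, the arithmetic is simply

 

[math]36 \times 10^{6} \ \mathrm{ly} \times 1292 \approx 4.65 \times 10^{10} \ \mathrm{ly} \approx 46.5 \ \mathrm{Gly},[/math]

 

i.e., on those numbers, the present comoving distance to the last scattering surface comes out at roughly 46 billion lightyears.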

 

This explanation, whether you agree with it or not, works very well. It correctly predicts the temperature, the brightness, and the anisotropy of the CMB—all from a very simple hypothesis—isotropic expansion.

 

let alone perform a triangulation measurement that means anything.

 

It's only meaningless if you decide it has no meaning. From a factual standpoint, it's a confirmed prediction based on a functioning model. That's what science is all about.

 

Remember though, distant SNe Ia appear to be further than expected, as if spacetime were 'stretched' out further and further with increasing distance.

 

They are "further than expected" by a fiducial model ([math]\Omega_M = 1[/math]) that we know doesn't match our universe. There are other fiducial models (for example one without dark matter and without a cosmological constant ([math]\Omega_M = 0.0456[/math])) in which the SN-Ia would be closer than expected. So, it doesn't seem like a good implication to me. I don't agree with the interpretation.

 

We don't need people at those supernova to measure and tell us angles. We can figure them out ourselves. We need just to estimate the metric distance, not "know" the actual metric distance.

 

No, I think we need to somehow measure the distance between them rather than using a method that assumes the curvature. Otherwise, the answer will just reveal the assumption.

 

Astronomers have recently used the 'baryon acoustic oscillation' studies to examine the metric distance of density fluctuations in galaxy survey data. That is their independent way of finding the metric distance and comparing it to the angular size. The best current data:

 

indicates that the universe is spatially flat ([math]\Omega_K = -0.006 \pm .008[/math]).

 

Just talking about two supernova, on the other hand, I don't think you can constrain the topology because I don't think you can get the three angles without making assumptions that would set, rather than measure, the shape of space.

 

For example, they might be 4 degrees separated in the sky (the angular distance between them is four degrees) and they might both be at 0.5 gigaparsecs (found by redshift or brightness). But, this is not enough information to get the other two angles of the triangle.
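Here is a toy calculation of why that is (made-up numbers, not real supernova data): from the two radial distances and the angle measured at Earth, the third side of the triangle can only be computed after a geometry is assumed, and each assumed geometry returns a different answer.

 

[code]
import math

def third_side(d1, d2, gamma, geometry, R=3000.0):
    """Side opposite the angle gamma (radians) measured at the observer.

    d1, d2, R in the same units (Mpc here); geometry is the *assumed*
    curvature, with curvature radius R. All numbers are illustrative.
    """
    if geometry == 'flat':
        return math.sqrt(d1**2 + d2**2 - 2 * d1 * d2 * math.cos(gamma))
    if geometry == 'spherical':
        cos_c = (math.cos(d1 / R) * math.cos(d2 / R)
                 + math.sin(d1 / R) * math.sin(d2 / R) * math.cos(gamma))
        return R * math.acos(cos_c)
    if geometry == 'hyperbolic':
        cosh_c = (math.cosh(d1 / R) * math.cosh(d2 / R)
                  - math.sinh(d1 / R) * math.sinh(d2 / R) * math.cos(gamma))
        return R * math.acosh(cosh_c)
    raise ValueError(geometry)

d1 = d2 = 500.0               # Mpc, i.e. the 0.5 gigaparsecs in the example
gamma = math.radians(4.0)     # 4 degrees of separation on the sky
for geom in ('hyperbolic', 'flat', 'spherical'):
    print(f"{geom:11s}: third side = {third_side(d1, d2, gamma, geom):6.2f} Mpc")
[/code]

 

Whatever curvature you feed in is exactly the curvature the 'measured' triangle hands back, which is why the answer would just reveal the assumption.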

 

The acoustic peaks in the large scale structure SDSS data are a way of finding the metric distance of tangent space at low redshift without using the angular distance. The angular distance can then be compared to the metric distance, and according to the paper linked above the results are an independent confirmation of Lambda-CDM and a flat universe.

 

~modest


:agree:

 

In the Friedmann equations, the matter-energy content enters through the density [math]\rho[/math] and the pressure [math]p[/math]. In the omega form,

 

[math]\frac{H^2}{H_0^2} = \Omega_R a^{-4} + \Omega_M a^{-3} + \Omega_k a^{-2} + \Omega_{\Lambda}[/math]

 

Omega-M is matter and Omega-R is radiation.

 

Agreed, we have a departure from linearity, a displacement away from flatness.

 

 

Yes, but I'd again point out that matter and radiation give space positive curvature.

 

Here there is a common misunderstanding about curvature (being positive or negative). Curvature is simply an expression for nonlinearity. A departure from linearity is a positive departure away from a zero value of curvature (a Euclidean or flat spacetime). That is why I expressly did not label diagrams A or C as negatively or positively curved.

 

What the field equations reveal is that there is a departure from linearity (a departure from flatness). The equations do not say whether the universe will appear hyperbolic or spherical from the point of view of an observer located at the origin of a homogenous and isotropic manifold that has a nonzero value of curvature (where gravity is everywhere present). There are not two types of gravity, each operating with a different sign.

 

The misinterpretations of positive and negative curvature have left the door open to endless speculation and paradoxes: space and time bend back on themselves, the universe is finite; space and time curve away from themselves, space is infinite yet spherical; and in between, space is flat, despite the presence of a non-zero and nonnegative mass-energy density—an unconcealed violation of the general principle of relativity, which states that the spacetime continuum is not a Euclidean continuum. Only the special cases of Newtonian space, Minkowski’s field-free “world” of three-dimensional space (with or without imaginary time), and that of the special theory of relativity permit such a Euclidean continuum, in the absence of gravitational influence.

 

In that sense any departure from linearity can be considered a positive departure. A "negative" departure from linearity (if it were to exist) would be one that results not from gravitation but from some form of repulsive vacuum energy. I'm not convinced that this type of curvature is possible, since it would imply that the vacuum of empty space could 'grow,' 'stretch,' 'be created,' and ultimately that photons could gain energy as they propagate through a vacuum of this type (again, if it were to exist).

 

My point is not to change the terminology most frequently used in the field, but to amend it somewhat for the purpose of elucidating the fundamental aspects of gravity, which at the present time remains poorly defined: It is the manifestation of distortion in the geometry of spacetime.

 

In sum, the terms negative and positive (open and closed) are misleading and observer dependent (i.e., subjective, or model dependent). To do away with such misunderstandings (in the case under study here) one simply needs to use the geometric expressions hyperbolic, Euclidean and spherical, relative to the rest frame in which the observer is situated when he or she ponders the heavens (see manifolds A, B and C, respectively, in the illustration above).

 

 

Observations will determine how spacetime appears to be distorted.

 

 

 

I don't think we "know" the actual distance, even approximately, between peak temp fluctuations "as if there were a tape measure connecting them." We certainly do not "know" the distance from Earth to the peaks.

 

Those things are very well constrained. If cosmologists were wrong then it would be quite a coincidence that all the anisotropy predictions were exactly confirmed.

 

That wouldn't be the only coincidence. It would simply be added to the other coincidences, such as the coincidence that the universe appears to be finely tuned.

 

There are many observations that can have different interpretations. To determine which interpretation best fits the data, tests have to be made. Hitherto, there have not been sufficient tests with regards to a model (since there is apparently no complete model, yet) that describes a stationary, evolving universe that is homogenous, isotropic and geometrically non-Euclidean. Until then, the concept should not be discarded a priori.

 

This doesn't imply that cosmologists are wrong, but that there may exist an alternative solution that is equally viable (pending further investigation as to which hypothesis can be discarded).

 

 

That is the core point of the discussion in this thread.

 

 

 

The surface of last scattering that we currently observe is a surface. It was 36 million lightyears from 'us' when the CMBR was emitted. It's 1292 times further than that now because the universe has expanded 1292 times since then.

 

Sure, I understand that, but that option would not be a valid one in the framework of a stationary universe model. That only works for expanding models. In a static universe what is observed is not a last scattering surface, and the CMB was not emitted 36 million light years from us.

 

 

If the brightness data fit a hyperbolic universe then you would be right.

 

Hyperbolicity is revealed by the excessive faintness of distant type Ia supernovae, whose brightness calibrates their distances. More or less as would be predicted in a manifold of the type A above (as opposed to manifolds B or C).

 

Had the SNe Ia data revealed an excessive brightness then you would have been correct, had you predicted a spherical geometry of the type portrayed in model C above. Had you predicted a Euclidean geometry (example B above), the brightness of any SNe Ia would have been a linear function as viewed from earth, relative to their distance.

 

The observed excessive faintness is in line with the observed redshifts and the angular size measurements, consistent with hyperbolicity (again, only if we consider a model where there is no expansion taking place). Again, since lambda and CDM can change the geometry in an expanding universe, virtually at will, the actual geometry, or shape of the universe is model dependent.

 

 

Two lone supernova, on the other hand... you can't hope to find all three angles independently. For example, they might be 4 degrees separated in the sky and they might both be at 500 megaparsecs (found by redshift or brightness).

 

The distance to a supernova is measured by comparing its apparent and intrinsic brightness and reveals the time over which that signal has traveled at the speed of light. Taken together with redshift we can accurately gauge the distance of these objects, as if points on a meter stick. (See for example Dark Energy, Dr. Adam Riess)

 

Taken at face value, the presence of the time dilation factor of (1 + z) in the SNe Ia data is simply a reduction in the flux density by more than the inverse square law. So the effective distance, from the viewpoint of any observer (at this time), does not behave like a Euclidean distance with increasing redshift. Again, the SNe Ia results are consistent with hyperbolicity (in a stationary universe).

 

In a static regime, the redshift and distance relation of every supernova records not the past change in the scale factor over the inferred time interval, or the expansion rate, but the degree, or quantity, of curvature (the departure from linearity).

 

Fortunately, it is possible to test diverse astrophysical hypotheses for the dimming of distant SNe Ia against the cosmological hypothesis; either based on dark energy with negative pressure combined with CDM in an expanding frame, or postulating a hyperbolic manifold in a stationary regime. Both are possible solutions.

 

Real progress has been made along these lines and results derived (both current and future) can be employed to work out relations between intrinsic properties and observables for any isotropic and homogenous world-model.

 

 

You can't add angles and compare to 180 from that data. We'd have to know how far the one supernova is from the other independent of our 4 degree measurement and we have no way to measure that.

 

The SNe Ia chosen for such an experiment would have to have a large separation, both angularly from each other, and distance wise from earth.

 

The easy way to do it would be to use two supernovae located at right angles (90 degrees) relative to the background sky. We would then have a right-angle triangle, for which the sum of the inner angles could be determined.

 

Any triangulation experiment would work though, provided the difference in distance from earth of either SN is large. We don't need to find them at right angles from earth.

 

There are several ways of determining distance from SNe Ia. Flux-averaging can be used to test for the presence of unknown systematic uncertainties and to yield more robust distance measurements. Distances of SNe Ia can be modeled using the rest-frame UV spectral energy distribution. Lightcurve fits are performed by integration of the spectra in instrumental pass-bands without k-correcting the data. The retrieved supernova parameters, namely brightness, lightcurve shape parameter(s), and colors in the supernova rest-frame (for both nearby and high-z SNe), can be constructed, giving an accurate relative distance measurement. (See here: The Supernova Legacy Survey SALT2: using distant supernovae to improve the use of Type Ia supernovae as distance indicators).

 

A comparison of apparent and absolute magnitudes of SNe Ia also yields accurate relative distances. The analyses described above do not require absolute calibration of SNe Ia. Indeed, a formalism similar to that used in peculiar velocity studies, carried out by Perlmutter et al. in 1997, can be used, in which distances are measured in km s-1 according to redshift and absolute magnitudes. This technique can be used for convenience, even in a non-expanding model. Too, the expanding photosphere method of core-collapse SNe can be used, with the calibration of the plateau luminosity, to determine relative distances.
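For reference, the standard conversion behind comparing apparent and absolute magnitudes is the distance modulus. A minimal sketch (the peak absolute magnitude is a commonly quoted SN Ia value, used here as an assumption; the apparent magnitude is a made-up number for illustration):

 

[code]
def luminosity_distance_pc(m_apparent, M_absolute):
    """Distance modulus m - M = 5 log10(D_L / 10 pc), solved for D_L in parsecs."""
    return 10.0 ** ((m_apparent - M_absolute) / 5.0 + 1.0)

M_SN_IA = -19.3       # assumed peak absolute magnitude of a SN Ia
m_observed = 24.0     # illustrative observed peak apparent magnitude
d_pc = luminosity_distance_pc(m_observed, M_SN_IA)
print(f"D_L ~ {d_pc / 1e6:.0f} Mpc")
[/code]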

 

Using distances to galaxies where Cepheids are located will do as well, provided SNe Ia are present. Pending the detection and analysis of SNe Ia in a larger number of nearby galaxies with accurate Cepheid distances, estimates of distance inferred from supernovae should be seen as preliminary. (Source). This would be a completely independent distance measurement.

 

So, SNe Ia, in combination or not with the above Cepheid program, establish a good relative distance indicator in the visible universe. What we have (if indeed SNe Ia can be considered good standard candles) is a custom yardstick (in combination with the Tully-Fisher relation, surface brightness fluctuations, the fundamental plane relation, etc.)

 

 

Certainly, accurate triangulation measurements can be established.

 

 

The acoustic peaks in the large scale structure SDSS data are a way of finding that metric distance between galaxies. As the paper above shows, the results are an independent confirmation of Lambda-CDM.

 

Yes, but it's not the only way. And if the largest scales of the microwave sky are dominated by a local foreground, as shown likely in the above mentioned paper (Is the low-l microwave background cosmic?) then the conclusion you post may turn out to be erroneous at worst, or spurious at best.

 

 

 

CC

Link to comment
Share on other sites

Here there is a common misunderstanding about curvature (being positive or negative). Curvature is simply an expression for nonlinearity. A departure from linearity is a positive departure away from a zero value of curvature (a Euclidean or flat spacetime). That is why I expressly did not label diagrams A or C as negatively or positively curved.

 

There are not two types of gravity, each operating with a different sign.

 

Indeed it's a common misunderstanding, maybe because intuitively one places flat Euclidean (null) curvature between spherical on one side and hyperbolic on the other. And it seems natural to say one is positive and the other negative. Also, when we talk about k in the FLRW metric, you have k = -1 for open hyperbolic space and k = 1 for closed spherical space, with k = 0 flat space in the middle. So that might have some influence too.

But if you look at it from the perspective of curves generated by conic sections, you clearly see that, starting from no curvature (flat), as curvature increases you get first hyperbolas, then parabolas, then ellipses, then circles. In this animation it is backwards with respect to my explanation, from circle to null curvature.

 

 

EDIT: Something is not working. I tried to insert an animation, but it doesn't show up, and the quoting doesn't work either. Did I do something wrong?


This tutorial is instructive. It deals with relativity, hyperbolicity and projective geometry, Lobachevsky, the universe, triangles etc.

 

Edit: This is a six part tutorial "of a Pure Mathematics Seminar given at UNSW in the School of Mathematics and Statistics. Assoc Prof N J Wildberger explains a new approach to hyperbolic geometry which links it directly to Einstein and Minkowski's relativistic geometry viewed projectively. The idea is to extend Rational Trigonometry to the hyperbolic setting. The first part of the lecture motivates with a summary of classical hyperbolic geometry, then the purely algebraic set-up is introduced. Quite a few new results are introduced."

 

 

Hyperbolic Geometry is Projective Relativistic Geometry (Part1) http://youtube.com/watch?v=BnQw7hon_ZY

 

I haven't seen part III or the others yet.

 

I'm viewing them 'now'.

 

 

CC


Until someone can explain how the Universe (in whatever spacetime form) got here, how the electron and proton generate a field with the single strongest force in the Universe without losing any energy, we're pissing in the wind. You say time is just a tool. I could make a reasonable argument that time is the only component of matter.


Until someone can explain how the Universe (in whatever spacetime form) got here, how the electron and proton generate a field with the single strongest force in the Universe without losing any energy, we're pissing in the wind. You say time is just a tool. I could make a reasonable argument that time is the only component of matter.

 

It's always better to do that in the wind direction.

 

What have your comments to do with redshift z?

 

 

 

_______________

 

 

 

This, on topic, may be of interest, by the same author as the video presentation above, Norman J. Wildberger:

 

Affine and Projective Universal Geometry (2006).

 

And with regards to this work, from Wildberger's webpage:

 

This paper establishes the basics of universal geometry, a completely algebraic formulation of metrical geometry valid over a general field (not of characteristic two) and an arbitrary quadratic form. The fundamental laws of rational trigonometry are here shown to extend to the more general affine case, and also to a projective version, which has laws which are deformations of the affine case. This unifies both elliptic and hyperbolic geometries, in that the main trigonometry laws are identical in both.

 

Euclidean versus non-Euclidean geometries are a manifestation of the distinction between the affine and the projective.

 

 

Edit: And to quote his paper:

 

By recasting metrical geometry in a purely algebraic setting, both Euclidean and non-Euclidean geometries can be studied over a general field with an arbitrary quadratic form. Both an affine and a projective version of this new theory are introduced here, and the main formulas extend those of rational trigonometry in the plane. This gives a unified, computational model of both spherical and hyperbolic geometries, allows the extension of many results of Euclidean geometry to the relativistic setting, and provides a new metrical approach to algebraic geometry.

 

[...]

 

It is pleasant that the main laws of planar rational trigonometry have affine and projective versions which turn out to hold simultaneously in elliptic geometry, in hyperbolic geometry, and indeed in any metrical geometry based on a symmetric bilinear form. The usual dichotomy between spherical and hyperbolic trigonometry deserves re-evaluation.

 

[...]

 

Elliptic and hyperbolic geometries should be considered as projective theories. Their natural home is the projective space of a vector space, with metrical structure—not a metric in the usual sense— determined by a bilinear or quadratic form. Over arbitrary fields the familiar close relation between spheres or hyperboloids and projective space largely disappears, and the projective space is almost always more basic. The fundamental formulas and theorems of metrical geometry are those which hold over a general field and are independent of the choice of bilinear form. Many results of Euclidean geometry extend to the relativistic setting, and beyond, once you have understood them in a universal framework.

 

Bold added.

 

This is what we need in a cosmological setting!

 

 

 

CC


CC, I apologize. I rewrote my last post before you replied, but apparently after you copied it.

 

Here there is a common misunderstanding about curvature (being positive or negative). Curvature is simply an expression for nonlinearity. A departure from linearity is a positive departure away from a zero value of curvature (a Euclidean or flat spacetime). That is why I expressly did not label diagrams A or C as negatively or positively curved.

 

You know that isn't true.

 

The circumference of a circle with negative curvature is greater than 2πr. With positive curvature it is less than 2πr. The angles of a triangle are less than 180 degrees in hyperbolic space.
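For concreteness, the circle statement can be written with the curvature radius R (standard constant-curvature formulas, quoted for reference):

 

[math]C_{hyperbolic} = 2\pi R \sinh(r/R) > 2\pi r, \qquad C_{spherical} = 2\pi R \sin(r/R) < 2\pi r[/math]

 

with both reducing to [math]2\pi r[/math] as R goes to infinity, i.e. in the flat case.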

 

There can be no doubt or obfuscation. In GR, energy density increases curvature of space in a positive direction.

 

EDIT: Something is not working. I tried to insert an animation, but it doesn't show up, and the quoting doesn't work either. Did I do something wrong?

 

Your tag was missing a forward slash. I've edited it.

 

Not sure about the animation. You might report the problem in the user feedback forum.

 

~modest


There can be no doubt or obfuscation. In GR, energy density increases curvature of space in a positive direction.

~modest

 

 

Modest, there is still a misunderstanding.

 

According to GR, mass-energy density increases curvature of spacetime in a 'negative' direction (hyperbolically); not a 'positive' direction as you write.

 

That is a displacement away from linearity. It is a 'positive' displacement in one sense, but the result is hyperbolic curvature of the manifold. It is not spherical in the non-Euclidean sense (it's not comparable to the surface of a sphere in reduced dimensions).

 

We agree that "negative curvature" means curved hyperbolically, like a saddle in reduced dimensions, or trumpet-bell-like (rather than like the surface of a sphere, which has positive curvature). However, negative curvature corresponds to an attractive force; a positive curvature corresponds to a repulsive force.

 

In general relativity the attractive force of gravity created (or induced) by matter and energy is due to a negative curvature of spacetime, represented, however poorly, by the 'rubber sheet analogy' with the negatively-curved (trumpet-bell-like) dip in the sheet. (Source)

 

 

Note: we are avoiding the cosmological constant for the time being. That is because the hypothesis of a stationary universe doesn't particularly require one, or at least does not require one with a nonzero value (I'll come back to this).

 

Regardless, one could argue, for example, that an anti de Sitter space is a general relativity like spacetime, where in the absence of matter or energy, the curvature of spacetime is naturally hyperbolic. (Same source). But our universe obviously contains matter and energy. The question is whether that hyperbolicity should be added on to the gravitational hyperbolicity, compounding the relative effect, via the observational point of view.

 

 

You would be correct that for a classically expanding (non-accelerating) universe described by a smooth, homogeneous and isotropic (Friedmann-Robertson-Walker model), geometry, curvature and thus destiny, are inextricably linked. Whether the spatial curvature is positive, zero (flat), or negative correlates one-to-one with whether the expansion is closed (bounded), critical (asymptotically static), or open (unbounded). Though, Einstein's field equations which govern the expansion involve not only the amount of energy density but the entire energy-momentum contribution. So there would be (and is) a loophole to escape destiny. (Source)

 

Similarly, there is a loophole in an accelerating universe.

 

In a stationary universe there is no loophole. Observations determine the degree of curvature of the manifold, which depend on the mass-energy density of the homogenous and isotropic space, of not just the observable universe, but beyond (i.e., the curvature is proportional to the total mass-energy density of the universe, not solely to the section we observe). And that curvature of the manifold can only be "negative," that is to say, hyperbolic, from the reference frame of any observer. That view would seem to agree with observations.

 

 

CC


There can be no doubt or obfuscation. In GR, energy density increases curvature of space in a positive direction.

 

 

 

It may seem redundant to pound the nail in further still, but these are some of the most fundamental relationships in nature, if not the most important, that are conceptually not yet properly fastened.

 

Here is a visual example of gravitational curvature in accord with general relativity. Note that this is not a positive, spherical curvature. It is a 'negative' or hyperbolic curvature of the manifold.

 

 

 

 

 

Source: Relativity in Curved Spacetime, Eric Baird

 

 

_____________

 

 

I think this is where the misunderstanding originates: In the standard expanding model the curvature parameter takes on values +1, 0, or –1 for positive curvature, Euclidean flatness, and negative curvature, and is usually related to the scale factor or size of the universe and redshift z. It could be argued that this is an archaic way of looking at curvature, and more generally, of looking at the universe.

 

Whether we are examining the gravitational field of the earth, or a section extending to the outer reaches of the universe, and if general relativity is operational both locally and globally, then gravitational curvature always has the same sign. This verifiable phenomenon is model independent.

 

 

To be continued...

 

 

 

CC

