Science Forums

SNe Ia, Implications, Interpretations, Lambda-CDM...


coldcreation


It is interesting that you are using a paper that has problems with the Taylor series expansion of the LCDM model for [math]D_L[/math] and is proposing changes to Hubble's law. The FLS model does not use a Taylor series.

 

Any observable distance measure is a Taylor series in redshift z. Hubble’s law is a Taylor series. How could yours not be? Look at your equation 4 - the first step in properly deriving that is doing a Taylor series expansion of a(t), or equivalently an expansion about z = 0. Are you saying that’s not what you did?
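To illustrate the point (a sketch of standard FLRW kinematics in Python/sympy, not taken from either paper): here is the well-known low-redshift expansion of the luminosity distance, whose first-order truncation is exactly Hubble's law.

[code]
import sympy as sp

z, c, H0, q0 = sp.symbols("z c H_0 q_0", positive=True)

# Standard low-z expansion of the FLRW luminosity distance:
#   d_L = (c/H0) * [ z + (1 - q0)/2 * z^2 + O(z^3) ]
d_L = (c / H0) * (z + (1 - q0) / 2 * z**2)

# Truncating after the first-order term gives Hubble's law, d = c*z/H0:
print(sp.series(d_L, z, 0, 2))   # -> c*z/H_0 + O(z**2)
[/code]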

 

-modest


Any observable distance measure is a Taylor series in redshift z. Hubble’s law is a Taylor series. How could yours not be? Look at your equation 4 - the first step in properly deriving that is doing a Taylor series expansion of a(t), or equivalently an expansion about z = 0. Are you saying that’s not what you did?

 

-modest

 

That equation of the FLS model does not use a Taylor series, yet the FLS model matches the SNe Ia Hubble diagram, the galaxy counts of the Durham group, the Hubble constant of the Sandage Consortium, and the flatness of the CMB. Go figure!

 

For a coasting universe, the deceleration distance as defined in the paper you are using is equal to the proper distance of my model. I call it the proper distance because it equals the speed of light times the proper time. The proper distance that you referred to is my dilated distance, because the dilated distance is what we see when we observe sources in space. As it turns out, for a coasting universe the dilated distance that we see is also the actual distance to the source, but we can only infer this from the model of a coasting universe.

By the way, there are two competing teams for the Hubble constant: the Sandage Consortium (mean Hubble constant of 55-57 with a long distance scale) and the HST Key Project (mean Hubble constant of 71-73 with a short distance scale). Direct methods (Bonanos et al. 2006, and others) support the Sandage Consortium. The FLS model supports the Sandage Consortium.
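To make the distinction concrete, here is a minimal Python sketch of the two distance measures (it assumes the dilated-distance relation [math]D=c{H_0}^{-1}z[/math] derived later in this thread; the redshifts are illustrative values only):

[code]
c = 299792.458            # speed of light, km/s
H0 = 56.96                # Hubble constant, km/s per Mpc (Sandage-type value)

def dilated_distance(z):
    """D = c*z/H0, in Mpc: the distance we actually observe."""
    return c * z / H0

def proper_distance(z):
    """D0 = D/(1+z), in Mpc: c times the photons' proper travel time."""
    return dilated_distance(z) / (1.0 + z)

for z in (0.1, 0.5, 1.0):
    print(f"z = {z}: D = {dilated_distance(z):7.1f} Mpc, "
          f"D0 = {proper_distance(z):7.1f} Mpc")
[/code]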


That equation of the FLS model does not use a Taylor series, yet the FLS model matches the SNe Ia Hubble diagram, the galaxy counts of the Durham group, the Hubble constant of the Sandage Consortium, and the flatness of the CMB. Go figure!

 

For a coasting universe...

 

I see. You want a linear evolution of the scale factor. In other words a coasting universe or one where the scale factor does not depend on gravity. I’m sorry I missed that - I think I was concentrating only on your proof against the FLRW supernova distance.

 

I’m personally fascinated by the concordance of freely coasting cosmology. There is a good description here:

http://arxiv.org/PS_cache/astro-ph/pdf/0306/0306448v1.pdf

 

Such models are not in the family of GR models - but certainly attractive for their concordance nonetheless.

 

-modest


I see. You want a linear evolution of the scale factor. In other words a coasting universe or one where the scale factor does not depend on gravity. I’m sorry I missed that - I think I was concentrating only on your proof against the FLRW supernova distance.

 

I’m personally fascinated by the concordance of freely coasting cosmology. There is a good description here:

http://arxiv.org/PS_cache/astro-ph/pdf/0306/0306448v1.pdf

 

Such models are not in the family of GR models - but certainly attractive for their concordance nonetheless.

 

-modest

 

modest, thanks for the paper. It is not my model. I find the discussion of nucleosynthesis very interesting and I will spend some time studying it in detail.

 

My model is a coasting model in the general class of Friedmann-Lemaitre solutions, which means that FLS falls within the theory of general relativity.


I see. You want a linear evolution of the scale factor. In other words a coasting universe or one where the scale factor does not depend on gravity. I’m sorry I missed that - I think I was concentrating only on your proof against the FLRW supernova distance.

 

I’m personally fascinated by the concordance of freely coasting cosmology. There is a good description here:

http://arxiv.org/PS_cache/astro-ph/pdf/0306/0306448v1.pdf

 

Such models are not in the family of GR models - but certainly attractive for their concordance nonetheless.

 

-modest

 

 

It seems to me that the flatness (or fine-tuning) problem, despite what the authors of the link above state, is still alive and kicking in their Concordant “Freely Coasting” Cosmology, now more than ever.

 

The flatness problem is directly related to initial conditions, the unknown mass-energy density of the universe, the rate of expansion or Hubble law (the redshift-distance relation), and the geometric curvature or topological structure of the universe at large. An amazing amount of literature is available on this problem: testimony to its importance. It would be very surprising if the authors were correct in their statement:

"As a matter of fact' date=' a linearly evolving model is the only power law model that has neither a particle horizon nor a cosmological event horizon. Linear evolution is also purged of the flatness or the fine tuning problem. The scale factor in such theories does not constrain the matter density parameter. The Linear coasting characteristic of a Newtonian cosmology can be dynamically generated. [...'] A non-minimally coupled scalar field then produces an effective repulsive gravitation that quickly constrains the universe to a linear coasting.

 

Indeed, it seems, as a matter of fact, that the conjecture of a linearly evolving power law model is a direct source of the flatness or the fine-tuning problem. There is no way to dynamically generate linear coasting in Newtonian cosmology (that is why Newton attributed the observed fine tuning to Deity). The "non-minimally coupled scalar field" that "produces an effective repulsive gravitation" is as chimeric as DE, CDM (or the C-field or iron whiskers of QSSC).

 

Unless we learn something about the initial conditions at ground zero (how the big bang produced an expansion rate at precisely the critical rate), which seems unlikely since they will forever remain hidden behind a kind of event horizon, this problem is here to stay.

 

The beauty of LCDM, arguably, is that there simply is no fine tuning, so it's no longer a problem. That is about the only thing of beauty related to LCDM however (something rather unfortunate).

 

 

 

 

CC


It seems to me that the flatness (or fine-tuning) problem, despite what the authors of the link above state, is still alive and kicking in their Concordant “Freely Coasting” Cosmology, now more than ever.

 

The flatness problem is directly related to initial conditions, the unknown mass-energy density of the universe, the rate of expansion or Hubble law (the redshift-distance relation), and the geometric curvature or topological structure of the universe at large. An amazing amount of literature is available on this problem: testimony to its importance. It would be very surprising if the authors were correct in their statement:

 

I agree.

 

Decoupling parameters such as the scale factor is an unconvincing and rather harsh solution to fine-tuned parameters.

 

I suppose it would be correct to say one problem is being traded for another.

 

Between these two models, we either need to know why the universe chose Omega = 1, or how expansion could be unrelated to other factors. In either case, it would seem there is more to learn.

 

-modest


modest, thanks for the paper. It is not my model.

 

I didn't think so

 

 

I find the discussion of nucleosynthesis very interesting and I will spend some time studying it in detail.

 

Offhand, I would think there is way too much time spent at too high a temperature for proper nucleosynthesis. I believe the amount of time is constrained by the ratio of neutrons to protons. If the era were protracted, all the neutrons, with their larger mass, would have spontaneously decayed into protons.
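As a rough illustration of that constraint (a sketch assuming only the measured free-neutron mean lifetime of about 880 s; not an actual nucleosynthesis calculation):

[code]
import math

TAU_N = 880.0   # mean lifetime of a free neutron, seconds (measured value)

def neutron_fraction(t):
    """Fraction of free neutrons surviving after t seconds."""
    return math.exp(-t / TAU_N)

for t in (180, 880, 3600, 10000):
    print(f"after {t:>5} s: {neutron_fraction(t):.3f} of free neutrons remain")
[/code]

After a few thousand seconds essentially no free neutrons are left, so a much-protracted hot era leaves nothing to fuse.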

 

My model is a coasting model in the general class of Friedmann-Lemaitre solutions, which means that FLS falls within the theory of general relativity.

 

In order to satisfy general relativity, the rate of expansion must be a function of the radiation, matter, or any other energy content that shapes the metric. For a Friedmann solution it must be a function of pressure and density, and it depends explicitly on time. Unless your model is devoid of matter, it wouldn't coast freely.
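To spell that out (a symbolic sketch of the standard Friedmann acceleration equation with [math]\Lambda=0[/math], not something from either paper): coasting means [math]\ddot{a}=0[/math], which forces [math]\rho+3p/c^2=0[/math], a condition ordinary matter (p = 0, ρ > 0) cannot satisfy.

[code]
import sympy as sp

rho, G, c = sp.symbols("rho G c", positive=True)
p = sp.symbols("p", real=True)

# Friedmann acceleration equation with Lambda = 0:  addot/a
accel = -sp.Rational(4, 3) * sp.pi * G * (rho + 3 * p / c**2)

# Coasting means addot = 0; solve for the pressure this requires:
print(sp.solve(sp.Eq(accel, 0), p))   # -> [-c**2*rho/3], i.e. w = -1/3
[/code]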

 

If you insist your model has a linearly evolving scale factor that is defined in the usual way then please tell me its parameters at redshift 0, 1, and 2:

 

pressure, density, scale factor, Hubble constant, Hubble parameter, curvature K, Lambda

 

I don't see this being done - certainly not with realistic values (or any values of pressure and density).

 

-modest


I didn't think so

 

If you insist your model has a linearly evolving scale factor that is defined in the usual way then please tell me its parameters at redshift 0, 1, and 2:

 

pressure, density, scale factor, Hubble constant, Hubble parameter, curvature K, Lambda

 

I don't see this being done - certainly not with realistic values (or any values of pressure and density).

 

-modest

 

modest, Lambda is zero, and the Hubble constant is 56.96 km/s per Mpc, which translates to a density of 3.65 equivalent proton masses per cubic meter. To obtain the past density at a particular redshift, multiply the current density by [math](1+z)^3[/math]. The Hubble flow velocity is [math]cz[/math], which means that the redshift of a source due to expansion will remain constant as the Universe expands. The model is flat and expanding, and there is no curvature.
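As a quick consistency check (a Python sketch with standard constants; the assumption that the quoted density is the critical density for this Hubble constant, and that the coasting age is 1/H0, is mine, not sam's stated derivation):

[code]
import math

G   = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.6726e-27     # proton mass, kg
Mpc = 3.0857e22      # metres per megaparsec
Gyr = 3.156e16       # seconds per gigayear

H0 = 56.96 * 1000.0 / Mpc                      # 56.96 km/s/Mpc in s^-1

rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)   # critical density, kg/m^3
print(rho_crit / m_p)    # ~3.65 proton masses per cubic metre, as quoted

# For a coasting (a ~ t) universe the age is simply 1/H0:
print(1.0 / H0 / Gyr)    # ~17.16 Gyr, the age quoted below
[/code]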


The model is flat and expanding, and there is no curvature.

 

With this in mind, I set [math]k=1[/math] and [math]\Lambda=0[/math]

 

:confused:

 

which translates to a mean age of the universal expansion of 17.16 billion years.

 

Solving for your parameters above and making no other assumptions gives:

 

For Ho = 56.96, OmegaM = 1.000, Omegavac = -0.000, z = 1.000

 

  • It is now 11.442 Gyr since the Big Bang.

  • The age at redshift z was 4.045 Gyr.

  • The light travel time was 7.398 Gyr.

  • The comoving radial distance, which goes into Hubble's law, is 3082.9 Mpc or 10.055 Gly.

  • The comoving volume within redshift z is 122.738 Gpc³.

  • The angular size distance DA is 1541.5 Mpc or 5.0276 Gly.

  • This gives a scale of 7.473 kpc/".

  • The luminosity distance DL is 6165.9 Mpc or 20.110 Gly.

source
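Those numbers are just the closed-form Einstein-de Sitter relations evaluated at z = 1; here is a minimal Python sketch (standard textbook formulas, nothing from sam's paper) that reproduces them:

[code]
import math

c  = 299792.458      # km/s
H0 = 56.96           # km/s/Mpc
z  = 1.0
t_H = 977.8 / H0     # Hubble time in Gyr (977.8 Gyr per inverse km/s/Mpc)

t0  = (2.0 / 3.0) * t_H                                  # age now:      ~11.44 Gyr
t_z = t0 * (1.0 + z) ** -1.5                             # age at z=1:   ~4.05 Gyr
d_C = 2.0 * (c / H0) * (1.0 - 1.0 / math.sqrt(1.0 + z))  # comoving:     ~3083 Mpc
d_A = d_C / (1.0 + z)                                    # angular size: ~1542 Mpc
d_L = d_C * (1.0 + z)                                    # luminosity:   ~6166 Mpc

print(t0, t_z, d_C, d_A, d_L)
[/code]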

 

Are your parameters not the same as the Einstein-de Sitter model? Perhaps the parameters are the same but you solved things differently than they did?

 

-modest


Solving for your parameters above and making no other assumptions gives:

 

For Ho = 56.96, OmegaM = 1.000, Omegavac = -0.000, z = 1.000

 

  • It is now 11.442 Gyr since the Big Bang.

  • The age at redshift z was 4.045 Gyr.

  • The light travel time was 7.398 Gyr.

  • The comoving radial distance, which goes into Hubble's law, is 3082.9 Mpc or 10.055 Gly.

  • The comoving volume within redshift z is 122.738 Gpc³.

  • The angular size distance DA is 1541.5 Mpc or 5.0276 Gly.

  • This gives a scale of 7.473 kpc/".

  • The luminosity distance DL is 6165.9 Mpc or 20.110 Gly.

source

 

Are your parameters not the same as the Einstein-de Sitter model? Perhaps the parameters are the same but you solved things differently than they did?

 

-modest

 

This doesn't make sense. Before 1998 the universe was thought to be 15 Gyr old. That was with the favored Friedmann flat critical coasting model. Then, after the SNe Ia results, the age had to be lowered to 13.7 Gyr (because of acceleration). In Bigsam's model the universe cannot be younger than in the LCDM model. So your 11.4 Gyr since the big bang must be wrong.

 

Unless, of course, the universe is getting younger with time.

:D

 

 

CC


:D

 

 

 

Solving for your parameters above and making no other assumptions gives:

 

For Ho = 56.96, OmegaM = 1.000, Omegavac = -0.000, z = 1.000

 

  • It is now 11.442 Gyr since the Big Bang.

  • The age at redshift z was 4.045 Gyr.

  • The light travel time was 7.398 Gyr.

  • The comoving radial distance, which goes into Hubble's law, is 3082.9 Mpc or 10.055 Gly.

  • The comoving volume within redshift z is 122.738 Gpc³.

  • The angular size distance DA is 1541.5 Mpc or 5.0276 Gly.

  • This gives a scale of 7.473 kpc/".

  • The luminosity distance DL is 6165.9 Mpc or 20.110 Gly.

source

 

Are your parameters not the same as the Einstein-de Sitter model? Perhaps the parameters are the same but you solved things differently than they did?

 

-modest

 

I do not use the Einstein-de Sitter model, so my parameters are not the same as the Einstein-de Sitter model's. I specifically said this in an earlier post, and I described how I solve the Friedmann-Lemaitre metric. I obtain an age of 17.16 Gyr for the expansion.


This doesn't make sense. Before 1998 the universe was thought to be 15 Gyr old. That was with the favored Friedmann flat critical coasting model. Then, after the SNe Ia results, the age had to be lowered to 13.7 Gyr (because of acceleration). In Bigsam's model the universe cannot be younger than in the LCDM model. So your 11.4 Gyr since the big bang must be wrong.

 

So long as you stay at the critical density, more Lambda and less matter makes for an older universe. Of course, lowering the Hubble constant will also make it older.

 

**edit** I guess this would be true whether we stay at the critical density or not.

 

-modest


My solution does not have the 2/3 value of the Einstein-de Sitter solution in it, because I transform the dilated time derivative into a proper time derivative using the chain rule.

 

An empirical way to obtain the dilated distance for a coasting universe follows.

[math]D_0[/math] is the proper distance between source and observer.
[math]D=(1+z)D_0[/math] is the dilated distance between source and observer for a coasting universe.
[math]{\Delta}t_0=c^{-1}D_0[/math] is the proper time that photons travel from source to observer while the Universe is coasting.
[math]v_H=(D-D_0)/{\Delta}t_0[/math] is the Hubble flow velocity for a coasting universe.

Substituting the above definitions into the Hubble-flow-velocity equation yields [math]v_H=cz[/math]. The Hubble law is [math]v_H=H_0D[/math]. Equating the two equations yields [math]D=c{H_0}^{-1}z[/math]. (This is the same dilated-distance equation that was obtained from the FLS solution of the Friedmann-Lemaitre metric for a flat, coasting universe.) Substituting this equation into Equation (4) of the original post yields an empirical match of the SNe Ia Hubble diagram for a Hubble constant equal to 56.96 km/s per Mpc, without using general relativity. The only assumptions were that the Universe is coasting and photon energy is conserved.
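A quick symbolic check of that algebra (a sympy sketch using the definitions above verbatim):

[code]
import sympy as sp

z, D0, c, H0 = sp.symbols("z D_0 c H_0", positive=True)

D   = (1 + z) * D0                 # dilated distance
dt0 = D0 / c                       # proper photon travel time
v_H = sp.simplify((D - D0) / dt0)  # Hubble flow velocity
print(v_H)                         # -> c*z

# Equate with Hubble's law v_H = H0*D and solve for the dilated distance:
D_dil = sp.Symbol("D", positive=True)
print(sp.solve(sp.Eq(c * z, H0 * D_dil), D_dil))   # -> [c*z/H_0]
[/code]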


... The only assumptions were that the Universe is coasting and photon energy is conserved.

 

Shouldn't it be determined empirically whether the universe is coasting with photon energy conservation, rather than assuming these initial conditions?

 

 

Here are a few more questions regarding your model:

 

 

Why shouldn't the luminosity distance have a component due to the effective luminosity being reduced in an expanding universe (creating the impression that luminosity distance is larger than the actual distance)?

 

Why should a reduction in photon density lead to two possible interpretations for effective luminosity in expanding space? How can it be determined empirically which interpretation is correct?

 

You wrote that "fewer photons per second are crossing over the source-centered spherical boundary at an observer than the original number of photons per second that were emitted from the source." Are you saying this is not accounted for in the LCDM model?

 

Why wouldn't there be a reduction in the effective luminosity relative to intrinsic luminosity providing an accurate distance measurement consistent with LCDM?

 

 

Antigravity:

 

Where does the "symmetry between gravity and antigravity" come from in your model? What is antigravity if it is not lambda? Is this latter concept based on string theory alone? Why, if antigravity really exist, would its value precisely cancel gravity?

 

You write that antigravity provides the solution to the flatness problem and the horizon problem, but given the speculative nature of antigravity (i.e., it may not exist), doesn't that leave your model with a large question mark as to its viability?

 

 

Last, but not least, you write that there is a density of 3.65 equivalent proton masses per cubic meter. I thought there was only one atomic hydrogen mass per cubic meter. Where does this discrepancy come from?

 

 

:)

 

 

 

CC


You wrote that "fewer photons per second are crossing over the source-centered spherical boundary at an observer than the original number of photons per second that were emitted from the source." Are you saying this is not accounted for in the LCDM model?

 

This reminds me. I found a source contradicting sam's proof from post #90.

 

It shows explicitly that the flux equation used,

[imath]F=L\,dt'\,(R/R_0)\left(4\pi(r_1R_0)^2\right)^{-1}/dt[/imath] (as shown in post #99), to get the needed relationships

[imath](1+z)=R_0/R_1,\;d_A=R_1r_1,\;d_M=R_0r_1,\;d_L={R_0}^2r_1/R_1[/imath]

and, more importantly,

[imath](1+z)^2d_A=(1+z)d_M=d_L[/imath]

as in Carroll 1992, equation 41, and post #99, is not making the mistake Sam says it is.

 

The rationale behind using it is demonstrated in this link:

Google book where it is shown that no "dilated reduction" is being made to its distance equation 11.18.
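As a numeric sanity check of that chain of relations, plugging in the z = 1 Einstein-de Sitter distances quoted earlier in the thread:

[code]
# EdS values (Mpc) from the calculator output quoted above, at z = 1:
z, d_A, d_M, d_L = 1.0, 1541.5, 3082.9, 6165.9

print((1 + z) ** 2 * d_A)   # 6166.0
print((1 + z) * d_M)        # 6165.8
print(d_L)                  # 6165.9 -- all equal to within rounding
[/code]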

 

-modest


The rationale behind using it is demonstrated in this link:

Google book where it is shown that no "dilated reduction" is being made to its distance equation 11.18.

 

-modest

 

The problem that you have with this Google book source is: where does the photon energy go, and what is the mechanism that causes the energy to be lost? What changes the momentum of the light? Are we bringing back the tired-light theory in disguise, or some sort of unknown matter that is causing the light to scatter, similar to Compton scattering? Since spatial expansion is stretching the wavelength of the photon, it seems reasonable that the photon energy is reduced along with the stretching wavelength. This makes more sense to me than just assuming that the photon energy is lost with no mechanism to explain the loss.
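As a back-of-the-envelope illustration of that last point (standard [math]E=hc/\lambda[/math]; the 500 nm emitted wavelength is just an assumed example value):

[code]
h = 6.626e-34        # Planck constant, J s
c = 2.998e8          # speed of light, m/s

z        = 1.0
lam_emit = 500e-9                  # emitted wavelength, m (example value)
lam_obs  = (1.0 + z) * lam_emit    # wavelength stretched by expansion

E_emit = h * c / lam_emit
E_obs  = h * c / lam_obs
print(E_obs / E_emit)   # 0.5 = 1/(1+z): energy falls with the stretched wavelength
[/code]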

