
SNe Ia, Implications, Interpretations, Lambda-CDM...


coldcreation

Recommended Posts

All physics is contingent, and a healthy amount of skepticism is always wise. Other explanations are possible, although all of the alternative explanations I have seen have problems (see Ned Wright's website). I have remained conservative in my critique of the standard model. Staying within the general class of Friedmann-Lemaitre solutions, I noticed a problem with the standard model's use of the energy-flux equation. When I corrected the problem, the resulting Friedmann-Lemaitre solution agreed with three areas of physical observation. IMHO this has strengthened the case for the Big Bang.

 

 

 

What are the three areas of physical observation that your FL solution is in agreement with?

 

If I understand, you reduce the dark energy content, but do you eliminate it? What about cold dark matter?

 

Anything that could get rid of DE & CDM would be an improvement - strengthening the case for the BB.

 

 

For conservation of photon energy as space expands, and for a coasting universe, the correct energy-flux equation should be

 

[math]F=\frac{L}{4\pi D^2}=\frac{L_0(1+z)^{-1}}{4\pi\left(cH_0^{-1}z\right)^2}[/math]

 

Dividing the energy-flux equation by the reference energy flux, then taking the log of the energy-flux ratio, then multiplying by -2.5 leads to Equation (4) in the original post.
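For anyone who wants to play with that recipe numerically, here is a minimal sketch (Python). It only implements the flux-ratio-to-magnitude step described above; the reference redshift, the H0 value and the units are illustrative assumptions, and it is not a reproduction of Equation (4) itself.

[code]
import numpy as np

# Minimal numerical sketch of the recipe described above, not Equation (4) itself.
# H0 and the reference redshift are illustrative assumptions.
H0 = 56.96 * 1000 / 3.0857e22    # 56.96 km/s/Mpc converted to 1/s
c = 2.998e8                      # speed of light, m/s
L0 = 1.0                         # intrinsic luminosity, arbitrary units

def flux_coasting(z):
    """F = L0 (1+z)^-1 / [4 pi (c z / H0)^2], the coasting-universe flux above."""
    D = c * z / H0
    return L0 / (1.0 + z) / (4.0 * np.pi * D**2)

def delta_mag(z, z_ref=0.01):
    """Divide by a reference flux, take log10, multiply by -2.5."""
    return -2.5 * np.log10(flux_coasting(z) / flux_coasting(z_ref))

for z in (0.1, 0.5, 1.0):
    print(f"z = {z:4.2f}  ->  Delta m = {delta_mag(z):6.2f} mag")
[/code]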

 

 

 


 

 

 

 

CC


Sam,

 

It is a pleasure. I am a novice at GR so go easy on me.

 

How am I not allowed to compare observed flux to intrinsic luminosity here:

[math]F=\frac{L_0}{4\pi{D_L}^2}[/math]

but I can compare observed luminosity to intrinsic luminosity here?:

[math]L=L_0(1+z)^{-1}[/math]

 

Algebraically and conceptually I see no difference.

 

-modest


Sam,

 

It is a pleasure. I am a novice at GR so go easy on me.

 

How am I not allowed to compare observed flux to intrinsic luminosity here:

[math]F=\frac{L_0}{4\pi{D_L}^2}[/math]

but I can compare observed luminosity to intrinsic luminosity here?:

[math]L=L_0(1+z)^{-1}[/math]

 

Algebraically and conceptually I see no difference.

 

-modest

 

There is nothing wrong with what you have done, as long as you understand that the problem is when [math]D_L[/math] is solved for in the FL metric instead of [math]D[/math]. The FL metric should solve for the actual dilated distance, not the luminosity distance. The luminosity distance has a component due to the effective luminosity being reduced because of the expansion of space; therefore, the luminosity distance is larger than the actual distance.
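Just to make the algebra explicit (this simply combines the two relations quoted above; how the book arrives at it may differ):

[math]F=\frac{L}{4\pi D^2}=\frac{L_0(1+z)^{-1}}{4\pi D^2}=\frac{L_0}{4\pi {D_L}^2}\;\;\Rightarrow\;\; D_L=D\sqrt{1+z}[/math]

so on this reading the luminosity distance exceeds the actual distance by a factor of (1+z)^1/2.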


What are the three areas of physical observation that your FL solution is in agreement with?

 

If I understand, you reduce the dark energy content, but do you eliminate it? What about cold dark matter?

 

Anything that could get rid of DE & CDM would be an improvement - strengthening the case for the BB.

 

 

 

 


 

 

 

 

CC

 

Actually there are four areas

 

1. Agreement with SNe Ia Hubble diagram

2. Agreement with the galaxy-count surveys of the Durham group.

3. Agreement with the CMB results from WMAP.

4. Agreement with the Hubble constant of the Sandage consortium, obtained using Cepheid variables and other methods. (May the great Allan Sandage forgive me for not thinking of his work.)

 

Certainly, dark energy as a cause of acceleration has vanished in the model. However, due to the dilation effect that the expansion of space has on the image an observer sees, a dilated mass/energy of the source image occurs. This dilated mass/energy (dark mass/energy) is an effect of expanding space, not a cause of acceleration.

 

This discussion about dark matter in galaxies involves some speculation; more research needs to be done in this area by the Durham group. As far as dark matter in galaxies is concerned, the dark matter is real, but I suspect that it is baryonic matter seen in the K-band (2.2 microns). The reasoning goes like this. From the Durham counts, there are far more galaxies in the K-band galaxy-count surveys than in all the other color-band galaxy counts combined. The stars and near-stars of the K-band require far more mass for the same luminosity than stars in the other color-bands produce. From the proposed model and the K-band galaxy counts, the median luminosity of K-band galaxies is approximately 100 billion solar luminosities. Also, from the K-band galaxy counts and the mass from my model, the median mass of a K-band galaxy is approximately 1.33 trillion solar masses (this number may be reduced by approximately 10% to account for the other color-band stars). Thus, the median K-band mass is roughly 12 times larger than the luminosity alone would indicate. Gravitational lensing indicates that this mass-to-luminosity ratio is more or less close to what is observed. One caveat should be pointed out: unlike the other color-band galaxies, K-band galaxy counts start falling off rapidly around a median redshift of 0.2, indicating that K-band stars started forming later than the stars of the other color-band galaxies.
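A quick arithmetic check of the numbers quoted above (solar units; the ~10% correction is applied as stated):

[code]
# Check of the K-band mass-to-light figure quoted above (solar units).
median_L = 1.0e11                      # median K-band galaxy luminosity
median_M = 1.33e12                     # median K-band galaxy mass
median_M_corrected = median_M * 0.9    # ~10% reduction for other color-band stars

print(median_M_corrected / median_L)   # ~12, the mass-to-light ratio quoted
[/code]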


 

Adjusting the Hubble constant in Equation (4) to fit the SNe Ia Hubble diagram data results in a mean global value of 56.96 km/s per Mpc, which translates to a mean age of the universal expansion of 17.16 billion years. This supports the Hubble constant work of the Sandage consortium (Sandage et al. 2006). The space of a coasting universe is flat, unbounded and expanding, not accelerating.

 

Actually there are four areas

 

1. Agreement with SNe Ia Hubble diagram

2. Agreement with the galaxy-count surveys of the Durham group.

3. Agreement with the CMB results from WMAP.

4. Agreement with the Hubble constant of the Sandage consortium, obtained using Cepheid variables and other methods. (May the great Allan Sandage forgive me for not thinking of his work)...

 

Actually, there is another area that your FL solution is in agreement with:

 

5. The age of stars vs. the age of the universe.

 

With your 17.16 Gyr old universe, the age of the oldest stars seems reconciled, at least according to "R-Process Abundances and Chronometers in Metal-Poor Stars" by J. J. Cowan et al., in which the theoretical ratio suggests an average age for two metal-poor stars of approximately 15.6 +/- 4.6 Gyr (between 11 Gyr and 20.2 Gyr), consistent with earlier radioactive age estimates.

 

Arguably, a 13.7 Gyr old universe model had great difficulty absorbing this kind of data (without assuming extraordinarily rapid star formation from primordial density fluctuations post-BB).
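(For reference, the 17.16 Gyr figure quoted above is just the reciprocal of the quoted Hubble constant, since a coasting universe has age 1/H0; a quick check:)

[code]
# Age of a coasting universe: simply 1/H0.
H0_km_s_Mpc = 56.96
km_per_Mpc = 3.0857e19
s_per_Gyr = 3.1557e16

age_Gyr = (km_per_Mpc / H0_km_s_Mpc) / s_per_Gyr
print(f"1/H0 = {age_Gyr:.2f} Gyr")     # -> ~17.17 Gyr
[/code]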

 

 

 

CC


 

I have yet to find a paper on this exact solution. Perhaps I should write one myself.

 

The person's name completely escapes me at the moment, and this may not be accurate, as I read it many years ago and my memory of it is incomplete at best, but:

 

The very first person to discover "the de Sitter effect" was ____. He did so by analyzing how much of one star's light would be received by another star. He found that the light dropped off non-linearly in de Sitter's metric, and this was eventually called the de Sitter effect - or maybe he named it that (I don't know).

 

It seems to me that what he must have worked out was a brightness to distance function for de Sitter's original model. He was not solving for redshift, I remember that much.

 

If apparent and intrinsic brightness are related very differently under "de Sitter time," then perhaps we could get hold of that paper. If I am right, it would have exactly what is needed.

 

-modest

 

I was under the impression that the first person to discover "the de Sitter effect" was de Sitter himself.

 

Or maybe you were thinking of von Laue, Lanczos, McVittie, Slipher, Milne, Mie, Weyl or Eddington? The latter seems most likely. I'll see if I can find something.

 

 

 

CC


I was under the impression that the first person to discover "the de Sitter effect" was de Sitter himself.

 

Or maybe you were thinking of von Laue, Lanczos, McVittie, Slipher, Milne, Mie, Weyl or Eddington? The latter seems most likely. I'll see if I can find something.

 

I've decided my memory of the paper I looked at some years ago is flawed. I apologize for bringing it up and sharing my confusion with others.

 

But, thank you CC

 

-modest


Now, for a static universe, [math]L=L_0[/math] and [math]D=D_L[/math]; therefore, the energy-flux equation for a static universe is

 

 

[math]F=\frac{L_0}{4\pi{D_L}^2}[/math]

 

Equation (1) is the energy-flux equation used in the standard model (Riess et al. 2004; Perlmutter & Schmidt 2003). Equation (1) leads to the distance modulus equation:

 

Sam,

 

For our purposes this equation works fine.

 

A supernova in static space delivers an equal number of photons to Earth as an equally distant supernova in expanding space. The photons will be redshifted and time dilated, but equal in number. Because supernovae only last so long and have a characteristic light curve, this can be, and is, accounted for.

 

-modest


I'm sorry, I missed some of the preliminaries on the equation that indicated the calculation for [math]D_L[/math].

 

However, don't the K-corrections to the supernova observations take into account flux changes due to redshift?

 

Yeah, 'K-correction' - that's the term I was looking for in my last post. You and I were thinking the same thing. It would sure seem to me it would negate any time-dilated-flux issues.

 

-modest


...

Certainly, dark energy as a cause of acceleration has vanished in the model. However, due to the dilation effect that the expansion of space has on the image an observer sees, a dilated mass/energy of the source image occurs. This dilated mass/energy (dark mass/energy) is an effect of expanding space, not a cause of acceleration.

...

 

Bigsam1965, have you published (or tried to publish) your work on this subject, aside from your book, in a refereed journal (e.g., ApJ or another journal, or on arXiv)? If so, where? If not, why not? It seems to me something that (if it checks out) may generate some fuss (for better or for worse).

 

After all, the problems of DE and non-baryonic DM have plagued modern cosmology for over 10 years now. If you can do something about them - as you claim - then wouldn't it be worthwhile disseminating the solution to a wider audience (e.g., those directly involved in the SNe Ia observations), in addition to science fora, of course?

 

 

CC


Bigsam1965, have you published (or tried to publish) your work on this subject, aside from your book, in a refereed journal (e.g., ApJ or another journal, or on arXiv)? If so, where? If not, why not? It seems to me something that (if it checks out) may generate some fuss (for better or for worse).

 

After all, the problems of DE and non-baryonic DM have plagued modern cosmology for over 10 years now. If you can do something about them - as you claim - then wouldn't it be worthwhile disseminating the solution to a wider audience (e.g., those directly involved in the SNe Ia observations), in addition to science fora, of course?

 

 

CC

 

CC, I spoke to an editor at ApJ, and he said it would be difficult for me, as a single author not in the field, to publish a 90-page article in ApJ, and I can see his point. He recommended, as a start, publishing the article as a book, and that is what I did. Essentially, throw it out there and see what happens. I also asked if they peer-review books, and he said they review only the articles published in ApJ. I plan to publish a short three- to four-page article on what we have been talking about on this forum. The article will first be published on arXiv and then possibly in ApJ. I have an experienced senior physics professor helping me meet the requirements of publishing in ApJ.


Sam,

 

For our purposes this equation works fine.

 

A supernova in static space delivers an equal number of photons to Earth as an equally distant supernova in expanding space. The photons will be redshifted and time dilated, but equal in number. Because supernovae only last so long and have a characteristic light curve, this can be, and is, accounted for.

 

-modest

 

That is the wrong way to look at redshift and photon density. Space is expanding, and just as galaxies are moving away from each other in expanding space, photons are also spreading out in all three directions as space expands. Therefore, in addition to the photon density reduction that occurs in a static universe as the photons spread out from the source in the transverse directions, an additional photon density reduction occurs as the photons travel through expanding space. This reduction in photon density leads to the two possibilities for effective luminosity in expanding space given in the original post. For effective luminosity, i.e. radiant power, the question is the rate at which photon energy reaches an observer. Obviously, all of the photons will reach the observer over a dilated time interval.
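To put numbers on how much dimming each factor of (1+z) contributes, independent of which bookkeeping is right, here is the bare magnitude arithmetic (each factor adds 2.5 log10(1+z) magnitudes):

[code]
import numpy as np

# Extra dimming, in magnitudes, from n factors of (1+z) in the flux,
# relative to a plain inverse-square law: delta_m = 2.5 * n * log10(1+z).
def extra_dimming_mag(z, n):
    return 2.5 * n * np.log10(1.0 + z)

for z in (0.1, 0.5, 1.0):
    print(f"z = {z:4.2f}:  one factor -> {extra_dimming_mag(z, 1):4.2f} mag,"
          f"  two factors -> {extra_dimming_mag(z, 2):4.2f} mag")
[/code]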


...

Certainly, dark energy as a cause of acceleration has vanished in the model. However, due to the dilation effect that the expansion of space has on the image an observer sees, a dilated mass/energy of the source image occurs. This dilated mass/energy (dark mass/energy) is an effect of expanding space, not a cause of acceleration. ...

 

 

What is the difference between the favored pre-1998 standard critical Friedmann model and your solution? As far as I can see, both models predict that space is flat, unbounded, expanding and coasting (non-accelerating).

 

How then do you explain the curve in figure 13 of Hubble Space Telescope Observations of Nine High-Redshift ESSENCE Supernovae?

 

[Edited to add:] I assume your answer will be that the curvature (interpreted at present (LCDM) as time dilation due to acceleration) is not real, since your interpretation requires a supplemental reduction in photon density as photons travel through expanding space (i.e., photons are also spreading out in all three directions), something you say, or imply, is not considered in the LCDM model. Would this mean that a third factor of (1 + z) is operational according to FLS? Is that not, then, a quadratic relationship for redshift-distance?

 

I was under the impression, as was modest, that the second factor of (1 + z) in the standard model took the expansion of space into consideration already. One factor comes because photons are degraded in energy by (1 + z) due to the redshift (regardless of its cause). The second factor of (1 + z) is attributed to the dilution in the rate of photon arrival due to the stretching of the path length in the travel time: i.e., due to expansion. This second factor would not be present if the redshift were not caused by the FL expansion. So according to the standard interpretation, a static universe has only one factor of (1 + z).
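In symbols, the standard bookkeeping I am describing is (flat space, with D_C the comoving distance):

[math]F=\frac{L_0}{4\pi {D_L}^2}=\frac{L_0}{4\pi (1+z)^2 {D_C}^2}[/math]

with one factor of (1 + z) for the photon-energy redshift and one for the dilution of the photon arrival rate.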

 

What is the difference between the second factor of (1 + z) due to time dilation in the LCDM models and the third factor (what you call "an additional photon density reduction [which] occurs as the photons travel through expanding space") in the FLS model? It seems the second factor of the LCDM models already has what you describe built in.

 

 

 

Another question: Recall that before the advent of inflation (exponential expansion or a false vacuum, or slow roll) the standard model was lurching from crisis to crisis, stricken with a host of well-known tribulations - notably the horizon problem, the flatness problem and to some extent the monopole problem. Without some form of repulsive cosmological constant, dark energy, negative pressure or false vacuum, how do you resolve these outstanding issues? It seems to me that your FLS model would just bring the problems back to the forefront. Is something amiss in my understanding?

 

 

 

Through the problems we learn much about the truth.

:)

 

 

 

CC


What is the difference between the favored pre-1998 standard critical Friedmann model and your solution? As far as I can see, both models predict that space is flat, unbounded, expanding and coasting (non-accelerating).

 

How then do you explain the curve in figure 13 of Hubble Space Telescope Observations of Nine High-Redshift ESSENCE Supernovae?

 

[Edited to add:] I assume your answer will be that the curvature (interpreted at present (LCDM) as time dilation due to acceleration) is not real, since your interpretation requires a supplemental reduction in photon density as photons travel through expanding space (i.e., photons are also spreading out in all three directions), something you say, or imply, is not considered in the LCDM model. Would this mean that a third factor of (1 + z) is operational according to FLS? Is that not, then, a quadratic relationship for redshift-distance?

 

I was under the impression, as was modest, that the second factor of (1 + z) in the standard model took the expansion of space into consideration already. One factor comes because photons are degraded in energy by (1 + z) due to the redshift (regardless of its cause). The second factor of (1 + z) is attributed to the dilution in the rate of photon arrival due to the stretching of the path length in the travel time: i.e., due to expansion. This second factor would not be present if the redshift were not caused by the FL expansion. So according to the standard interpretation, a static universe has only one factor of (1 + z).

 

What is the difference between the second factor of (1 + z) due to time dilation in the LCDM models and the third factor (what you call "an additional photon density reduction [which] occurs as the photons travel through expanding space") in the FLS model? It seems the second factor of the LCDM models already has what you describe built in.

 

 

 

Another question: Recall that before the advent of inflation (exponential expansion or a false vacuum, or slow roll) the standard model was lurching from crisis to crisis, stricken with a host of well-known tribulations - notably the horizon problem, the flatness problem and to some extent the monopole problem. Without some form of repulsive cosmological constant, dark energy, negative pressure or false vacuum, how do you resolve these outstanding issues? It seems to me that your FLS model would just bring the problems back to the forefront. Is something amiss in my understanding?

 

 

 

Through the problems we learn much about the truth.


 

 

 

CC

 

CC, there are a lot of questions in your post. My time is limited: lectures all morning and students in my office all afternoon. Let me give a short answer on spatial-expansion photon density reduction. Any model that has expansion of space has photon density reduction due to that expansion; this includes the LCDM standard model and the FLS model. Look at Ned Wright's website about "errors in tired light cosmology". I will address this issue and your other questions in more detail when I get some free time. Until then, have fun.


I thought I would interject a brief recap.

 

 

The re-emergence of lambda in a new guise, aside from having sparked a wave of "dark" Einsteinian conspiracy theories, epitomizes and reawakens the belief in genius. Indeed, it is a revelation of the exquisite sensitivity and refinement, unforced elegance and rarefied adroitness of one prolific gentleman, the creator of the most controversial term of 20th-century physics: Albert Einstein. One thing seems certain: the so-called fudge factor is not going away, however much astronomers and others (such as bigsam1965 and Mike C) might have wished it to.

 

The new lambda is conceptually different from Einstein's: this one is responsible for disequilibrium, i.e., it overpowers gravity while blowing the universe apart.

 

Lambda was attached to relativity to generate equilibrium, even if at the time it was deemed synthetic: an Einstein-created embodiment of empty space with attributes, still controversial, deemed safely repulsive enough to counter gravity precisely. Today's lambda (repellent, impermeable, obscure, mysterious, dark, a kind of "antigravity") is what Star Wars is to science: a fantastical reproduction, a travesty, a perversion, a distortion of the real thing. The dividing line between the two appears to be irreversibly clear; even though Einstein had not fully defined lambda physically, its role was unambiguous.

 

And my contention is obviously that lambda never should have been discarded; the original intention for introducing it was estimable. The new version, of course, is not only a grotesque exaggeration but also an example of modern cosmological gobbledygook in the face of contradictory evidence, an example of how the standard model is changed rather than discarded when a detrimental observation emerges.

 

 

The greatest blunder became the greatest revelation.

 

 

So it was not only that Einstein disliked it (a fact that is hardly contested) and that others held odious opinions of it, nor even that it precisely countered gravity then and bizarrely surpasses it now: the most irrepressible rationale for opposition to Einstein's lambda was (and still should be) its potentially operational mechanism in stabilizing the universe against collapsing or blowing apart. Some ninety years later, lambda has become a monstrous accomplice to mass obliteration.

 

Indeed it is hypothesized that the cosmic repulsion from dark energy may become so strong that the universe and everything in it could eventually (in 50 or 60 billion years) be torn apart, right down to the atomic level: the universe would end in a last outwardly crazy, zany, fuming, frenetic, uncontrolled, ridiculous instant of self-obliteration, dubbed a Big Rip.

 

 

Rest In Peace Universe.

 

 

 


 

 

 

 

CC


The new lambda is conceptually different from Einstein's: this one is responsible for disequilibrium, i.e., it overpowers gravity while blowing the universe apart.

 

Lambda was attached to relativity to generate equilibrium, even if at the time it was deemed synthetic: an Einstein-created embodiment of empty space with attributes, still controversial, deemed safely repulsive enough to counter gravity precisely. Today's lambda (repellent, impermeable, obscure, mysterious, dark, a kind of "antigravity") is what Star Wars is to science: a fantastical reproduction, a travesty, a perversion, a distortion of the real thing. The dividing line between the two appears to be irreversibly clear; even though Einstein had not fully defined lambda physically, its role was unambiguous.

 

I think people should understand that today's cosmological constant is exactly the same term Einstein first developed in general relativity. No other term will do. People have made good arguments that lambda arises naturally when deriving GR and you must consciously set it to zero to get rid of it. The fact that a natural term in GR corresponds directly to an effect of quantum field theory strengthens both GR and lambda even more.
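For reference, the term in question is the Λ piece of the field equations,

[math]G_{\mu\nu}+\Lambda g_{\mu\nu}=\frac{8\pi G}{c^4}T_{\mu\nu}[/math]

and whether Λ is zero or not is something only observation can settle.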

 

Einstein's motivations for setting lambda as non-zero may not have been pure as they were based on an assumption rather than observation. However, his term is mathematically correct and is the same today as ever. Before we know where to set lambda we have to make observations. This is what's happening today. The motives seem pure. I don't understand the objection and believe the description above is inaccurate.

 

-modest

