
SNe Ia, Implications, Interpretations, Lambda-CDM...


coldcreation


I think people should understand that today's cosmological constant is exactly the same term Einstein first developed in general relativity. No other term will do. People have made good arguments that lambda arises naturally when deriving GR and you must consciously set it to zero to get rid of it. The fact that a natural term in GR corresponds directly to an effect of quantum field theory strengthens both GR and lambda even more.

 

Einstein's motivations for setting lambda as non-zero may not have been pure as they were based on an assumption rather than observation. However, his term is mathematically correct and is the same today as ever. Before we know where to set lambda we have to make observations. This is what's happening today. The motives seem pure. I don't understand the objection and believe the description above is inaccurate.

 

 

Yes, that is why I wrote:

 

  • The new lambda is different conceptually from Einstein's: this one is responsible for disequilibrium, i.e., it overpowers gravity while blowing the universe apart.
  • Lambda was attached to relativity to generate equilibrium, even if at the time it was deemed synthetic: an Einstein-created embodiment of empty space with attributes, still controversial, deemed safely repulsive enough to counter gravity precisely.

Note the word "conceptually."

 

Recall that without lambda (i.e., had Einstein not introduced it into the field equations) he would have predicted expansion (a point made in every relevant textbook). In other words, with lambda, he predicted a stable regime. That is why it was introduced. Today, conceptually, it has been re-introduced for another reason...

 

Stephen Hawking’s view is not exceptional:

 

“Einstein’s static model of the universe was one of the great missed opportunities of theoretical physics; if he had stuck to his original version of general relativity without the cosmological constant he could have predicted that the universe ought to be either expanding or collapsing. As it happened, however, it was not realized that the universe was changing with time until astronomers like Slipher and Hubble began to observe the light from other galaxies.” (Hawking, see Davies, New Physics, 1989)

 

Indeed, it was Einstein's lambda that prevented the universe from expanding or collapsing. So my contention, old-school as it may sound, is that if Einstein had stuck to his version of general relativity with the cosmological constant, he could have predicted that the universe ought to remain stable. The greatest missed opportunity of theoretical physics was Einstein's ultimate rejection of lambda.

 

True, it is thought that even with lambda the field equations lead to instability, like a pencil balancing on its point. But it can be shown that this is not the case, i.e., the balance is natural, just as the planets and satellites in the solar system are balanced (and not like a pencil standing on its point). There is nothing synthetic about it. The natural fine-tuning is simply like hanging a pencil from its eraser (something anyone can try at home). The pendulum-like hypothesis is based on the maximum and minimum force within a given system. The velocities of the parts of the system depend only on the potential difference between gravity and the cosmological constant, or conversely, the equivalence of mass and energy. This concept, formally known as the “balance” principle, is founded on both the principle of conservation of energy, first enunciated by Leibniz in the seventeenth century, and the principle of conservation of mass, now fused into one conservation law.

 

A similar fine-tuning "problem" occurred in the preferred pre-1998 Friedmann-Lemaître critical model: the ‘flat’ Euclidean one-to-one relation between the ratio of the energy density to the critical density (omega) and the spatial curvature of the universe. Recall, too, that Newton was forced to attribute the fine-tuning observed in the solar system to a supernatural force. Like balancing a pencil on its point.
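
For readers who want the relation behind that one-to-one correspondence, the standard Friedmann-equation form (a textbook identity, not anything specific to this thread) is

[math]\Omega - 1 = \frac{kc^2}{R^2H^2}[/math]

where [math]\Omega = \rho/\rho_{crit}[/math]; the universe is exactly flat ([math]k=0[/math]) only if the density is exactly critical, and in a decelerating, matter-dominated universe any small deviation from [math]\Omega = 1[/math] grows with time, which is the fine-tuning in question.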

 

My point is that the impossibility of explaining local and large-scale stability in the midst of gravitational attraction has led to the expansion scenario and to the manufacture of theoretical artifacts such as cold dark matter and dark energy. The incorporation of nonphysical factors (pen name: new physics) into theory is why I call this type of fine-tuning non-natural, synthetic, or artificial. It is the theory put forth by modern physics, not the fine-tuning itself, that must bear the burden.

 

Those who have argued that this Newtonian crisis has been solved (with the above artificialities) have failed to recognize the real physical mechanism responsible for the observed stability of dynamically interconnected gravitating systems.

 

I think it is important that people understand that today's cosmological constant (dark energy) is not exactly the same term (conceptually) Einstein introduced into the general relativity field equations.

 

To state otherwise is historically incorrect.

 

 

Note: setting lambda to zero does not get rid of it. Au contraire...

 

Einstein's motivation for introducing lambda was not based on an "assumption rather than observation." It was based on both assumption and observation.

 

The crux of the fine-tuning problem: Throughout the period following Einstein’s introduction of general relativity, the cosmological constant was repeatedly caught in the crossfire of contradictory charges. It was a concept that could seem, from different angles, either a stabilizing ‘force’ or a repulsive ‘driving force,’ at one and the same moment. It is a particularly ironic demonstration of this elasticity of image that even Einstein’s own vigorous attack on the ‘force’ of the cosmological constant might really have been uncompromising, like that of another relativist, Eddington, who chose to accept the term in some form or another, as if science could reach it. But the fact is that this contradictory image was possible because, for the most part, the scientists who defended it were very much aware of the continuing search for common and constant aesthetic or physical principles.

 

 

That search has not yet ended.

 

 

The flatness problem, Krauss wrote, “is a prime example of what has become known in particle physics as a “naturalness” or “fine-tuning” problem.” He warns that the “flatness problem is the second worst fine-tuning problem we know in physics.” And in a footnote on the same page he adds: “In fact, the worst fine-tuning problem in physics relates to the cosmological constant. If the quantity is nonzero, but small, then the fine-tuning of about 125 decimal places seems called for.” He then suspects that these two problems, being the worst in physics, might somehow be related, but then suggests, “this does not seem to be the case,” because “the flatness problem has a natural solution in terms of calculable physics.” And then comes the coup de grâce: “To date, no one even understands how to address the cosmological constant problem.” A few pages later, after having “sung the praises of inflation models, and for a flat universe,” he surprisingly mentions that he would be “very surprised if any of these initial models [GUTs or inflation] turned out to be true.” Both were devoid of quantum gravity. He then promptly introduces the dark matter issue to explain the flatness, saying the “stuff” must be made of “something else” as opposed to ordinary matter, and that we must have “missed most of it.” He then concludes, in his most expansive moment: “This is a very large pill to swallow.” (see Krauss, L. 2000-2001, Quintessence, The Mystery of Missing Mass in the Universe, pp. 138-68)

 

The other point is this: the two worst fine-tuning problems are related; they are in fact one and the same. But only once the cosmological constant issue is resolved will the unacceptability of a flat, accelerating universe (or nearly so), and its theoretical needs, come to the surface.

 

 

 

 

CC


Note the word "conceptually."

 

Lambda is conceptually the same as it has always been.

 

Today’s lambda, repellent, impermeable, obscure, mysterious, dark, a kind of "antigravity," is what Star Wars is to science: a fantastical reproduction, a travesty, a perversion, a distortion of the real thing. The dividing line between the two appears to be irreversibly clear; even though Einstein had not fully defined lambda physically, its role was unambiguous.

 

How does the same term in the same equation turn into a "perversion"?

 

-modest


[Edited to add:] I assume your answer will be that the curvature (interpreted at present, in LCDM, as time dilation due to acceleration) is not real, since your interpretation requires a supplemental reduction in photon density as photons travel through expanding space (i.e., photons are also spreading out in all three directions), something you say, or imply, is not considered in the LCDM model. Would this mean that a third factor of (1 + z) is operational according to FLS? Is that not, then, a quadratic relationship for redshift-distance?

 

I was under the impression, as was modest, that the second factor of (1 + z) in the standard model already took the expansion of space into consideration. One factor comes because photons are degraded in energy by (1 + z) due to the redshift (regardless of its cause). The second factor of (1 + z) is attributed to the dilution in the rate of photon arrival due to the stretching of the path length during the travel time: i.e., due to expansion. This second factor would not be present if the redshift were not caused by the FL expansion. So according to the standard interpretation, a static universe has only one factor of (1 + z).
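
For reference, the standard bookkeeping behind these two factors can be written out explicitly (a textbook relation, added here only for clarity, with [math]D[/math] denoting the distance that enters the inverse-square dilution, i.e., the comoving distance at the time of observation in the flat case):

[math]F=\frac{L_0}{4\pi D^2 (1+z)^2}=\frac{L_0}{4\pi D_L^2}, \qquad D_L=(1+z)D[/math]

One factor of (1 + z) accounts for the energy lost by each photon, the other for the reduced photon arrival rate; a static interpretation of the redshift would keep only the first factor, giving [math]D_L=(1+z)^{1/2}D[/math].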

 

What is the difference between the second factor of (1 + z), due to time dilation in the LCDM models, and the third factor (what you call "an additional photon density reduction [which] occurs as the photons travel through expanding space") in the FLS model? It seems the second factor of the LCDM models already has what you describe built in.

 

 

You should not assume that I added an additional [math](1+z)[/math]. Because of the spreading out of photons as space expands, the effective luminosity [math]L[/math] equals [math](1+z)^{-1}[/math] times the intrinsic luminosity [math]L_0[/math]; this is because fewer photons per second are crossing the source-centered spherical boundary at an observer than the number of photons per second originally emitted from the source.

 

If photon energy is stretched and conserved, no additional scaling is required; thus [math]L=(1+z)^{-1}L_0[/math]. Therefore, for conservation of photon energy, [math]D_L=(1+z)^{1/2}D[/math], as I presented in the original post.

 

If photon energy is not conserved, then an additional factor of [math](1+z)^{-1}[/math] multiplies [math]L_0[/math], resulting in [math]L=(1+z)^{-2}L_0[/math]. Therefore, for nonconservation of photon energy, [math]D_L=(1+z)D[/math], as I presented in the original post.
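
For anyone following the algebra, the step from effective luminosity to luminosity distance is just the inverse-square law (a generic bookkeeping step, not something specific to the FLS model): with observed flux [math]F=L/(4\pi D^2)[/math] and the definition [math]F=L_0/(4\pi D_L^2)[/math], one has [math]D_L=(L_0/L)^{1/2}D[/math]; so [math]L=(1+z)^{-1}L_0[/math] gives [math]D_L=(1+z)^{1/2}D[/math], and [math]L=(1+z)^{-2}L_0[/math] gives [math]D_L=(1+z)D[/math], as stated.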

 

The point that I have been trying to make is that [math]D_L[/math] is not the distance to solve for in the FL metric, because [math]D_L[/math] has a component that is due to the reduction in effective luminosity relative to intrinsic luminosity; this component is not a real distance. It has been interpreted as a real distance by some. The LCDM standard model solves for [math]D_L[/math] in the FL metric, and thus has let the horse and fox escape before the fox hunt.

 

To be continued.


What is the difference between the favored pre-1998 standard critical Friedmann model and your solution? As far as I can see, both models predict that space is flat, unbounded, expanding and coasting (non-accelerating).

 

 

Another question: Recall that before the advent of inflation (exponential expansion or a false vacuum, or slow roll) the standard model was lurching from crisis to crisis, stricken with a host of well-known tribulations - notably the horizon problem, the flatness problem and to some extent the monopole problem. Without some form of repulsive cosmological constant, dark energy, negative pressure or false vacuum, how do you resolve these outstanding issues? It seems to me that your FLS model would just bring the problems back to the forefront. Is something amiss in my understanding?

 

The pre-1998 critical model has a number of unattractive features that follow from introducing the critical density into the model, thereby making the model unstable. The model had the Universe at critical density, and any perturbation could cause the Universe to undergo either runaway expansion or contraction. The thought back then was that something caused the Universe to start expanding and that, as space expanded, gravity would slow the expansion. I scrapped the critical-density concept and returned to the earlier form of the metric in which [math]k[/math] in the [math]kc^2[/math] term is either -1, 0, or 1. I do not use the Einstein-de Sitter model with k=0 to obtain a coasting model.

 

I approached the solution of the Friedmann-Lemaître metric as a symmetry between gravity and antigravity (in string theory, a graviton-antigraviton symmetry). With this in mind, I set [math]k=1[/math] and [math]\Lambda=0[/math]. Then I split the resulting metric into four equations: two equations for past and future spatial contraction and gravity, and two equations for past and future spatial expansion and antigravity. Then I picked the appropriate expansion equation for determining [math]D[/math]. Both time and distance are dilated in the metric, and integrating over dilated time is not a fruitful direction. I therefore transformed the dilated time derivative into a proper time derivative, using the chain rule, and the dilated distance into proper distance. I obtained proper distance as a function of the stretch factor [math]a=1+z[/math]. Then I converted proper distance into dilated distance to obtain the coasting-universe solution. In the FLS model, antigravity provides the solution to the flatness problem and the horizon problem at the CMB.


This post brings to mind a few oldies but goodies:

 

CC, in my humble opinion, bigsam1965 knows his chit. Very nice bigsam.

 

 

If you've got an opinion, why be humble about it? (Joan Baez)

 

I have opinions of my own - strong opinions - but I don't always agree with them. (George Bush)

 

Every man has a right to be wrong in his opinions. But no man has a right to be wrong in his facts. (Bernard Baruch)

 

Opinion is ultimately determined by the feelings, and not by the intellect. (Herbert Spencer)

 

In all matters of opinion, our adversaries are insane. (Oscar Wilde)

 

 

You see, Little Bang, differences of humble opinion have always been vast between the opposing camps. Those divergences were the reason and incentive behind retaliatory attacks launched by the big bang leaders and their top aides against the steady state crew (Hoyle et al.). Many crucial assaults transpired under the cover of darkness, conducted by foot soldiers operating from behind a computer (mostly on public fora dedicated to science) with little fear of interference from the higher-ranked members. The opposition too conducted sabotage missions, cunningly choosing earlier ‘variations’ of Nernst, MacMillan and Millikan to back their claims, however unsuccessful those were.

 

The goal seemed clear, even though the original rationale was lost in a mist of reprisals: Pro-big-bang cosmologists were initially trying to destabilize and topple the steady state and Arpian leadership, first by intimidation, then by pursuing individual rogues with a terror campaign—all for fear that a cosmological coup might be attempted by the British commandos to overthrow the head of the Cambridge physics department.

 

That fight has not ended.

 

 

In my opinion, reality finally hit the big bang and its fans in the late 1990s: on fourth and four inside the rival’s 10-yard line, goal-to-go with only one second on the clock, the big bang quarterback took the snap and threw a bomb over the head of his intended receiver, whose long-shot hope for a catch was negated by the blinding light of a distant supernova Type Ia. It may be a little early, though, to exchange chest-bumps and heart-taps.

 

Meanwhile, the inflation angst rippled through the cosmological community—for the simple reason that there’s no way the inflationary theorists and their allies can get out of the current mess without bringing back the dreaded cosmological constant. So what are the consequences of a flat universe no longer expanding at a critical rate (LCDM), where the balance between the inward tug of gravity and the outward push has virtually vanished? Now the theorists are the ones doing a tightrope act, with a balance between spirit and will, and the latter doesn’t always prevail, especially at the higher echelons where the crosswinds between competing scientists and variable opinions are extremely fierce.

 

The inspirational pull of current inconceivabilities remains, in my opinion, great art. Should, however, cosmologists, those learned artists with the highest and staunchest credentials, at some level revamp the dike of conventional handicaps—starting from ground zero—this acute, epic, congenital conflict may find resolution by means of adjudicating disputes according to the laws of nature, rather than with brute dark force. Finally, we get the bird back in its cage. (Coldcreation)

 

 

 

 

CC


Yes, every time someone points out how Arp's evidence always withers away as soon as any further research is done on any of his examples, it's part of some vast conspiracy against the work of a number of scientists who, unlike those actually employed to do cosmology, actually do have scientific standards.

 

Hoyle put forth a number of theories throughout the years, all based on a variety of principles that are not consistent with those of other models, none of which have met with as much success as the standard relativistic model. Yet to point this out is to ally oneself, supposedly, with the vast conspiracy of sub-standard scientists.



I would say that is a rather harsh assessment of those who practice cosmology (observational astronomers). I doubt that a prerequisite to being a good cosmologist is accepting the mainstream view, especially in light of all the problems inherent in its framework. Hint: recall how consistent the Friedmann models were prior to the SNe Ia data. Not one of them stood up to the test of time (dilation). Enter LCDM. ;)

 

My point is that Hoyle's mechanisms, such as his C-field (for the creation of matter) and his iron whiskers (for the thermalization of starlight as the origin of the CMBR), are no less extraordinary than exotic stuff like non-baryonic dark matter or dark energy. His use of parameters was always kept to a minimum, if used at all. For this he was a Naturalist (or as close as one could be to a Naturalist, considering his oscillating cosmos), and for that reason I respect him to the utmost, whether I accept his cosmology or not (so too for Arp and others).

 

 

 

CC


How can you accept the C-field, which uses the cosmological constant, but throw out the cosmological constant?

 

Your refusal to actually learn the basic mathematics of the theories that you trash because they are generally accepted or laud because they are not generally accepted is embarrassing.



I don't accept the C-field (or iron whiskers) of QSSC.

 

I don't trash theories because they are generally accepted.

 

I don't laud theories because they are not generally accepted.

 

Nor would I have thrown out the cosmological constant in the first place.

 

 

 

Nothing embarrassing there.

:hihi:

 

 

 

CC


Meanwhile, back in the jungle:

 

Regrettably, the 1998 observational data does not single out one particular representation as the true model universe, i.e., there are still conflicting opinions as to whether the universe is open, closed or flat (in the Friedmannian sense of the terms). The new acceleration theory with its weaponized cosmological constant is provocative but unconvincing. The new universe, as depicted by the LCDM model, is a grotesque arena where gravity and lambda fight to stay alive, two oversize opponents, caricatures: one, an abandoned enfant terrible from the previous century, thin and weak, recently readopted; the other, a 300-year-old Newtonian workhorse; both standing in volte-face at the end of their lives in a dust-filled courtyard, ready for the final feud.

 

Goldsmith, in The Runaway Universe, thoroughly attuned to the challenge offered by the highly evocative shift in the cosmic picture, without the epiphany in mouth-watering details, offers these words in consolation:

 

Let us pull ourselves from the slough of despond in which the implications of these new observations threaten to drown us. Rather, let us lift our spirits by celebrating the astronomical powers of insight that have brought us the latest news about the universe. [The shocking conclusion:] The redshift and apparent brightness of Type Ia supernovae reject the possibility of a flat universe with a zero cosmological constant and suggest a flat universe with an average density of matter much less than the critical density. (p. 78)

 

The explanation of the data, again, rests on the fact that remote highly redshifted SNe Ia appear further away than expected in a flat Euclidean expanding universe—when these supernovae reach maximum brightness they are approximately 25 percent fainter than the peak brightness they would have attained in a universe where lambda equals zero. Adequately large values of lambda, or omega (vacuum), “would imply that no big bang had ever occurred—quite a conundrum for cosmology”…(Goldsmith 2000).
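
(For scale, and this is my own arithmetic rather than Goldsmith's: a 25 percent deficit in peak flux corresponds to an extra dimming of [math]\Delta m = -2.5\log_{10}(0.75) \approx 0.31[/math] magnitudes.)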

 

 

 

 

CC



SNe Ia appear farther away than expected by the proponents of LCDM because one component of the distance modulus [math]\mu[/math] is due to the reduction of the effective luminosity relative to the source's intrinsic luminosity. This component is not a real distance, and it can lead some people to the conclusion that SNe Ia are farther away than expected for a flat, coasting universe.
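
For reference, the distance modulus referred to here has its usual definition (a standard relation, added only to make the point concrete):

[math]\mu = m - M = 5\log_{10}\!\left(\frac{D_L}{10\,\mathrm{pc}}\right)[/math]

so any factor that reduces the observed flux relative to the intrinsic luminosity enters [math]\mu[/math] exactly as though it were additional distance.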


The point that I have been trying to make is that [math]D_L[/math] is not the distance to solve for in the FL metric,

 

Sure it is. There are many measures of distance that can be written as a function of redshift and [imath]d_L[/imath].

 

Photon flux:

[math]d_F=\frac{d_L}{(1+z)^{1/2}}[/math]

Photon count:

[math]d_P=\frac{d_L}{(1+z)}[/math]

Deceleration distance:

[math]d_Q=\frac{d_L}{(1+z)^{3/2}}[/math]

Angular diameter distance:

[math]d_A=\frac{d_L}{(1+z)^2}[/math]

 

These are worked out and discussed in the context of supernova standard candles in this paper. [imath]d_L[/imath] does not, in any of these equations or in your equation one, assume a static space-time, as your proof claims. A quote from the above paper:

 

The “photon flux distance” [imath]d_F[/imath] is based on the fact that it is often technologically easier to count the photon flux (photons/sec) than it is to bolometrically measure total energy flux (power) deposited in the detector. If we are counting photon number flux, rather than energy flux, then the photon number flux contains one fewer factor of [imath]{(1+z)}^{-1}[/imath]. Converted to a distance estimator, the “photon flux distance” contains one extra factor of [imath]{(1+z)}^{-1/2}[/imath] as compared to the (power-based) luminosity distance.
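
As a quick numerical illustration of how these measures scale against one another (a minimal sketch in Python using only the conversion factors listed above; the function name and the sample numbers are mine and purely illustrative):

[code]
def distance_measures(d_L, z):
    """Convert a luminosity distance d_L (in Mpc) into the other
    distance measures via the (1+z) scalings quoted above."""
    s = 1.0 + z
    return {
        "luminosity d_L": d_L,
        "photon flux d_F": d_L / s**0.5,
        "photon count d_P": d_L / s,
        "deceleration d_Q": d_L / s**1.5,
        "angular diameter d_A": d_L / s**2.0,
    }

# e.g. a supernova at z = 1 with d_L of roughly 6600 Mpc (illustrative numbers)
for name, value in distance_measures(6600.0, 1.0).items():
    print(f"{name}: {value:.0f} Mpc")
[/code]

At z = 1 the angular diameter distance comes out a factor of four smaller than the luminosity distance, which is the kind of spread between distance measures the paper is keeping track of.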

 

because [math]D_L[/math] has a component that is due to reduction in the effective luminosity relative to intrinsic luminosity, this component is not a real distance.

 

It would be OK to call the distance [imath]{d_L}^2=L/4\pi F[/imath] an 'effective' distance. This is done in "Hubble Space Telescope Observations Of Nine High-Redshift Essence Supernovae" and might be semantically correct, or perhaps less confusing... so long as the terms are kept straight and the reason for using the proper luminosity L is understood.

 

We wouldn't want to make any corrections or adjustments to the luminosity or flux distance that were not made to the redshift distance. It is, after all, the point of the study to compare the model-derived relationship between [imath]d_L[/imath] and [imath]z[/imath] and the observed relationship.

 

Redshift represents real emitted spectrum and observed spectrum. [imath]d_L[/imath] represents real emitted brightness and observed brightness.

 

Consider the angular diameter distance which is the proper or intrinsic size divided by the observed angular size. All these distance measures are consistent and properly defined. Their definitions are not being lost or changed in the formulation of the supernova luminosity/distance/redshift equation. It seems proper to me.

 

-modest



I never said the proponents of LCDM explicitly assumed a static space-time; however, I am saying that backing out those distances after you have solved for [math]D_L[/math] in the metric is the wrong way to solve for distance. By doing so you end up with the wrong values for the Hubble constant and the cosmological constant. By the way, the deceleration distance above is proper distance, where proper distance is the distance between observer and source at the time the photons being observed were first emitted from the source. We are seeing the proper distance dilated by (1+z).


I never said the proponents of LCDM explicitly assumed a static space-time;

 

Yet your proof is based on the claim that one of the equations used by the standard model is incorrect for being static:

 

It is the standard model that is using [math]L=L_0[/math] and [math]D=D_L[/math], not the solution I am proposing. Using the static-universe energy-flux equation in the standard model leads to a standard-model over prediction of observed galaxy counts by the factor [math](1+z)^{3/2}[/math]. If the Universe really was static there would be no redshift due to the expansion of space. Since redshift is observed (and IMHO is correctly interpreted as due to spatial expansion), the static-universe energy-flux equation is not the correct form of the equation to use.

 

Which I and others have pointed out is not true.

 

however, I am saying that backing out those distances after you have solved for [math]D_L[/math] in the metric is the wrong way to solve for distance.

 

Important is the comparison between observables. [imath]d_L[/imath] is most natural as a common term of comparison between luminosity and redshift. For instance, if [imath]d_L={R_0}^2r_1/R_1[/imath] then we can relate redshift to [imath]R_0[/imath] and [imath]R_1[/imath]. I don't see the problem here, and so far you haven't shown one. Perhaps if you were more specific.

 

By the way, the deceleration distance above is proper distance. Where proper distance is the distance between observer and source when photons being observed were first emitted from the source. We are seeing the proper distance dilated by (1+z).

 

The “deceleration distance” [imath]d_Q[/imath] is not a real physical property - certainly not proper distance. It is a mathematical construct defined in the above link I provided with the equation’s source. It is:

 

The quantity dQ is (as far as we can tell) a previously un-named quantity that seems to have no simple direct physical interpretation — but we shall soon see why it is potentially useful, and why it is useful to refer to it as the “deceleration distance”.

 

The FLRW metric defines proper distance as per usual: the radial distance. Or, more specifically, it is the geodesic measured on a hypersurface of constant time, [imath]D=R\chi[/imath]. Introducing time gives [imath]D(t)=R(t)\chi[/imath]. Time is then dilated by [imath]\Delta{t}=\Delta{t_0}(1+z)[/imath].
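
For completeness, the standard definitions being invoked here (textbook form, nothing new): the FLRW line element can be written

[math]ds^2 = -c^2dt^2 + R(t)^2\left[d\chi^2 + S_k(\chi)^2 d\Omega^2\right][/math]

with [math]S_k(\chi)=\sin\chi,\ \chi,\ \sinh\chi[/math] for [math]k=+1,\ 0,\ -1[/math], and the proper distance is the radial geodesic length on a hypersurface of constant [math]t[/math], i.e., [math]D(t)=R(t)\chi[/math].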

 

This is not what that paper was talking about.

 

-modest



It is interesting that you are using a paper that identifies problems with the Taylor series expansion of the LCDM model for [math]D_L[/math] and proposes changes to Hubble's law. The FLS model does not use a Taylor series. The solution is an exact solution of the metric for a coasting universe, and the model does not propose changes to Hubble's law. I defined proper distance in my last post, and that is the way I use it in my model.

