
19th November 2023, 04:45 PM  #1281 
Philosopher
Join Date: Oct 2009
Posts: 5,515

The highlighted statement is flat-out false. Readers can decide for themselves whether it was a deliberate lie, or just another indication that Mike Helland doesn't understand his own algorithm.
Here is the body of Mike Helland's while loop after I modified it. The "if useHX" code is Mike Helland's original code. The "if not useHX" code is the result of changing the two lines of code that update distances. It should be obvious that I never changed a "line of code that applies H to the model", because (apart from the two lines that update distances) there never was such a line of code. Code:
t -= 1
if useHX:
    x1 += c - H * x1
    x += c - H * x
    z = 0.1 / (x1 - x) - 1
if not useHX:
    x1 += c / (1 + z)
    x += c / (1 + z)
    z = 0.1 / (x1 - x) - 1
H = H0 * (OmegaM * (1+z)**3 + OmegaL + OmegaK * (1+z)**2)**0.5

With those changes, the Scheme code continues to work correctly for models with constant H. With exactly those same changes, Mike Helland's Python code breaks.

Those changes break Mike Helland's algorithm because that algorithm uses x1−x2 to update z instead of using the scale factor. That's the only significant difference between Mike Helland's algorithm and the bog-standard algorithm, and it is precisely that difference that has served as the basis for Mike Helland's almost daily claim that his algorithm is more general and easier to understand than the bog-standard algorithm.

Mike Helland now admits that his algorithm doesn't work with v=c/(1+z) unless we also change the part of his algorithm that he's been most proud of. It seems Mike Helland is falling for his own antics.

Let's note once again that no such change was needed when I modified the bog-standard algorithm to use v=c/(1+z) instead of v=c−Hd (for models with constant H). But Mike Helland's algorithm breaks if we use v=c/(1+z) instead of v=c−Hd. To repair that breakage, it is necessary to make a significant change to Mike Helland's algorithm. To be precise, we must replace the very aspect of Mike Helland's algorithm that Mike Helland has been touting as superior to the bog-standard algorithm. 
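The constant-H claim above can be checked with a standalone sketch (the step size, units, and step count here are my own illustration, not code from this thread): in a de Sitter model, stepping a distance with v = c − H·x and stepping it with v = c/(1+z), where 1+z = exp(H·n) after n lookback steps, trace essentially the same path.

```python
import math

# Toy de Sitter check (illustrative constants, not the thread's code):
# with constant H, the two distance-update rules agree.
H, c, steps = 1e-4, 1.0, 5000

xa = 0.0
for _ in range(steps):
    xa += c - H * xa              # the v = c - H*d update

xb = 0.0
for n in range(steps):
    z = math.exp(H * n) - 1.0     # dS redshift after n steps of lookback
    xb += c / (1.0 + z)           # the v = c/(1+z) update

print(abs(xa - xb) / xa < 1e-3)   # True: the two rules agree when H is constant
```

For a model in which H changes over time, the second loop would need z supplied by some other means, which is the point of the dispute above.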
19th November 2023, 05:01 PM  #1282 
Illuminator
Join Date: Nov 2020
Posts: 4,261

It's plain as day.
The "useHX" code path reads the value of H. The "not useHX" code path doesn't. You're setting the value of H, sure, but you're never reading it. It's value has no effect on the results.
Quote:
It should also be obvious that you've broken the algorithm for every model except dS, so what business this algorithm has calculating H is beyond me.

Let's recap. I said v=c−Hd is the speed of light, and provided an algorithm that solves it, and consequently all observables in FLRW. I also said that, if H is constant, you can simplify it to just use x instead of x1 and x2 (or x and a). You can get away with this because z = c/(c − Hd) − 1. So you're gonna need that if you simplify for dS.

You took v=c−Hd out of the algorithm, both in the moving of the photon and in the calculating of redshift, and showed that it doesn't work without it. Your algorithm still works because you've abstracted the Hd to a reference distance "a." 
__________________
I'm not entirely sure what I'm talking about, but based on what little I know, the above seemed like a reasonable thing to say. Thank you in advance for any corrections. 

19th November 2023, 05:18 PM  #1283 
Illuminator
Join Date: Nov 2020
Posts: 4,261

You just said that you changed the two lines of code that update distance using H to not use H. Then you said you never changed a line of code that applies H to the model, after just admitting you did, and then you claimed those lines don't exist, except for the ones that did. I'm not trying to embarrass you here, but maybe it's been a long day? 

19th November 2023, 05:38 PM  #1284 
Philosopher
Join Date: Oct 2009
Posts: 5,515

ETA: The following remark reveals profound ignorance of FLRW models.

That's exactly right. Mike Helland thought he was being clever when he used x1−x2 to update z, and touted that misfeature of his algorithm as a significant improvement upon the bog-standard algorithm. In particular, Mike Helland claimed his algorithm was more general and easier to understand than the bog-standard algorithm.

In reality, Mike Helland's use of x1−x2 to update z made his algorithm less general and harder to understand than the bog-standard algorithm. That became obvious when we modified the Helland and bog-standard algorithms to use v=c/(1+z) instead of v=c−Hd for FLRW models with constant Hubble parameter H. With the bog-standard algorithm, that modification was confined to the lines of code that update x1 and x2. With Mike Helland's algorithm, confining that modification to the lines of code that update x1 and x2 breaks the algorithm. To repair that breakage, we must replace the very aspect of Mike Helland's algorithm that Mike Helland has been bragging about.

That's exactly right (apart from the misspelling of "affects"). The bog-standard algorithm solves the differential equation and uses that solution to drive all auxiliary calculations. Mike Helland thinks it's better to use his ad hoc technique for updating z, in which x1−x2 serves as a proxy for the scale factor a(t). But it isn't better. Most of the calculations we'd want to perform don't involve those particular distances x1 and x2 at all, making them extraneous. Mike Helland's algorithm has to compute x1 and x2 even when they are conceptually irrelevant to the calculation we want to perform.

Furthermore, the x1−x2 hack doesn't even work when, for a model with constant H, we replace v=c−Hd by v=c/(1+z). To repair the algorithm, we have to reintroduce v=c−Hd into the calculation of z. So we can't fully replace v=c−Hd by v=c/(1+z) in Mike Helland's algorithm. With the bog-standard algorithm, we can fully replace v=c−Hd by v=c/(1+z).

That demonstrates the greater generality of the bog-standard algorithm. The fact that Mike Helland still doesn't understand the above argues against his claim that the Helland algorithm is easier to understand.

The bog-standard algorithm calculates H because the bog-standard algorithm solves the differential equation da/dt = H(t) a(t). Although H(t) is constant for the de Sitter (dS) model, a(t) is not constant, so the value of H(t) is needed at each step to calculate the value of (da/dt)(t).

In what he wrote above, Mike Helland is saying he doesn't understand why the bog-standard algorithm calculates H at each step even for the extremely special case of a model in which H is constant. Which is just a way of saying Mike Helland doesn't understand why people prefer general-purpose algorithms that work with an extremely wide range of model parameters, instead of developing and coding a new special-purpose algorithm for every special case of model parameters.

To recap: If Mike Helland were a better programmer, he would understand why people prefer general-purpose software to a plethora of special-purpose software. 
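The general-purpose approach described above can be sketched in a few lines (a minimal illustration with assumed parameter values and a simple Euler step, not W.D.Clinger's actual Scheme code): integrate da/dt = H(a)·a backward in time from a = 1 until the scale factor reaches 1/(1+z).

```python
# Minimal sketch of a general-purpose FLRW integrator (my own step size
# and illustrative flat-LCDM parameters; assumptions, not the thread's code).
H0km = 68.0                                       # km/s/Mpc
H0 = H0km / 3.08e19 * 3600 * 24 * 365 * 1e9       # converted to 1/Gyr
OmegaM, OmegaL = 0.3, 0.7                         # flat LCDM (assumption)

def lookback_time(z, dt=1e-4):
    """Gyr elapsed since light now seen at redshift z was emitted."""
    a, t = 1.0, 0.0
    target = 1.0 / (1.0 + z)
    while a > target:
        H = H0 * (OmegaM / a**3 + OmegaL) ** 0.5  # Friedmann equation
        a -= H * a * dt                           # one Euler step backward
        t += dt
    return t

print(round(lookback_time(1.0), 1))  # about 7.9 Gyr with these parameters
```

The same loop works for any FLRW parameter values, which is the sense in which the algorithm is general-purpose: nothing in it is special to constant H.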
19th November 2023, 06:13 PM  #1285 
Illuminator
Join Date: Nov 2020
Posts: 4,261

Quote:
Your "general purpose" algorithm is broken for all models except dS, and contains superfluous code at that. The algorithm works like this: * update d based on Hd * update z based on d * update H based on z That works for all FLRW. For an FLRW with a constant expansion rate, you can do that, or you can do: * update d based on z * update z based on Hd * update H based on z What you can't do is: * update d based on z * update z based on d * update H based on z It makes H obsolete. In an expanding universe, photons become redshifted. But they also arrive at a reduced rate, known as time dilation. In an exponentially expanding universe (aka, constant expansion rate, aka de Sitter (dS), asks pure dark energy [OmegaL=1, OmegaM=0]) the "low redshift approximations" become the actual results of the model. Which can be found by setting E(z) to 1 in the respective integrals and solving analytically: * comoving distance * angular diameter distance * light travel time You can see why those FLRW parameters results in E(z) = 1 by looking at E(z): When , you can see why a pure dark energy universe has a constant expansion rate, because: With those integrals, you can calculate every observable with FLRW, including cosmic redshift and cosmic time dilation. You can also generalize all those integrals into v=cHd, and solve with my algorithm. Due to time dilation if photons were emitted 1 second apart from z=1, they would arrive at z=0 two seconds apart. This is logical and easily visually apparent by analyzing my algorithm. First consider the version that goes forward in time. You see the left photon travels at c (because c+Hd = c when d=0) from t=0 to t=1, but the right photon is traveling faster, because its d > 0. In the second step, the left photon is now traveling as fast as the first right one was, but now the right one is traveling is faster. This will continue on and one. This means that successive photons get farther apart, being observed as time dilation. 
You should note that the time dilation here cannot be reproduced by the relativistic Doppler effect. As an example. At z =1 in dS, the galaxy is at c/H_{0} now, moving at v=c. When its light was emitted, it was at 0.5 c/H_{0}, moving at c/2 (both the galaxy and its light, but in opposite directions). This means a photon emitted by the galaxy 1 second after the previous photon will only be 0.5 light seconds behind due to the recession velocity, or 1.5 seconds in total. However the photon shows up 2 seconds later, not 1.5. So the effect is not relativistic Doppler, which breaks down at v=c anyways. If some courageous soul thinks W.D.Clinger's variation of my algorithm more clearly represents reality, please explain how this: Shows us time dilation. Notice that H is always positive, and always stays positive in LCDM. Using v=c+Hd the photon never be moving in the opposite direction of where it really wants to go. This means you can never have an angular diameter turnaround if you tried calculating LCDM forward in time from a starting point in time prior to the angular diameter turnaround. This is means it is impossible to calculate LCDM (or any nondS FLRW model) where time moves forward from a starting time prior to z=1.6. Unless you use comoving coordinates, in which case your starting conditions already include the future. 
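The two-photon time-dilation claim above can be checked numerically. This sketch assumes units with c = H = 1 and a de Sitter model (my own choices for illustration): a galaxy at z = 1 emits two photons 0.01 time units apart, and they arrive roughly 0.02 apart, the factor of 1+z = 2.

```python
import math

# Toy de Sitter model in units where c = H = 1 (assumption for illustration).
c, H = 1.0, 1.0
dt = 1e-5

def arrival_time(t_emit):
    """Trace a photon emitted toward the observer at time t_emit from a
    galaxy that started at d = 0.5 (z = 1 in dS) and recedes at v = H*d."""
    d = 0.5 * math.exp(H * t_emit)  # the galaxy's distance at emission time
    t = t_emit
    while d > 0:
        d -= (c - H * d) * dt       # photon closes in at c - H*d
        t += dt
    return t

gap = arrival_time(0.01) - arrival_time(0.0)
print(round(gap / 0.01, 1))  # about 2.0: emitted 0.01 apart, received ~0.02 apart
```

The ratio matches 1+z = 2 for z = 1, as the post argues, and it exceeds the 1.5 predicted by the naive recession-velocity accounting described above.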

19th November 2023, 06:45 PM  #1286 
Philosopher
Join Date: Oct 2009
Posts: 5,515

The general-purpose algorithm is the algorithm that solves for the scale factor a(t) in any FLRW model. That gives you z(t) and H(t) and everything else that can be defined in terms of the scale factor. It is also easy to attach code that calculates other things such as distances and the evolution of mass-energy density and pressure.

Modifying that general-purpose algorithm to use v=c/(1+z) does indeed break it for all models in which the Hubble parameter H(t) changes over time, but that's true of the Helland algorithm(s) as well. Mike Helland's apparent belief that dS is the only model with constant H(t) is yet another sign of his ignorance of FLRW models.

That is false. LCDM includes all FLRW models. Everyone who understands LCDM and FLRW models is aware of FLRW models that start with a Big Bang and end with a Big Crunch. But Mike Helland thinks H(t) is always positive and cannot go negative.

That sentence is nonsense. We can add comoving coordinates to the already long list of things Mike Helland is telling us he doesn't understand. 
19th November 2023, 07:08 PM  #1287 
Illuminator
Join Date: Nov 2020
Posts: 4,261

Agreed.
Quote:
Were we to have never defined cosmological phenomena in terms of wavelength, i.e. redshift z, and instead in terms of frequency and energy, i.e. negative blueshift b, we wouldn't be having this argument.

If b = 1/(1+z) − 1, then in FLRW, 1+b would be the scale factor: 1+b = a. In dS, angular diameter distance would be r = bc/H_{0}.

Put like this, the concepts of "redshift" and "scale factor" are literally the difference between b and 1+b. I get that, already knowing the ins and outs of FLRW (and thank you for teaching me), which is why the scale factor is so great, but it's just wavelength emitted over wavelength observed.
Quote:
Quote:
Quote:
LCDM is a specific set of parameters for FLRW. So FLRW includes LCDM. FLRW includes models with and without dark energy and models with and without matter. LCDM includes dark energy (L, about 70%) and matter (CDM, about 30%). Without those it wouldn't be LCDM. All models except pure L have a big bang. Some models collapse. Some, like ours, don't.
Quote:
It's positive. 

19th November 2023, 10:08 PM  #1288 
Philosopher
Join Date: Oct 2009
Posts: 5,515

Yes.
Or rather, sort of. As elaborated below, LCDM is a specific set of parameters, but several of those LCDM parameters do not correspond to any FLRW parameters.

No. Not all of the six independent parameters of the LCDM model are parameters of a pure FLRW model. It goes the other way. All FLRW parameters can be found among the independent and derived parameters of the LCDM model, but not all parameters of the LCDM model are FLRW parameters. In particular, the H_{0}, Ω_{M}, and Ω_{Λ} that have been so prominent in the recent history of this thread are derived parameters of the LCDM model.

From that it follows that (almost!) every FLRW model is also an LCDM model, but not every LCDM model is an FLRW model. In fact, most LCDM models are not pure FLRW models.

For some values of the FLRW parameters that are permitted by the LCDM theory, the FLRW model determined by those parameters ends in a Big Crunch. The Hubble parameter H(t) goes negative as the model approaches that Big Crunch.

And since (almost!) every FLRW model is an LCDM model, LCDM includes models with and without dark energy and with and without matter. The values of those parameters are not hardwired into LCDM. Their values are estimated from empirical measurements and associated theory.

Mike Helland cannot insist that the empirically determined values of those parameters are so well known as to rule out a Big Crunch without conceding that the values of those parameters are so well known as to rule out a non-expanding universe. In other words, Mike Helland is digging himself a hole. He cannot argue that the empirically determined values of LCDM parameters rule out a Big Crunch without conceding that Helland physics is toast.

That is the source of the only exceptions I know of to the general rule that every FLRW model is an LCDM model. One of the LCDM model's independent parameters is the age of the universe, and the LCDM model assumes that parameter is some finite age. 
That rules out all FLRW models that don't have a Big Bang. Note well that one of the models it rules out is what Mike Helland calls the "pure L" model. Here Mike Helland is admitting that some LCDM models end in a Big Crunch. But with that sentence he is saying he is convinced the empirical evidence proves to his satisfaction that our universe began with a Big Bang, has been expanding ever since, and will continue to expand forever. In other words, Mike Helland is admitting that Helland physics is toast. It gives me great pleasure to congratulate the author and sole proponent of Helland physics on his unequivocal rejection of Helland physics. 
19th November 2023, 10:39 PM  #1289 
Illuminator
Join Date: Nov 2020
Posts: 4,261

LambdaCDM doesn't insist on there being a Lambda and a CDM?
Seems pedantic either way. We have our algorithms for FLRW. Mine's the most general because it does flat, open and closed.
Quote:
I mean, a little. It's those spikes on the multipole moment graph, right? I keep trying to dive deeper into that. Still not totally sure what a multipole moment is and how it relates to the CMB. 

20th November 2023, 07:44 AM  #1290 
Illuminator
Join Date: Nov 2020
Posts: 4,261

These guys explain a lot about this, covering more ins and outs than just about any other explanation I've found: https://www.youtube.com/watch?v=aNkS...8wu2S8&index=1 

20th November 2023, 10:55 AM  #1291 
Penultimate Amazing
Join Date: Jun 2003
Posts: 55,297

That's because you don't really understand calculus. What do you think an integral is? What do you think a differential equation is? Solving a differential equation IS integration. It's not written with an integral symbol, because you're generally not doing a definite integral when you solve a differential equation, but mathematically you're doing the exact same thing.
Quote:
Quote:
Quote:
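The point that solving a differential equation IS integration can be shown in miniature (a toy example of my own, not from the thread): numerically solving the ODE y'(t) = cos(t), y(0) = 0, with Euler steps gives the same number as the definite integral of cos from 0 to T, namely sin(T).

```python
import math

# Solve y'(t) = cos(t), y(0) = 0, by stepping; compare with the
# definite integral of cos from 0 to T, which is sin(T) - sin(0).
T, n = 1.5, 200000
dt = T / n

y = 0.0
for i in range(n):
    y += math.cos(i * dt) * dt   # one Euler step of the ODE

print(abs(y - math.sin(T)) < 1e-4)  # True: the two are the same operation
```

The loop never writes an integral sign, but it is performing exactly the integration that the closed form sin(T) expresses.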

__________________
"As long as it is admitted that the law may be diverted from its true purpose  that it may violate property instead of protecting it  then everyone will want to participate in making the law, either to protect himself against plunder or to use it for plunder. Political questions will always be prejudicial, dominant, and allabsorbing. There will be fighting at the door of the Legislative Palace, and the struggle within will be no less furious."  Bastiat, The Law 

20th November 2023, 11:03 AM  #1292 
Illuminator
Join Date: Nov 2020
Posts: 4,261



20th November 2023, 11:40 AM  #1293 
Illuminator
Join Date: Nov 2020
Posts: 4,261

Understood.
I produced my algorithm (with the changing expansion rate) during our discussion about time dilation. My primary concern was the time in between two photons arriving at an observer. That's why my solution tracks two photons separated by an initial distance.

Translating that to a system of integrals was a total nightmare. I figured it was kind of like one of those simple CAs that produce seemingly random (but just complex) results. The kind of simple rules that are nice for algorithms but not for equations. I thought my solution was verging on that territory.

W.D.Clinger added a nice little abstraction, taking the initial distance that separates the photons out and treating it separately. So instead of two photons separated by a distance, you just have one photon and a reference distance. Now instead of subtracting the distance between the two photons, you're judging the offset distance from the observer, so you don't have to subtract by zero, which neatens up the equations, compared to the mess I had developed. 
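The equivalence of the two formulations can be seen in a toy constant-H model (step size and values are my own assumptions): the separation between two backward-traced photons obeys exactly the same recurrence as a single reference distance shrunk by H each step.

```python
# Toy constant-H model (illustrative numbers): the separation between two
# photons traced with d += c - H*d follows s -> s * (1 - H) each step,
# which is also how a single "reference distance" evolves under the same H.
H, c, steps = 0.001, 1.0, 500

x1, x2 = 0.1, 0.0   # two photons, initially 0.1 apart
a = 0.1             # one reference distance, evolved by the same H
for _ in range(steps):
    x1 += c - H * x1
    x2 += c - H * x2
    a -= H * a

print(abs((x1 - x2) - a) < 1e-9)  # True: the separation IS the reference distance
```

So nothing is lost by tracking one photon plus the reference distance instead of two photons, which is the abstraction being described.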

20th November 2023, 01:40 PM  #1294  
Illuminator
Join Date: Nov 2020
Posts: 4,261

Cranking up the Cunningham dial:
https://www.youtube.com/watch?v=aLyx4aDocaE



21st November 2023, 03:15 PM  #1295 
Illuminator
Join Date: Nov 2020
Posts: 4,261

Here are four ways to quantify redshift, where λ is a wavelength:

* z = λ_observed / λ_emitted − 1
* a = λ_emitted / λ_observed
* b = λ_emitted / λ_observed − 1
* r = 1 − λ_emitted / λ_observed
These are all fundamentally the same. The only real difference is the numeric range redshifts fall in for each. Any of these are valid choices, to be used as the value we record when we measure redshift, or in our equations.

z is kind of nice because things are spread out instead of crammed between 0 and 1 (or 0 and −1). But z doesn't relate to distance very well, unlike the others. For the most part, in a big bang universe, z=1 is around half way back to the beginning. So half is between 0 and 1, the other half between 1 and infinity.

Notice you never see "z" alone. It has one exact use, as the comoving distance in a de Sitter model, d = cz/H, otherwise it is a mere low redshift approximation. Everywhere else it appears is as 1+z or its inverse. For this reason, "a" is pretty useful in the equations, and is more or less the de facto way to reason about redshifts.

From ΛCDM Cosmology for Astronomers https://arxiv.org/pdf/1804.10047.pdf
Quote:
I think r = 1 − a is a very interesting option. It's between 0 and 1 for redshifts. The "r" stands for redshift, but we also use "r" for radius. But if the radius "r" is normalized to the Hubble length, say we give it a "natural cosmological unit" where r = 1 = c/H_{0}, then in a de Sitter universe (and therefore in my model too) the redshift r = radius r. 
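The conversions between these quantifications are one-liners. In this sketch, the formulas for b and r are my reading of the surrounding posts (b = a − 1, r = 1 − a); treat the labels as illustrative rather than standard notation.

```python
# Convert a redshift z into the other quantifications discussed above.
# Labels b and r are my own reading of the thread, not standard notation.
def quantify(z):
    a = 1.0 / (1.0 + z)
    return {"z": z, "1+z": 1.0 + z, "a": a, "b": a - 1.0, "r": 1.0 - a}

q = quantify(1.0)
print(q["a"], q["r"])  # 0.5 0.5: at z = 1, a and r are both one half
```

Each value is recoverable from any other, which is the sense in which they are "all fundamentally the same."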

21st November 2023, 08:47 PM  #1296 
Philosopher
Join Date: Oct 2009
Posts: 5,515

That "nice little abstraction", which Mike Helland described incorrectly, was discovered by Alexander Friedmann in 1922, and is familiar to everyone who understands the FLRW models.
The highlighted sentence is absurd, as becomes apparent when the formula is checked using realistic present-day estimates of Ω_{M}=0.3 and a=1. H_{0} is not 0.3 in any plausible units. Mike Helland's formula could be used as a subformula of a correct computation of the Hubble parameter, but Mike Helland's formula is not by itself a correct formula for the Hubble parameter.

I am ignoring the rest of Mike Helland's most recent Gish Gallop, but the two quotations above are so far off the mark that I thought someone should mention it. 
21st November 2023, 08:53 PM  #1297 
Illuminator
Join Date: Nov 2020
Posts: 4,261



21st November 2023, 09:16 PM  #1298 
Philosopher
Join Date: Oct 2009
Posts: 5,515

It seems Mike Helland is now saying he believes the value of the Hubble parameter is given by (1/(1+z))^{−3} = (1+z)^{3}.
Well, let's check that. When z=0, the value of that formula is 1, which is not the value of H_{0} in any plausible units. In his two most recent posts, Mike Helland has given two distinct incorrect formulas for the Hubble parameter. 
21st November 2023, 09:19 PM  #1299 
Illuminator
Join Date: Nov 2020
Posts: 4,261



21st November 2023, 09:26 PM  #1300 
Philosopher
Join Date: Oct 2009
Posts: 5,515


21st November 2023, 09:40 PM  #1301 
Illuminator
Join Date: Nov 2020
Posts: 4,261



21st November 2023, 10:09 PM  #1302 
Philosopher
Join Date: Oct 2009
Posts: 5,515

I call it (Mike Helland's repetitive posting of such equations) cargo-cult physics.
Mike Helland is quite good at copy/pasting equations. He's not so good at understanding what they mean, how they're derived, or their history. The Friedmann equations were derived in 1922 and generalized a bit in 1924. Redshifts had been observed previously, but were attributed to the Doppler effect of what we now call peculiar motions. In 1927, Georges Lemaître used the Friedmann equations to derive the Hubble–Lemaître law, which Edwin Hubble rediscovered independently in 1929. One of the reasons Mike Helland doesn't understand this stuff very well is that he's sort of allergic to notations that refer directly to the expansion of the universe. He prefers equations that refer to redshift z(t) over equations that refer to the scale factor a(t). He can indulge that prejudice because the two are related by (1+z)=1/a. But the scale factor a(t) is more fundamental—as a matter of history, mathematics, and physics. The redshift z(t) is a consequence of a(t), not the other way around. But Helland physics rests upon denying that z(t) is a consequence of a(t), so Mike Helland has put a lot of effort into failing to understand the scale factor a(t) and its importance. 
21st November 2023, 10:24 PM  #1303 
Illuminator
Join Date: Nov 2020
Posts: 4,261



21st November 2023, 11:54 PM  #1304 
Philosopher
Join Date: Oct 2009
Posts: 5,515

Hey folks, Mike Helland might have learned something.
Two days ago, Mike Helland thought a(t) was just a distance: As I noted at that time: Confirming my diagnosis, Mike Helland stated his belief that the scale factor a(t) cannot exceed unity: That inequality is based upon nothing more than Mike Helland's habitual focus on extrapolating backward in time instead of forward. As the universe continues to expand, the scale factor will become greater than 1.

Helland physics is based upon denying that expansion. Hence the focus. Hence his ludicrous claim that the scale factor cannot exceed 1.

The part I highlighted in red is a simple statement of fact. So is the part I highlighted in blue. Mike Helland has devoted years and years of effort toward rejecting the physical relationship between z(t) and a(t). Mike Helland could have responded by saying he now accepts that z(t) is a consequence of a(t), but that would have been an unequivocal rejection of Helland physics.

When you've worked so hard on a project for so long, it's hard to give up on it. Easier to tell yourself a 6-word retort is clever. 
22nd November 2023, 12:04 AM  #1305 
Illuminator
Join Date: Nov 2020
Posts: 4,261

False.
Quote:
That's not redshift.
Quote:
Quote:
Quote:
Quote:


22nd November 2023, 12:50 AM  #1306 
Philosopher
Join Date: Oct 2009
Posts: 5,515

Mike Helland is arguing with himself.
Two days ago, he wrote this: I suppose Mike Helland's understanding of the scale factor and of its role in my algorithm might have been so poor that he didn't realize the "a" in my algorithm is the scale factor (which is not a "reference distance").

As I have been saying, Mike Helland prefers talking about redshifts to talking about the scale factor a(t). As can be seen within several of his most recent posts, Mike Helland has made the mistake of thinking a(t) is just an alternative notation for redshift. Not so. The scale factor a(t) can (and soon will) exceed 1. It's pretty hard to interpret a(t) > 1 as a redshift. But a(t) > 1 makes perfect sense because a(t) is a scale factor, not an alternative notation for redshift.

Mike Helland doesn't want to understand that, because Helland physics is about trying to come up with formulas that match up with redshift but don't involve expansion. That's why he likes to pretend the scale factor is just an alternative notation for redshift.

Helland physics is based upon denying the expansion of the universe. I wish I were just making that up, but I'm not. That's most of what Helland physics is about. And that's why Mike Helland really, really doesn't want to admit the scale factor a(t) can exceed 1. If he were to admit the scale factor a(t) will exceed 1 as the universe continues to expand, he would be admitting Helland physics is rubbish.

I stand corrected: Mike Helland has devoted years and years of effort toward rejecting the physical relationship between redshifts and a(t).

What part of that could possibly be fiction? Is Mike Helland saying it's fiction because he is psychologically incapable of responding by saying he now accepts that z(t) is a consequence of a(t)? The only other possibility is that he's saying Helland physics is compatible with accepting that redshift is a consequence of the expanding universe. But for him to say that would itself be an unequivocal rejection of Helland physics. 'Tis a puzzlement. 
22nd November 2023, 01:14 AM  #1307 
Illuminator
Join Date: Nov 2020
Posts: 4,261

Again, making things up for no reason.
Yes, I realized that's the scale factor. Did you realize it's also a worldline?

Your algorithm sets the value of "a" at t_{0} to 1. So a_{0} = 1. Then it gets smaller. After 1 step, a_{1} = a_{0} − H * a_{0}. The result is you're tracking the world line of an object that is 1 million light years away at t=0 back to the big bang.

d(z) = a d_{0}

In this case d_{0} = 1, so d(z) = a.
Quote:
Quote:
Are you intentionally going for the Spaceballs thing here? When will then be now?
Quote:
Is everyone on this ship a Clinger?
Quote:


22nd November 2023, 01:54 AM  #1308 
Philosopher
Join Date: Oct 2009
Posts: 5,515

The highlighted claim is false.
By making that claim, Mike Helland incorrectly assumed I was using a naïve Euler method with a step size of 1. My code uses the classic Runge-Kutta method, aka RK4. Mike Helland's mistake is not terribly important, but it's another reminder of his naïveté when it comes to algorithms and computer programming. You'd think the presence of a procedure named "rungekutta4" would have offered him a clue, but I guess he didn't bother to look at the code I gave him. 
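For readers unfamiliar with RK4, here is a minimal sketch of the method applied to the scale-factor equation da/dt = H(a)·a (a Python illustration with assumed flat-LCDM parameters, not W.D.Clinger's actual Scheme code):

```python
import math

# Illustrative parameters (assumptions, not the thread's exact values):
H0, OmegaM, OmegaL = 0.07, 0.3, 0.7   # H0 in 1/Gyr, flat LCDM

def dadt(a):
    """Right-hand side of da/dt = H(a) * a, with H from the Friedmann equation."""
    return H0 * math.sqrt(OmegaM / a**3 + OmegaL) * a

def rk4_step(a, dt):
    """One classic fourth-order Runge-Kutta step."""
    k1 = dadt(a)
    k2 = dadt(a + 0.5 * dt * k1)
    k3 = dadt(a + 0.5 * dt * k2)
    k4 = dadt(a + dt * k3)
    return a + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate forward from a = 0.5 (z = 1); with these parameters the
# lookback time to z = 1 is about 7.9 Gyr, so a should return to ~1.
a, dt = 0.5, 0.01
for _ in range(789):
    a = rk4_step(a, dt)
print(round(a, 2))  # close to 1.0
```

Unlike a naive Euler step, RK4 samples the derivative four times per step, which is why it stays accurate with far larger step sizes.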
22nd November 2023, 02:12 AM  #1309 
Illuminator
Join Date: Nov 2020
Posts: 4,261



23rd November 2023, 03:19 PM  #1310 
Illuminator
Join Date: Nov 2020
Posts: 4,261

I think I've shown clearly that in my algorithm, the change in wavelength and time dilation of light in an expanding universe are apparent in the physical magnitudes directly represented. W.D.Clinger's variation doesn't show this directly.

I think the difference is primarily pedagogical. Do you want to show (teach) redshift and time dilation as a direct consequence of the expansion of space, or do you want to show (teach) the role of the scale factor, a ratio of physical magnitudes, in FLRW? Both would seem to have their purpose. Both take care of the issue of calculating d(t).

From Sept 24:
Originally Posted by Mike Helland
Originally Posted by W.D.Clinger
Assuming you know d_{0}

d(t) = a(t) d_{0}

So how do we find a(t)? Like this: Code:
H0km = 68
ΩΛ = 0.7
Ωm = 0.3
H0 = H0km / 3.08e19 * 3600 * 24 * 365 * 1e6
H = H0
c = 1
t = 0
d0 = 1
d = d0
data = []
while d > 0:
    t -= 1
    d -= H * d
    a = d / d0
    H = H0 * (Ωm * a**-3 + ΩΛ)**0.5
    data.append([t, d])

If you were feeling extra saucy, you could substitute a**-3 for (d/d0)**-3 and you see it works just fine, skipping a and z altogether. H is enough.

If you don't know d_{0}

Assuming we are interested in a lookback time, but we don't know the redshift, or current distance, we send a photon back in time, to see where it would be emitted from: Code:
c = 1
t = 0
a = 1
d = 0
data = []
while d >= 0:
    t -= 1
    d += c - H * d
    a -= H * a
    H = H0 * (Ωm * a**-3 + ΩΛ)**0.5
    data.append([t, d])
Code:
H = H0 * (Ωm * a**-3 + ΩΛ)**0.5
https://en.wikipedia.org/wiki/Friedm...led_derivation The changing volume of the universe affects the matter density by a power of 3, but not dark energy. What really makes this possible, though, is this line: Code:
d += c - H * d
Forget the whole observers thing; we don't even need that. Say every object that interacts with light has a relative velocity of c with the light. That includes all observers, then. If every object is moving away at v = Hd, and light is coming toward us at v = c, their relative velocity is v = c + Hd. Which breaks special relativity. Fix it by saying light travels at v = c - Hd. Then light travels at c - Hd + Hd = c relative to all objects. So, I think that's about half of what I've needed to prove. I'm thankful for all the great posts here, particularly W.D.Clinger's, many of which must've taken considerable time and effort. So thank you, W.D. 
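For what it's worth, the loop above can be run end to end. Below is a self-contained sketch, my consolidation rather than the exact code from the post: ASCII variable names, a guard to stop before the scale factor goes nonphysical, and the thread's stated parameters (H0 = 68, Ωm = 0.3, ΩΛ = 0.7). It steps a photon of our past light cone back in time with d += c - H*d and reports the implied age, the light cone's maximum proper distance, and the redshift at that turnaround:

```python
# Trace our past light cone back in time, 1 Myr per step.
# Units: distance in Mly, time in Myr, c = 1 Mly/Myr.
H0 = 68 / 3.08e19 * 3600 * 24 * 365 * 1e6   # km/s/Mpc -> 1/Myr
Om, OL = 0.3, 0.7

H, c = H0, 1.0
t, a, d = 0, 1.0, 0.0
d_max, a_at_max = 0.0, 1.0
while d >= 0:
    t -= 1
    d += c - H * d          # photon's proper distance, run in reverse
    a -= H * a              # scale factor shrinks toward the big bang
    if a <= 1e-4:           # guard: stop before Euler steps blow up
        break
    H = H0 * (Om * a**-3 + OL)**0.5
    if d > d_max:
        d_max, a_at_max = d, a

print(-t / 1000)            # implied age in Gyr (about 13.8)
print(d_max / 1000)         # turnaround distance in Gly (about 5.8)
print(1 / a_at_max - 1)     # redshift at the turnaround (about 1.6)
```

The turnaround printed at the end is the same z ≈ 1.6 feature discussed later in the thread.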

24th November 2023, 02:59 PM  #1311 
Illuminator
Join Date: Nov 2020
Posts: 4,261

There seems to be a flaw in the standard cosmological model, in the flux-luminosity-distance relationship.
The idea is the light coming from a distant galaxy is redshifted, and also time dilated, both of which "ding" the observed flux by a factor of (1+z), so:

F = L / (4π r^2 (1+z)^2)

We say that equals this:

F = L / (4π d_L^2)

Where d_{L} is the luminosity distance, d_L = r(1+z). This is a sort of hypothetical distance. I think of it like this. Imagine you're looking at a light bulb ten feet away with sunglasses on. How far away would you have to stand to see the light bulb without sunglasses for it to appear the same brightness as with sunglasses ten feet away? It's basically saying: we know this light is redshifted and time dilated. Supposin' it wasn't, and it was a regular Euclidean space with steady time. How far away would the light source be in that space and time to appear as it does in ours?

So what's "r"? Seems there are a couple of choices. In the expanding universe, there is the distance between two galaxies at t_{emit}, the distance at t_{now}, and also the light-travel-time distance.

The shortest one is the angular diameter distance. This is the distance the light is emitted from, and this distance determines the size it appears on the sky. Next is the distance the light actually traveled. Then the farthest one, the distance the galaxy is now. Light hasn't traveled this far.

You would kind of think that the distance used as "r" in the luminosity relationship would be related to how the galaxy appears on the sky, the shortest one. At the very least, the distance the light has traveled. But actually, the farthest distance is used here, the distance the galaxy is now. I haven't really heard a good justification for that choice. What am I missing? It seems like the worst choice.

I would propose that both the "r" in the luminosity relationship and how the galaxy actually appears on the sky are wrong. They use the long and the short distance respectively. On one hand, farther than the light has traveled; on the other, ignoring the effects of redshift and time dilation entirely. 
They should meet in the middle, and use the light-travel distance. It's what happens to the light, after all, that should affect how it appears.

This leads to different predictions for the supernova data, and also for angular size. To fit the angular-size data requires an evolution of galaxy sizes that simply does not fit the data anymore. The best fit for both sort of looks like an exponentially expanding universe with a constant expansion rate.

One would have to accept that the amount of matter in the universe and the effects of gravity are completely ignored by the expansion of the universe. It just does what it does. If it wants to expand, it's going to expand. A few baryons here and there don't bother it. It's like the honey badger of physical processes.

There's no initial singularity, so no inflation or anything like that. It's a pure dark-energy universe, a la the cosmological constant, so that's still there though. Go, Einstein. 
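As a concrete illustration of the bookkeeping being questioned here (a sketch with made-up numbers, not a claim about the data): in the standard relationship, the flux computed from the current distance r with two factors of (1+z) is identical, by construction, to a plain inverse-square law evaluated at the luminosity distance d_L = r(1+z):

```python
import math

def observed_flux(L, r_now, z):
    # inverse-square spreading over a sphere of radius r_now, dimmed by
    # one factor of (1+z) for redshift and one for time dilation
    return L / (4 * math.pi * r_now**2 * (1 + z)**2)

L, z = 1.0, 9.0
r_now = 30.0               # Gly; illustrative value for a z = 9 source
d_L = r_now * (1 + z)      # luminosity distance, 300 Gly

# The definition of d_L makes these two agree exactly:
print(observed_flux(L, r_now, z))
print(L / (4 * math.pi * d_L**2))
```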

24th November 2023, 03:12 PM  #1312 
Penultimate Amazing
Join Date: Jun 2003
Posts: 55,297

There isn't. There's a flaw in your understanding.
Quote:
Ignoring cosmology for a moment, why does luminosity fall off as 1/r^{2}? Because as light travels out from a point source, you've got the same energy spread out over a larger and larger area, namely the surface of a sphere centered on the source. How does that area scale with radius? As r^{2}. You're reducing the power density by the area that this power is spread out over. That radius r happens to correspond to the distance to the source in Euclidean geometry, but the distance to the source isn't directly what controls the scaling; the surface area that the power spreads over is what produces the 1/r^{2} scaling.

Now back to cosmology. Light is still being spread out over an ever-expanding area, the surface of a sphere. So the luminosity should still fall off as 1/r^{2}, for whatever r describes the area of the spherical surface the light is propagating out from. And what r describes that surface? The current distance to the source, NOT the distance that the light traveled, or the distance at the time of emission.
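The area bookkeeping in the first paragraph can be made concrete with a trivial sketch (arbitrary numbers):

```python
import math

def flux(P, r):
    # total power P spread over the surface of a sphere of radius r
    return P / (4 * math.pi * r**2)

P = 100.0
for r in [1.0, 2.0, 4.0]:
    print(r, flux(P, r))   # each doubling of r quarters the flux
```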
Quote:

__________________
"As long as it is admitted that the law may be diverted from its true purpose  that it may violate property instead of protecting it  then everyone will want to participate in making the law, either to protect himself against plunder or to use it for plunder. Political questions will always be prejudicial, dominant, and allabsorbing. There will be fighting at the door of the Legislative Palace, and the struggle within will be no less furious."  Bastiat, The Law 

24th November 2023, 03:26 PM  #1313 
Illuminator
Join Date: Nov 2020
Posts: 4,261



24th November 2023, 03:47 PM  #1314 
Penultimate Amazing
Join Date: Jun 2003
Posts: 55,297

Why would it?
In a nonexpanding space, what controls angular size of something you look at? How much of a circle centered on the observer that the object they're looking at takes up. If the object's diameter is 1/360th of the circumference of that circle, then that object takes up 1 degree of angular size for the observer. That ALSO means that the light coming from one side of the object is traveling at 1 degree different angle than light coming from the opposite side of the object. OK, now what happens when we add in expansion? Light from each side of the object is approaching you from the same angle that it would have if space didn't expand. Uniform expansion doesn't distort path directions. So light from the left side is still approaching you at a 1 degree difference in angle compared to light from the right side. It takes longer for that light to arrive, but it still arrives coming from the same angle it started out at. So you still see light from the right side coming from 1 degree off compared to light from the left side, which means that you're still seeing a 1 degree angular size. Or imagine it this way. Suppose that when that light was emitted, you were surrounded by a ring of galaxies, each touching edge to edge, what would happen as space expanded? Would the expansion of space open up apparent gaps in the ring? No, that wouldn't make any sense. But if the galaxies in this ring decreased their angular size, then there would need to be gaps because the number of galaxies you see can't change as a result of expansion. So the angular size of the galaxies cannot change as a result of expansion, because there's no way to introduce gaps in what you see. You need to go back to basics and study physics from the ground up. You keep making really, really basic errors, and thinking that you're in a position to evaluate more complex issues. You aren't. 

24th November 2023, 04:36 PM  #1315 
Illuminator
Join Date: Nov 2020
Posts: 4,261

Ok.
Let's say in Euclidean space, you have: Code:
A
B        O
C
It seems like if space expands between the light being emitted and received by O, only the light from B might make it, while A and C miss their target. It seems like A and C would intersect in front of O due to horizontally expanding space. Space expands vertically too, though. Is it the case that these directions always cancel out? Even with a dynamic expansion rate?

Part of what you're saying relies (or so it seems) on the fact that light leaves a place and is always coming toward you. All light from after the distance turnaround (so with z > 1.6 in LCDM) actually winds up farther away than it started at some point. So why wouldn't the size of the object on the sky be imprinted from there? That makes the angular size plateau.

Here's what I'm really getting at. A galaxy with z = 9:

* light emitted from: 3 billion light years
* light traveled: 12.9 billion years
* light source is now: 30 billion light years

So, in reality, we're saying light from a z = 9 galaxy is reaching us today, and also some point 60 billion light years away, forming a shell with a diameter of 60 billion light years, and reducing its luminosity thusly. However, it appears in the sky exactly the same size as if it were only 3 billion light years away.

I get that's what the model says. But do you ever wonder if this is actually describing reality or not? I guess I'm asking: do you have any doubts about that, or does that all accurately describe the z = 9 galaxy in reality? 
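The three z = 9 figures quoted above can be checked by numerically integrating a flat LCDM model, using the parameters stated earlier in the thread (H0 = 68, Ωm = 0.3, ΩΛ = 0.7) as assumed inputs; with these exact parameters, the light-travel time comes out a bit over 13 Gyr rather than 12.9. A sketch:

```python
# Distances to z = 9 in flat LambdaCDM by trapezoid integration.
H0 = 68 / 3.08e19 * 3600 * 24 * 365 * 1e9   # km/s/Mpc -> 1/Gyr
Om, OL = 0.3, 0.7
c = 1.0                                      # Gly/Gyr

def E(z):
    # dimensionless Hubble rate H(z)/H0
    return (Om * (1 + z)**3 + OL)**0.5

n, z_src = 100000, 9.0
dz = z_src / n
comoving = lookback = 0.0
for i in range(n):
    z0, z1 = i * dz, (i + 1) * dz
    comoving += 0.5 * dz * (1 / E(z0) + 1 / E(z1)) * c / H0
    lookback += 0.5 * dz * (1 / ((1 + z0) * E(z0)) + 1 / ((1 + z1) * E(z1))) / H0

print(comoving)                # distance now: about 31 Gly
print(comoving / (1 + z_src))  # distance at emission: about 3.1 Gly
print(lookback)                # light-travel time: a bit over 13 Gyr
```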

24th November 2023, 09:08 PM  #1316 
Penultimate Amazing
Join Date: Jun 2003
Posts: 55,297

No. That's wrong. I can only guess at your misconception; perhaps you are imagining expansion in only one direction and not all directions. But there is nothing special about B. The fact that it's in the middle is arbitrary; the inclusion of additional sources should illustrate that. As I said, uniform expansion doesn't distort lines. The direction between you and any other stationary object remains the same during uniform expansion. Lasers would not miss.
Quote:
Quote:
In the case of luminosity, the source is the center of the sphere and the observer is on the surface. We treat each source as pointlike, and consider the sphere of possible observers around it. In the case of angular appearance, the observer is at the center of the sphere and the source is on the surface, because the observer is pointlike but the source obviously cannot be, or it would have zero angular size. The difference in which radius we use comes from the difference in when the light is on the surface of the relevant sphere.
Quote:


24th November 2023, 09:38 PM  #1317 
Illuminator
Join Date: Nov 2020
Posts: 4,261



24th November 2023, 10:04 PM  #1318 
Penultimate Amazing
Join Date: Jun 2003
Posts: 55,297

You misunderstand, as usual. Given a model of uniform expansion, there is no alternative to calculating luminosity based on current distance and angular appearance based on distance at time of emission. You didn't understand how the model makes those predictions; I explained how. I'm not addressing nonexpansion models. Why would I?


25th November 2023, 01:51 PM  #1319 
Illuminator
Join Date: Nov 2020
Posts: 4,261



25th November 2023, 04:01 PM  #1320 
Illuminator
Join Date: Nov 2020
Posts: 4,261

That makes sense. Plain as day in polar coordinates.
ETA: This does require that A and C are free to expand away from B. I was thinking ABC represented a galaxy, with B as the middle.
Quote:
But I've been wrong about most things before. This "shell" with the surface area and the luminosity and whatnot. What is that? Well, that's just a 3D spatial slice at some time of a 4D light cone.

To make it easier to think about, subtract a dimension of space, so a 2D circle, and then add a dimension of time. At t = 0 the area is 0, and as time goes on it expands. A cone. The "surface area" is now the circumference of a circle at a time slice of the cone.

If space were not expanding and light were not time dilated, it would still be a cone. Assuming the tip is at the origin, light is traveling at v = c; nothing weird happens.

In the standard model of cosmology, that's not an accurate description of the light cone. Over a change in cosmic time, the change in proper distance of a photon is v = c - Hd, at least for a photon headed toward us (as everything else is moving away at v = Hd). Photons headed away from us have to be moving at c + Hd.

In my model, it's still v = c - Hd, but the change in speed is "absorbed" by time instead of space. Both models are in essence warping a 4D light cone, which contains all the 3D spherical shells the propagation of the light makes over time. So far so good? 
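For the record, "the change in proper distance of an incoming photon is v = c - Hd" follows from the chain rule: proper distance is D = a·χ, and the photon's comoving position obeys dχ/dt = -c/a, so dD/dt = H·D - c. A quick numerical check with a toy matter-era scale factor (a(t) = t^(2/3), chosen purely for illustration):

```python
# Verify dD/dt = H*D - c for an incoming photon, with a(t) = t**(2/3).
c = 1.0
t_obs = 1.0                       # photon reaches the observer at t = 1

def a(t):
    return t ** (2 / 3)

def chi(t):
    # comoving distance left to cover: integral of c/a(t') from t to t_obs
    return 3 * (t_obs ** (1 / 3) - t ** (1 / 3))

def D(t):
    return a(t) * chi(t)          # proper distance of the photon

t, h = 0.5, 1e-6
H = (2 / 3) / t                   # H = adot/a for this scale factor
dDdt = (D(t + h) - D(t - h)) / (2 * h)   # central finite difference
print(dDdt, H * D(t) - c)         # the two agree
```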
