One thing that isn’t explained is the modelling and the formulae that go into forming the theory which says certain sizes of certain bodies are impossible. When observations suggest they may not be impossible after all, rather than concluding the theory must be wrong, modifications or ‘fudges’ are bolted on to explain findings that are becoming ever more commonplace. Occam’s Razor applies: the simplest answer is that the basis of the theory is wrong, not that ever more special or complicated solutions are needed.

Take the Hubble constant, which started out at around 600 kilometres per second per megaparsec. Over the intervening years it has gone from 500, through 290, 180, 75 and 55, and finally to 67.4, where it stayed until recent measurements suggested 73.5. Lately it ranges from roughly 64 to 78, with the differences spanning anything from the effects of a brown dwarf to those of a supernova and everything in between, depending on what you decide you want your model to give.
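To put those swings in concrete terms, here is a minimal sketch in plain Python (standard unit conversions only, no cosmology library) of the “Hubble time” 1/H0 implied by each of the values quoted above. It is only a crude age scale, not a proper model, but it shows how far the headline number has moved.

```python
# Convert a Hubble constant in km/s/Mpc into the Hubble time 1/H0 in billions
# of years. The list of H0 values is just the historical figures quoted above.

KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16  # seconds in one billion years

def hubble_time_gyr(h0_km_s_mpc: float) -> float:
    """Return 1/H0 in billions of years for H0 given in km/s/Mpc."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC
    return 1.0 / h0_per_second / SECONDS_PER_GYR

for h0 in (600, 500, 290, 180, 75, 55, 67.4, 73.5):
    print(f"H0 = {h0:6.1f} km/s/Mpc  ->  1/H0 ~ {hubble_time_gyr(h0):5.1f} billion years")
```

The early values give an age scale of under two billion years, while the modern 67.4 versus 73.5 split is the difference between roughly 14.5 and 13.3 billion years.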

At the moment the observable universe is put at 27.6 billion light years across in light-travel terms, or about 92 billion light years once allowance is made for standard expansion. That figure rests on the Hubble constant and on how much further away the source has moved during the time its light took to reach us: the higher the constant, the smaller the universe, and vice versa. But a lot rests on the assumption of unadulterated photons and standard candles, light from a well-defined source that doesn’t change in any way over 13.8 billion years of travel, forgetting that we ourselves have travelled an estimated but essentially unknown 20 million light years in that time. We’re not sure, but relative to other places we could be moving at plus or minus a million miles an hour, give or take a billion. And space is assumed to be fairly homogeneous and isotropic.
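For the size figures, here is a rough sketch of where the ~92 billion light year number comes from, using the standard flat ΛCDM distance integral. The density parameters (Ωm ≈ 0.315, ΩΛ ≈ 0.685, radiation ignored) and the H0 = 67.4 input are assumptions; swap in 73.5 to see the “higher constant, smaller universe” effect for yourself.

```python
# Rough sketch: light-travel "size" vs comoving size of the observable universe
# under standard flat LambdaCDM assumptions (density parameters assumed below,
# radiation neglected). Not a definitive calculation, just the textbook integral.

import math

C_KM_S = 299_792.458          # speed of light, km/s
GLY_PER_MPC = 3.2616e-3       # billions of light years per megaparsec

H0 = 67.4                     # km/s/Mpc (assumed; try 73.5 as well)
OMEGA_M, OMEGA_L = 0.315, 0.685   # assumed matter / dark-energy fractions

def E(z: float) -> float:
    """Dimensionless expansion rate H(z)/H0 for flat LambdaCDM (no radiation)."""
    return math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def comoving_horizon_gly(z_max: float = 1e4, steps: int = 200_000) -> float:
    """Comoving distance out to z_max, in Gly, via a simple trapezoidal integral."""
    dz = z_max / steps
    total = 0.5 * (1 / E(0) + 1 / E(z_max))
    total += sum(1 / E(i * dz) for i in range(1, steps))
    return (C_KM_S / H0) * total * dz * GLY_PER_MPC

radius = comoving_horizon_gly()
print(f"comoving radius ~ {radius:.0f} Gly, diameter ~ {2 * radius:.0f} Gly")
# With H0 = 67.4 this comes out around 93 Gly across, close to the ~92 figure
# quoted above; raising H0 shrinks it, which is the point about a higher
# constant giving a smaller universe.
```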

Einstein once said “As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.”
