Blog of Daniel Baxter, now secure! :)


Archive for January, 2011

If Physics was Mathematical…

Modern science often leads down a blind pathway. A few entries back I brought up Occam's razor; this entry we're going to explore it further. A simple approximation for Pi is 22/7. Of course it's wrong, and we know this. Realistically speaking, science would never, ever require more than the first 11 decimal places of Pi for just about any computation you can think of (yes, building a circle the size of the universe out of hydrogen atoms requires 39 decimal places, but that's irrelevant as it's a useless hypothetical). Computer programmers generally use up to 20 digits.

What if you didn't need to know 11 decimal places in the first place – what if you only needed 6? In that case you could use 355/113, which is accurate to 6 decimal places. You would be taught this simple pattern: 113355; split it in two, 113-355, and put the bigger on top. But we need 11 decimal places, right? So what if Pi = 3.14159265358? That's a LOT simpler than having a transcendental number. It's what's called a Rational Number, whereas Pi is actually an Irrational Number.
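These approximations are easy to check – a quick sketch in Python (bearing in mind that math.pi is itself only a double, good to about 15 decimal places):

```python
import math

# How far each rational approximation sits from (double-precision) Pi.
for name, (num, den) in [("22/7", (22, 7)), ("355/113", (355, 113))]:
    approx = num / den
    print(f"{name} = {approx:.10f}, error = {abs(approx - math.pi):.2e}")

# 355/113 agrees with Pi through the 6th decimal place...
assert round(355 / 113, 6) == round(math.pi, 6)
# ...but not the 7th.
assert round(355 / 113, 7) != round(math.pi, 7)
```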

What if you didn't know Pi was transcendental? It is extremely difficult to prove that a number is transcendental, especially if no meaning is ascribed to an endless progression of decimal places. That is, what difference does it make if Pi's decimal expansion is infinitely long if you can only ever use the first 11 places? The rest have no purpose, no meaning, no reason for being.

This raises the obvious question – how do we know that Pi has to be irrational? The answer is simple: Pi is computed using the laws of mathematics, not the laws of science. Under mathematics it can be proven definitively that the ratio between a circle's circumference and its diameter requires infinite precision to express. This causes a major problem for science, however, because the best we can do is approximate, and often Occam's unhelpful razor is used to dismiss intelligent theories that are more complex or involve more numbers.

We don't know who first discovered Pi or when, but it has been used for thousands of years. Around 1900BC the Egyptians were known to have used the approximation 256/81 – correct to the first decimal place, making it about 99.4% accurate to the true value of Pi. Long before this the Egyptians integrated Pi into the construction of at least most of their pyramids in a variety of different ways. Consider the Great Pyramid of Giza, built c. 2560BC. It was originally 280 cubits high and each side was built as close as possible to 440 cubits long, making the perimeter 1760 cubits. Pi is found as 22/7 in this ratio: 1760/280 = 2 x 880/280 = 2 x 22/7. 22/7 is actually significantly more accurate than 256/81 – 22/7 is 99.96% accurate. This is pretty conclusive evidence that they didn't use 256/81 when building the Great Pyramid, or the sides would have come out closer to 442 cubits long.
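The arithmetic behind that is easy to verify – a quick sketch using the dimensions quoted above:

```python
# Great Pyramid ratio check: perimeter / height = 2 * Pi under the
# design described above (280 cubits high, 440-cubit sides).
height, side = 280, 440
perimeter = 4 * side
assert perimeter / height == 2 * (22 / 7)   # exactly 44/7

# Had the builders used 256/81 for Pi instead, the same
# perimeter/height = 2*Pi relationship would give a longer side:
side_under_256_81 = 2 * (256 / 81) * height / 4
print(round(side_under_256_81, 1))  # ≈ 442.5 cubits
```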

Now consider Planck's constant. If you look in any science book older than about 10 years that cites its value, it will say 6.6262…x10^-34. Look at anything published today and the value is 6.62606…x10^-34 (they may even add a few more decimal places).

But we know we sometimes run into trouble if we try to use Pi with fewer than 8 decimal places! Where do these numbers come from? When we look hard enough, nature always seems able to pull out new numbers. Suddenly there's another decimal place, or there's a new constant multiplier that applies between different quantities (just as Pi does). Why does the universe use such a complicated number as Pi – does it compute it, and if so, how?

One of the things that sets science apart from mathematics is its use of numbers. Experimental values are often far from theoretical values, but even when they're close, measurement shows us more decimal places than the theoretical models predicted. In either case, we're just reading off whatever value we can see. It's like trying to compute Pi by measuring the diameter and the circumference: you will always have an error, since a measurement can only give you a ratio of two finite quantities – a fraction – and Pi cannot be written as a fraction.

But it gets us thinking: in science, which numbers are rational and which aren't? We can never know whether Planck's constant is a rational number, because there is no way to tell the difference unless you can see behind the curtain to how the number was generated in the first place. If we didn't know how to compute Pi – if we had only science and not mathematics – we would think it was a decimal number with a limited number of decimal places. We wouldn't know whether it had 8, 11 or 20 decimal places, but we would have no reason to imagine further ones; where would they come from?

Occam's razor is unhelpful to science. You can't tell the difference between whether a number is simple or complicated; Occam's razor just tells you "it's probably simple". Mathematics tells you that, relatively speaking, there are "more" complicated numbers than simple ones. If that is true, then translated into science it means there are more complexities at rudimentary levels than there are simplicities.

In the late 80's cosmologists were absolutely certain that the universe was 20 billion years old – give or take at most 1 billion years. Today they are absolutely certain that the universe is 13.75 billion years old – give or take at most 170 million years. Do you see how these numbers have changed? When I attacked cosmologists a couple of entries back, I illustrated the point that they believe so strongly in their unproven science that they invent further unproven objects to handle discrepancies. I stumbled across a page today you might find interesting, from serious cosmologists urging that thought outside of big bang cosmology be considered in serious academic studies of the universe.

When everything is always followed to its "logical conclusion", science always spits out what it interprets as certainty. But it may be mistaken. The universe began as a chaotic event, and in that respect, if you had been alive to see it you would have been unable to predict the results until you watched what happened. It's the same with Langton's Ant building highways: Langton had no idea that would happen when he invented the concept; it arises out of "chaos". The ant gets "stuck" in a never-ending loop of moves, like a game of chess with 2 kings left at the end – it would go on forever if allowed to.
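Langton's ant takes only a few lines to simulate – a minimal sketch (the rule: on a white cell turn right, on a black cell turn left; flip the cell and step forward). Run it long enough, around 10,000 steps, and the chaotic phase gives way to the famous repeating "highway":

```python
def langtons_ant(steps):
    """Simulate Langton's ant on an unbounded grid; return the set of black cells."""
    black = set()                  # cells currently black; everything else is white
    x, y = 0, 0                    # ant position
    dx, dy = 0, 1                  # facing "up"
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = -dy, dx       # black cell: turn left...
            black.remove((x, y))   # ...and flip it back to white
        else:
            dx, dy = dy, -dx       # white cell: turn right...
            black.add((x, y))      # ...and flip it to black
        x, y = x + dx, y + dy      # step forward
    return black

# The early phase looks chaotic; the highway only emerges after ~10,000 steps.
print(len(langtons_ant(11000)), "black cells after 11,000 steps")
```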

But if all you saw were the two kings moving around the board in a chaotic pattern, would you be able to predict – even if you knew the starting conditions – how they came to their present state? Of course not. It could literally have taken tens of trillions of different paths. I know, I know, I'm being too modest. There are 265,252,859,812,191,058,636,308,480,000,000 (265.25 nonillion) possible orderings in which the other 30 pieces could have been captured alone (that's exactly 30!, if you're wondering how I came to that number). In the same way, a completed Rubik's Cube will never tell you what the starting condition was; all you can observe is the completed state. There are 43,252,003,274,489,856,000 possible states of a Rubik's Cube (43.25 quintillion).
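Both counts check out – the cube figure comes from the standard counting argument:

```python
import math

# Orderings in which the 30 captured chess pieces could have been taken:
assert math.factorial(30) == 265_252_859_812_191_058_636_308_480_000_000

# Reachable states of a Rubik's Cube: 8 corners and 12 edges can be permuted
# (with linked parity, hence the final /2); 7 corner twists and 11 edge flips
# are free, the last of each being forced by the others.
cube_states = math.factorial(8) * 3**7 * math.factorial(12) * 2**11 // 2
assert cube_states == 43_252_003_274_489_856_000
```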

The problem with the theory of the Big Bang is that it sets the starting conditions such that, if you had been alive to see them, you could have predicted the outcome before you saw it. Doesn't that violate everything you know about science and chaos? It would be the same logic as determining that Langton's ant started with a highway – which is a mistake; the ant actually starts in chaotic patterns until it eventually gets "stuck" in a loop. The greatest mathematician in the world would be unable to predict it without computing it (in other words, without actually observing it)! Why should the universe behave any differently? The expansion of the universe is a never-ending "cycle", but just because it has got "stuck" in this cycle does not mean the cycle accurately represents the starting conditions; there could have been some other chaotic event that gave rise to this "loop", if you will. The only thing that disagrees with this theory is Occam's razor, but I put it to you that it sounds far more sensible to your average mathematician who loves chaos!

So what we know is that we don't know whether the creation of the universe as we observe it was a chaotic event or a uniformly predictable one; both are able (to a degree, anyway) to explain how we got to where we are. Everything I'm proposing here is formed purely from the mathematical point of view; it's not based on any competing theory or anything I've read, just in case you're wondering. I am aware that competing theories to the Big Bang exist, but none of them prefer to start with a purely chaotic beginning. This is because such a beginning is non-computable – it's unknowable in a very real sense. It's like saying we know what happened from the point where the universe began expanding, but before that precise moment (which some believe was the Big Bang) we think there was a chaotic event that built this model, and we can't know what it was because it's impossible to determine the starting conditions.

Ah, wouldn’t that sit well with the cosmologists of today?

Evolution: Pseudoscience

Last time I explained that some physicists – in that case cosmologists – strongly believe the world functions according to whatever window of science their field deals with. In other words, they will tell you everything is somehow related to general relativity; for instance, Stephen Hawking says that gravity is what produced the universe! This is of course not a widely held belief within cosmology, and everyone who isn't a cosmologist just ignores it and isn't interested!

A quantum physicist, on the other hand, will tell you with conviction that the universe is a giant wave function. This is because they "observe" that behaviour in the quantum world. They don't observe "wave functions" in the "real world"; they apply their quantum models to it. So they think the real world is a wave function because, when they do things like shine light or electrons through slits, that's what they see. A mathematician, however, would tell you that you are a chaotic being who develops from an embryo, and that tiny differences in initial conditions (and/or in developmental conditions) will have complicated and unpredictable effects on your development.

A biologist may tell you that every creature – animal, plant, insect, fish, bacterium, bird, reptile, etc. – begins life without a sex, and that the sexes are labels we have defined because of our need to use them. This is because they can often observe the embryos of a species developing in the same state for a while before becoming male or female.

A physiologist would probably tell you that you are a bunch of living cells – somewhere between 10 and 100+ trillion of them – all working in a chemical system. An engineer or a computer scientist would tell you the exact opposite: that you are a neural-network processing machine with innumerable functionalities, including sight, depth perception, and many senses – touch, balance, taste, etc.

An anthropologist may tell you that you’re just a product of your surroundings and environment, and that almost all your characteristics, beliefs and personality can be attributed to your culture. A psychologist may tell you the polar opposite with equal conviction.

Evolutionists, particularly Darwinists and geneticists, will tell you that complicated life is just a by-product of DNA; that animals exist only because DNA exists and that DNA is what evolves.

Do you see yourself as a wave function? A piece of chaos? A collaboration of 50 trillion living cells? A sophisticated computer? A piece of 21st Century history? An intelligent monkey? A strand of DNA?

Theories are supposed to build on what has been observed in the real world. None of the above is a balanced view on who you are as a person. It’s biased towards the field of reference of the scientist who tells you what you are. And as I discussed in my last entry, those fields of reference are usually only useful in a specific context and are not “universal” truths.

For instance, consider chaos. Mathematicians widely believe that chaos affects everything in the universe, because they observe some things behaving chaotically. But this is not representative of reality. There are plenty of processes and "predictable" functions in the world which are not chaotic, and which function just fine even in a chaotic environment. It's not that chaos doesn't happen – it does. But it doesn't affect everything. Crystallization still occurs, and many crystals form perfectly predictable shapes like cubes; a truly chaotic process should yield more chaotic results than that. In the same way, as I explained, General Relativity does not necessarily apply to everything – most scientists would be fine with this, except cosmologists, who take GR as gospel truth.

The theory of Evolution makes a number of assumptions about the real world which have not been observed or experimentally confirmed. The biggest and most problematic concerns the function of DNA itself, and for the remainder of this entry I will be talking exclusively about DNA. In the future I may talk about the other problems.


I bet you've heard DNA called "DNA code" and/or "the blueprint of life". Not so long ago certain scientists told us that human DNA and chimp DNA are 99% identical. Never mind that chimp DNA is 10% bigger, and that that figure was arrived at by carefully choosing what "identical" means – regardless, it should mean that we are 99% identical to chimps. Interesting.

Yet biologists and zoologists think that humans are more like apes than they are like chimps. Recently certain people have criticized the human and chimp DNA correlation and have suggested that under a "fairer", more neutral context it's probably more like 95%. In any case – and you can correct me if I'm wrong – the figure of 99% is itself based on comparing less than 1% of the overall DNA strands.

Darwinists told us this was rock-solid evidence. Yet now that the similarity is widely regarded as at most 97%, they still claim the same thing. How far can our DNA diverge before they stop claiming that evolution works solely on DNA? Believe it or not, the 1% of the genome that was compared between chimps and humans is largely related to physical design: eyes, ears, mouth, organs, skin, hands, feet, legs, arms… What if they had taken a completely "random" section? Physical characteristics are fine, but they're not all there is to an organism. Well, certain scientists now tell us that human DNA is about 92.3% similar to apes – that's the "lower limit" of serious scientific estimates. Compare that to 95% for chimps, and we're being given conflicting data to begin with, because the context defines the result. You could get a 100% correlation if you wanted to; you just have to set the context to what you want and go from there.

DNA is not a code. DNA is not a blueprint. DNA is not a program. We interpret it in ways that are completely unhelpful to understanding it. Bonellia males and females are so different, for instance, that it was once believed they were two entirely different species; before we knew about DNA it was virtually impossible to tell that they were the same. Yet Bonellia males and females develop from exactly the same DNA – their sex is not controlled by chromosomes or any genes. The females are marine worms about 15cm long. The males are about 1-3mm long; they live either inside a female or attached to her outside, essentially as parasites. A male has only about 150 cells, most of which are used for reproduction. That makes the female many thousands of times the size of the male.

Even if humans contain only 10 trillion cells (the lower estimate), if you unravelled every single strand of DNA in your body and joined them end to end, you could stretch it from the Sun to Pluto and still have some left over. Given that, it's not surprising that evolutionists think DNA is so special. But they are mistaken about its role, because they're making assumptions that are not based on observation. If DNA is a code, then show us how the "code" is read. You don't even need to be able to read it yourself; observing it being used as a code would be enough.
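The arithmetic behind that claim, as a rough sketch (assuming ~2 metres of DNA per cell and Pluto's mean distance of about 39.5 AU – both ballpark figures):

```python
CELLS = 10e12                  # lower estimate of human cell count
DNA_PER_CELL_M = 2.0           # roughly 2 metres of DNA per (nucleated) cell
AU_M = 1.496e11                # one astronomical unit in metres
SUN_TO_PLUTO_M = 39.5 * AU_M   # Pluto's mean distance from the Sun

total_dna_m = CELLS * DNA_PER_CELL_M   # about 2e13 metres of DNA
print(total_dna_m / SUN_TO_PLUTO_M)    # number of Sun-to-Pluto trips
```

On these figures the strand would cover the distance more than three times over, so "some left over" is an understatement.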

DNA is digital – a quaternary system. Now consider that virtually no organisms can survive on their own; they are dependent on other organisms for their survival. Why? What evolutionary advantage did this provide? And more importantly, how is this related to DNA?
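"Quaternary" here just means a four-symbol alphabet: each base carries exactly two bits of information. A minimal sketch of that idea (the particular bit assignment is my own illustration, not anything biological):

```python
# Map each base to 2 bits: one quaternary digit is exactly two binary digits.
BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASES = {v: k for k, v in BITS.items()}

def encode(seq):
    """Pack a base sequence into an integer, two bits per base."""
    n = 0
    for base in seq:
        n = (n << 2) | BITS[base]
    return n

def decode(n, length):
    """Unpack `length` bases back out of an integer."""
    return "".join(BASES[(n >> (2 * i)) & 0b11] for i in reversed(range(length)))

assert decode(encode("GATTACA"), 7) == "GATTACA"
```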

DNA is not the primary “building block” of life. The primary building blocks of life are proteins. Without proteins, DNA is useless. Without proteins, life would not exist. You cannot reduce one to the other. Yes, DNA “codes” for proteins, but you need proteins in order for it to work in the first place.

Crichton made many scientific mistakes in Jurassic Park (not least claiming that all vertebrate DNA is inherently female – completely false in the context he was talking about, birds, which are inherently male). But even if you had an abundance of dinosaur DNA, complete, without any errors or missing bits, and you injected it into the egg of another species, all that could possibly "grow" from it is some very strange form of that species' egg; you would not get anything remotely resembling a dinosaur. You would need the specific proteins that go into a dinosaur embryo – in fact, the proteins specific to that species. Every creature that starts life as an embryo starts without ever accessing its own DNA. This continues for different lengths of time in different species – cells grow and split, and life develops, without its own DNA. Then, eventually, the embryo starts to access parts of its DNA as it develops. DNA is not the building block of life; life starts without it. You can't start with DNA and nothing else – you would never be able to use it. In the same way, you couldn't start with proteins and no DNA. You can't "reduce" one to the other; they are completely co-dependent.

The other thing that is put in embryos is RNA, and it is always accessed before the DNA. Darwinists will typically say "so what? RNA comes from DNA". Well yes, it comes from DNA, but it doesn't come from the embryo's DNA – it is given from the mother's DNA. There's a difference. Changes to this part of the developmental cycle that might occur in, say, Evolution do not affect the child; they affect the grandchild. I'll give you an example. Snails have spiral shells, and the direction of the spiral is set by the RNA in the embryo; that RNA comes from the mother, so the spiral's direction is independent of the child snail's own DNA. For evolution to progress anything in this process, it has to wait an entire generation, and this is not handled under the theory of evolution. That is to say, what good is it to give a "beneficial" mutation to a creature if its children are the ones who will "benefit" from it, but who are equally likely to pass on the "original" gene to their own children?
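That one-generation lag is easy to model. A toy sketch of a maternal-effect rule – an individual's coiling is decided by its mother's genotype, not its own (the gene names and the dominance rule here are illustrative assumptions):

```python
def coiling(mother_genotype):
    """Phenotype is set by the MOTHER's genotype: 'D' (dextral) dominant over 's'."""
    return "dextral" if "D" in mother_genotype else "sinistral"

# A 'Ds' mother produces an 'ss' child (with an 'ss' father)...
mother, child = "Ds", "ss"
# ...and that child still coils dextrally, because its MOTHER carried 'D':
assert coiling(mother) == "dextral"      # the child's phenotype
# Only the grandchildren show the new shell, once the 'ss' child is a mother:
assert coiling(child) == "sinistral"     # the grandchild's phenotype
```

The mutation is present in the child's DNA for a whole generation before any body ever shows it.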

Let's go back to Jurassic Park: where would you get the RNA for the dinosaur embryos? In addition to proteins and DNA, you need RNA. Yes, it comes from DNA, but unless you know exactly where it comes from, you can't locate it by chance alone – that'd be like winning the lottery every day for a year.

Let's go back to those lovely marine worms, Bonellia. What caused the males to (presumably) change from a 15cm-long worm into a 150-cell organism is not entirely clear (well, it was Evolution, of course), but you would expect that changing an organism so much would require a lot of changes to DNA. This is what Evolutionists expect; it is what the theory of Evolution states. Yet the females still grow to the "normal" size, so the males must have changed their form with minimal changes to their DNA. Aha. So now we know what changed their form – the only possible thing that could have done it: proteins. Males and females start with the same DNA, the same RNA and the same proteins, and somewhere along the line, if a larva manages to attach itself to a female, it sets off a switch that halts its development and causes it to develop into a tiny organism instead of the full creature.

Like I said towards the beginning, the theory of Evolution does not correlate to the observable world. The real world does not function the way Darwinists think it does. This is because they make DNA out to be something that it is not. Richard Dawkins (not Darwin, who knew nothing of DNA) talks about the "selfish gene": the idea that DNA is the master and the organism it makes is its slave. Really? How did he come to this conclusion? He did not come to it by observing the real world.

DNA gets copied trillions of times in humans – probably tens of trillions of times. Why does it get copied? So it can be used by proteins. Proteins decide when and if they want to use the DNA and do whatever they want with it. Does it really sound like DNA is in charge to you? DNA is nothing more than a tool in the system of creatures to be used however the creature wants.

Let me be even more explicit. DNA cannot, for its own purposes, do anything that puts the survival of its species into jeopardy. The only thing it is allowed to do is cooperate with the development of the species. Therefore it is not in charge – not by a long shot. It doesn't do anything on its own, and its so-called "code" could mean just about anything without specific proteins and specific RNA to "interpret" it.

Until the theory of Evolution manages to reconcile with the way DNA really works, the theory is pseudoscience. It is not testable (except in illegitimate thought experiments), and it treats DNA as something that can explicitly do whatever it wants. I am not saying that Evolution does not occur. I'm simply stating a fact: the theory of Evolution is not even close to describing how the real world works. The real world does not function as DNA; when was the last time you saw clear evidence of DNA at work? Never, right? Nothing we observe looks anything like DNA.

There are trillions of connections in the human brain; you cannot spell out that "network" in roughly 3 billion base pairs (the length of human DNA). At the same time, if our bodies contain 50 trillion cells, how do you think each one decides whether it's going to be a blood cell, skin cell, liver cell, brain cell, nerve cell, lung cell, muscle tissue, hair follicle, etc.? Does DNA decide? Of course not. DNA has no say in the matter at all. Red blood cells don't even contain DNA – and by count they make up a huge fraction of all the cells in your body.
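The information-theory version of that argument, as a back-of-envelope sketch (the synapse and neuron counts are rough order-of-magnitude assumptions):

```python
import math

BASE_PAIRS = 3e9               # approximate length of the human genome
GENOME_BITS = BASE_PAIRS * 2   # 2 bits per base pair (4-letter alphabet)

SYNAPSES = 1e14                # rough estimate of connections in the brain
NEURONS = 8.6e10               # rough neuron count
# Naively, naming one endpoint of one connection takes log2(NEURONS) bits:
wiring_bits = SYNAPSES * math.log2(NEURONS)

print(f"genome: {GENOME_BITS:.1e} bits, naive wiring diagram: {wiring_bits:.1e} bits")
# On these assumptions the genome is several orders of magnitude too small
# to spell out the brain's wiring explicitly.
assert wiring_bits / GENOME_BITS > 1e5
```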

I know what you're going to say: "It's decided by RNA, and RNA is a subset of DNA." Rubbish. There is structure to our bodies. That structure is decided by cell placement, and cell placement is not decided by RNA. If it were, it would be impossible for two separate structures to form in two different locations – and no, don't think along the lines of kidneys; think bigger: nerve cells, muscle, etc. A cell's placement determines the context of what it can develop into. For this to work, DNA can't be in charge; DNA only does what it is told to do. RNA is not in charge either. Proteins are in charge – the basic building blocks of life – and they arrange everything according to what is supposed to develop.

Although it's now more accepted in scientific circles that life couldn't have started on its own in the primordial soup of Earth by chance alone, a spontaneous origin is still at the core of Evolution theory – framed instead as an inevitable process of the laws of physics, just like chemistry and crystallization. No one can "reduce" chemistry to quantum physics and show how all chemical processes are somehow implicit in the laws of quantum mechanics, nor can they do the same with crystallization. They still believe it is implicit in QM; they just don't know how to make it explicit. The same logic applies here: life getting started is a chemical process, supposedly implicit in the laws of chemistry – we just don't know how to make it explicit.

The problem that does remain, however, is that DNA and proteins are completely and totally co-dependent. It is impossible to reduce one to the other. Without proteins, DNA cannot replicate or even be used at all. Without DNA (or segments of it transcribed as RNA), proteins can't build anything – most importantly, they can't build other proteins, since the instructions for proteins come from DNA. You need both, not one or the other. And if by some miraculous set of circumstances DNA did piece itself together on its own, without also building the right proteins there would still be no life.

Just to put the nail in the coffin of the "selfish gene" (Dawkins' phrase, not Darwin's): any organism that reproduces sexually destroys its own DNA in the process. Its children have "unique" DNA, inherited (roughly) 50:50 from both parents. If DNA were "selfish" and "in charge", it wouldn't allow itself to be destroyed for reproduction.

Cosmology: What a Load of Science-Fiction!

Cosmologists tell us many things about the universe. Some they believe with good reason; others show that their faith in general relativity goes well beyond the scope of how science is supposed to be used.

The orbits of Mercury, Venus, Earth and Mars can all be explained using Newton's laws, which predict elliptical orbits for planets – at least to a degree. There is some level of error, most notably for Mercury, whose orbit is not precisely elliptical (first discovered in 1859) and requires general relativity to explain more correctly.

This gives us a starting point. Newtonian Mechanics explained why planetary orbits are elliptical, and General Relativity explained why they are not.

Furthermore, General Relativity predicts that light will be bent by gravity. Newtonian mechanics calculates gravity using the inverse-square law, which requires the masses of the two objects; as light has no mass, it should be unaffected by gravity. It has been experimentally observed that gravitational lensing does indeed occur, as predicted by GR.
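The GR prediction for light grazing the Sun is the classic 1.75 arcseconds, from the deflection formula 4GM/(c²b). A quick sketch with standard solar values:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
C = 2.998e8          # speed of light, m/s
R_SUN = 6.957e8      # solar radius, m (impact parameter for a grazing ray)

# GR deflection angle for light grazing the Sun: 4GM / (c^2 b)
deflection_rad = 4 * G * M_SUN / (C**2 * R_SUN)
deflection_arcsec = math.degrees(deflection_rad) * 3600
print(f"{deflection_arcsec:.2f} arcseconds")   # ≈ 1.75, as Eddington measured in 1919
```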

Time to produce today's horrible truth: we don't know how to calculate gravity precisely using General Relativity in the general case. It's not that we can't be bothered; it's that nobody knows how to do it. I said we can use GR to predict Mercury's orbit, but there's a reason that's possible. To calculate it we use the Schwarzschild solution, one of the very few known exact solutions of GR, and it only works when calculating the gravity between two bodies of vastly different size, because the larger one is always treated as "stationary".

So what about two bodies of roughly equal size – say, a binary star system? We can predict that using Newtonian mechanics; it's easy, because Newtonian mechanics tells you where the "central force" (the barycentre) will be. But try doing it with GR, and you can't come close to predicting it with the same degree of accuracy as NM! The one thing you can predict with GR, however, is the rate of energy loss (which is not handled under NM). And there is a further problem: both NM and GR disagree between observables and predictions when it comes to binary systems.
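The Newtonian side of that calculation really is simple. A sketch of the two quantities it hands you directly – the barycentre, and the orbital period from Kepler's third law (an idealised two-body system, circular orbit assumed):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg

# Two stars of one solar mass each, separated by 1 AU.
m1 = m2 = 1.0 * M_SUN
a = 1.496e11           # separation, m

# The "central force" acts at the barycentre: r1 = a * m2 / (m1 + m2).
r1 = a * m2 / (m1 + m2)
assert math.isclose(r1, a / 2)   # equal masses orbit a point midway between them

# Kepler's third law gives the period: T = 2*pi*sqrt(a^3 / (G*(m1+m2))).
T_years = 2 * math.pi * math.sqrt(a**3 / (G * (m1 + m2))) / (365.25 * 86400)
print(f"period ≈ {T_years:.2f} years")   # ≈ 0.71 for this system
```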

So instead of seeing how we might fine-tune the theory, cosmologists are happier to assume that there are further unseen forces acting on the system – say, a smaller third object – causing the behaviour that isn't predicted by GR.

But really, they should know better. Cosmologists call themselves scientists, yet they are adamant about the correctness of their theories – even in the face of contradictory evidence. Most other scientists know that scientific theories are not necessarily "how the mechanics of the world actually work" but are merely useful models of how the world functions. Indeed, if chemists were as adamant as cosmologists, they would still be waiting to find those last two predicted atomic elements. But instead of seeing the existence of 8 out of 10 predicted elements as rock-solid proof that all 10 must exist, they saw it as evidence that the theory was good – but not good enough. Cosmologists should look at the world under the same scientific conditions: they should be saying their theories are "good" but not always "good enough".

Cosmologists are certain that there is overwhelming evidence for the Big Bang – there is a consensus on it. This goes to show how absolutely convinced of General Relativity they are. But they go further in assuming the theory's correctness: they make predictions about the existence of yet more stuff in the universe which is not implicit in any way in GR or any other theory of physics, and which they have been consistently unable to prove.

For instance, Dark Matter. It attempts to fill a hole left by our current understanding of the universe. There's a problem our current theories can't explain, so we've decided that something else must exist to explain it! Yes, I'm quite sceptical of Dark Matter, and with good reason.

I've shown that General Relativity predicts gravity very well in a vast variety of situations (even if it's not computable in every one of them). There is a problem, though: when we look at an even larger scale – say, an entire galaxy – GR fails us (it also doesn't seem to work on the quantum scale). Here's why: galaxies appear to rotate at the wrong speed under GR.

So this is basically a more or less coherent model of how to calculate gravity at present:

1. Quantum Scale: Use Quantum Mechanics' (hypothetical) gravitons.
2. Human Scale: Use Newton’s Inverse-Square Law.
3. Solar System Scale: Use General Relativity.
4. Galactic Scale: Invent Dark Matter, sprinkle galaxy with desired amount of Dark Matter, then apply General Relativity.
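The list above, rendered (tongue firmly in cheek) as the dispatch function cosmology effectively uses today – the scale thresholds are my own arbitrary illustration:

```python
def gravity_model(scale_m):
    """Pick the 'correct' theory of gravity by length scale (satirical sketch)."""
    if scale_m < 1e-9:
        return "quantum mechanics (hypothetical gravitons)"
    if scale_m < 1e7:                      # up to roughly planetary size
        return "Newton's inverse-square law"
    if scale_m < 1e14:                     # solar-system scale
        return "general relativity"
    return "general relativity + dark matter to taste"

assert gravity_model(1.0) == "Newton's inverse-square law"
assert gravity_model(1e21).endswith("dark matter to taste")   # galactic scale
```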

Actually we shouldn’t be surprised that GR has its limits, after all every theory only seems to work in a specific window of reference. But try telling that to today’s cosmologists!

Quasars remain one of the most mysterious intergalactic objects, even though cosmologists have recently settled on a consensus about them. I personally don't feel a consensus here is beneficial for science, especially when it effectively inhibits anyone from thinking for themselves on the matter.

There ain't no cure for those Redshift Blues. For a while now quasars have posed this problem: either they violate Hubble's Law, or they appear in places where they shouldn't. Last year a paper was released showing that quasar light is devoid of time dilation. This shocked cosmologists. Even more shocking, in the face of these problems some cosmologists jump up and down shouting "it proves there's dark matter" – never mind the little problem that dark matter should affect the light from quasars and stars equally…

I watched as the consensus on quasars shifted; they were like sheep. One minute confidently professing "quasars are all distant objects, and there's some other explanation for why you found them where they shouldn't be"… the next, "many quasars are actually nearby, but we're still 200% confident we can use redshift to calculate the distance of every other galactic object that isn't a quasar". It shook them to the core when that paper was released last year showing quasars devoid of time dilation. Of course, if they want to explain it using Dark Matter they run into the problem that Dark Matter now has mutually exclusive properties: to repair some predictions it has to be concentrated towards the centres of galaxies, but for others most of it has to sit at the perimeter.

In addition to the unorthodox use of science already discussed, cosmologists also love their Big Bang theory – so much so that they’ll “change” just about any data to make it work. Inventing dark matter was just the beginning. Cosmologists have said, many times, that the background radiation proves the existence of the Big Bang. There’s just one minor mishap: before the background radiation was actually measured in the 1960s, many estimates were made of what its value might be.

Guess what those estimates were? In the 1920s Sir Arthur Eddington calculated this value as about 3 K using classical models of the universe. In other words, he did not care about any Big Bang “echo”; his calculation related only to galactic objects, the result of simply having stars and quasars etc. heat everything up. This is very close to the actual measured value of about 2.7 K. Big Bang cosmologists, on the other hand, estimated the value as anywhere between 5 K and 50 K; even the lower estimate of 5 K was still way off by comparison. So when cosmologists triumphantly claim to have found the “echo” of the Big Bang, all they did was measure the background noise of the universe as it is now – as it was predicted to be if there were no Big Bang – offer no substantial proof that its origin was anything but purely galactic, and then claim it as evidence of the “echo” of the Big Bang. What actually happened, of course, is they looked for the echo of the Big Bang and there wasn’t one!

Every time observables disagree with theory in cosmology, cosmologists invent something new to handle it: Dark Matter, the Cosmological Constant, and so on. They do not allow the Big Bang theory, or indeed General Relativity’s predictions of gravity and other galactic characteristics, to be falsified by observations. They don’t even allow Hubble’s Law to be falsified by the new consensus on quasars. Instead they insist that the theories are correct, and that any problems can be reconciled by introducing further assumptions and theories. Because they don’t allow their theories to be falsified, their theories are pseudoscience – much like the theory of Evolution is pseudoscience (and next entry I will fully explain this too).

The universe does not function the way that is predicted by GR. The “curvature of space-time” envisioned by Einstein may well be describing a different process.

To those of you who note that I’m not an astrophysicist, or even a scientist, and that I probably don’t know what I’m talking about: the method of science is very simple. Any theory that cannot be falsified by observations or experiments is not science. That isn’t to say a theory has to be false to be science; rather, it has to be testable in a “neutral” environment such that it could potentially be falsified if it were false. I’m not arguing that GR is not science – quite the opposite. I’m arguing that its current use in cosmology is pseudoscience, because cosmologists do not treat it as a theory that could potentially be falsified; any “contradictory data” they encounter they simply interpret to mean “GR is correct, and something else is causing that problem with our science”. The Big Bang, on the other hand, I am saying is pseudoscience, and I’ve already gone through plenty of reasons in this post. My favourite is the fact that any observation of the universe is claimed as further proving or supporting the theory, even when those observations might have been predicted better by a non-Big-Bang universe (such as the background radiation; there are lots of others too, and if you do your own research you will discover this for yourselves). So I just want to reinforce the point that I am using science as it is intended.

Those of you who thought my comments on the problems with QM and with String or M-Theory were harsh may well be refreshed to hear me point out the problems with General Relativity. It doesn’t mean I think the theory is unhelpful; I just don’t think it’s as true as cosmologists think it is. In fact, most scientists would not treat it the way cosmologists treat it; they would treat it as a theory, not as a truth. I’m not claiming to be smarter than physicists, I’m simply pointing out some of the errors in their methodologies (or if you prefer, “scientific disciplines”).

Uncertainty Certainly Un-Testable

In my last article I delivered a horrible truth to you: that physics is not about truth. Today you’re going to learn something even more horrible: quantum mechanics is largely un-testable, and that makes it pseudoscience.

I’m going to address the uncertainty principle. According to the “correct” scientific view, all matter exists as a superposition of “all possible states”. The statement itself is self-contradictory, and it is un-testable. The reasons we’re given are in principle derived from the non-scientific ideology of Occam’s razor: that the simplest possible answer is the correct one. Every observation ever made has always given a distinct state; and if you look back through entries in my blog, in principle you can know both the velocity and the whereabouts at the same time, which gives us yet another problem.

QM is successful because it is very good at predicting and explaining many observable things; but then, so are Newton’s “incorrect” Laws of Motion. Pion decay happens by pure “chance” according to QM; there is nothing that causes it. How is that possible? In this entry I hope to convince you that quantum mechanics is nothing more than a flawed world-view.

As I remarked last time, a lot of people will say “people who are smarter than I am have worked out these scientific truths”; this time I tell you the opposite is true: there are a lot of very intelligent scientists out there who disagree with many current theories and “laws” of science. Albert Einstein remarked, “If this is correct, it signifies the end of physics as a science”. He also said, “Everything should be made as simple as possible, but not simpler.” You would do well to remember my criticism of Occam’s razor in my last entry.

The Quantum Uncertainty Principle works to a certain degree, but it’s based on purely circular reasoning. At the subatomic level it is impossible to make an observation without also disturbing the object you are observing. This is because you have to “hit” it with another object of roughly the same size (for simplicity’s sake). Because you have hit the object, it is no longer where it was when you measured it, and hence you can no longer make any further measurements of that particular state; all subsequent measurements will only tell you where the particle is now.
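To put the principle under discussion into numbers: the textbook statement is that the position spread Δx and the momentum spread Δp satisfy Δx·Δp ≥ ħ/2. Here is a minimal sketch of that claim; the 0.1-nanometre confinement width is an assumed example value (roughly atomic size), not a figure from this post:

```python
# Minimal sketch of the textbook uncertainty relation: dx * dp >= hbar / 2.
HBAR = 1.054571817e-34  # reduced Planck constant, in joule-seconds

def min_momentum_spread(dx):
    """Smallest momentum spread (kg*m/s) the relation allows for position spread dx (m)."""
    return HBAR / (2 * dx)

# Assumed example: a particle confined to about 0.1 nanometres.
dp = min_momentum_spread(1e-10)
print(dp)  # about 5.3e-25 kg*m/s
```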

It’s fine to understand that. QM takes it a step further and states that all matter exists as a probability wave that only condenses (or collapses) into a specific state when measured. I interpret this a little differently: until you make an observation, you must consider the state of the particle to be a probability wave. There, that makes more sense. It means we have a working model, but it doesn’t mean that that’s how the real world, or “nature”, really works.

Consider a photon. It always travels at a constant speed, it will travel “forever”, it travels at the maximum rate the universe can handle, and it travels with wave-like properties. Photons also possess another unusual attribute: they always travel in a straight line unless disturbed, and they do not exist at rest, only in motion. A photon doesn’t accelerate, and it doesn’t decelerate.

What makes light go? Well, if you can think of light as having no mass, then one can also think of it being “hit” the way a cricket bat hits a cricket ball, but even just a slight nudge will send it on its way. Except that, because it has no mass, light reaches its maximum velocity instantaneously. So what produces its waveform?

One of the further puzzles of the quantum world is entanglement, by which an entangled particle can “communicate” instantaneously, or even in the past, with its pair. In my discussion of the quantum double-slit erasure experiment I noted that the results do not prove that quantum objects can’t possess both wave functions and particle functions at the same time, yet the experiment does definitively prove the entanglement link. This presents us with a problem, as given by the EPR paradox: the first measurement you choose to make on an entangled pair always returns a 100% correlation. Therefore, when you measure the second particle you can’t say that it only “came to have” that property when it was measured; it must have possessed it all along. This in itself is an observable that consistently defies quantum theory. It does not prove locality, however, since locality is a separate matter, but it does give evidence of structure rather than random probabilities.
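The “possessed it all along” reading can be sketched as a toy model. This is my own illustration, not an established physical model: each pair carries one pre-set shared value fixed at creation, and a measurement simply reads that value back, which reproduces the 100% correlation on the first measurement described above:

```python
import random

def make_entangled_pair():
    """Toy model: the pair shares one value fixed at creation, not created by measurement."""
    shared = random.choice([0, 1])
    return shared, shared

def correlation(trials=10000):
    """Fraction of trials in which measuring both members gives the same answer."""
    agree = 0
    for _ in range(trials):
        a, b = make_entangled_pair()  # measuring each member just reads the shared value
        agree += (a == b)
    return agree / trials

print(correlation())  # always 1.0 in this toy model
```

This sketch covers only the perfect-correlation case discussed above; it says nothing about measurements along differing bases.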

Last time I said that Multiverse theory is not science. Today I will prove the point. Stephen Hawking once said that the “Theory of Everything” was imminent. He has since completely given up on it, but believes emphatically that M-Theory is the ultimate explanation for the universe, and that we just don’t know how to fully compute it. Einstein would be mortified. Especially at the fact that someone who’s never won a Nobel Prize believes they know more than someone who has.

Multiverse theories do not tell us anything at all, nor give us any insight into the true nature of the universe. They simply rely on mathematical formulas which some physicists believe can be complete. We don’t even know how many fundamental particles there are, or how to predict them, because our current theories don’t give us enough information. For comparison, when Dmitri Mendeleev created his periodic table of elements, the theory was so stable that he was able to predict the existence of 10 more as-yet-undiscovered elements. At the time only 63 elements were known to exist. Although 2 of the 10 theoretical elements are no longer expected to exist, we have since verified the existence of 8 of the 10 that he predicted; together with other discoveries, there are now 118 elements in the table.

I think it’s particularly telling that the Wikipedia page contains a section entitled “common objections and misconceptions”, where every “objection” to Multiverse Theory is met with an “MWI response”. Just more proof of how skewed and biased Wikipedia pages are, especially since it gives unwitting readers the idea that Multiverse is science rather than pseudoscience. Anything that is un-testable is not science, no matter what mathematical framework it might have.

In closing, let me say this. The quantum uncertainty principle is based on the idea of wave-particle duality, which is itself based on the assumption that you can’t gather information about a particle’s waveform at the same time as you gather information about its physical particle properties. This is not so. We have simply been asking questions which can only be answered one way or the other, gathering either physical information or waveform information. Once we can gather both simultaneously it will be established beyond doubt that the wave-particle duality idea is flawed (or it will be confirmed); but until we manage to do that we are following a pseudoscience.

Next time I will discuss cosmology, and hopefully point out some things which may surprise you. And by “surprise”, naturally I mean show you that cosmologists are in a very real sense practising science very differently to their neighbouring colleagues!

Physics is theoretical

Okay, it’s time my readers learned the awful truth. Brace yourselves. Physics is not about truth. In fact Science as a whole suffers from this problem. If you want truth, stick to mathematics. Mathematical formulas cannot be falsified, unlike scientific theory.

One of the most telling traits of Physics, and Science as a whole for that matter, is that inevitably observations and theories are influenced by non-scientific garbage; ideals that are influential yet unhelpful; beliefs that are blind and misleading. In my last blog entry I discussed how correctly interpreting the results of the quantum delayed-choice erasure double slits experiments reveals that the popular interpretation is a misinterpretation.

Now a lot of people consider physics and mathematics to be sister bodies of study. Let me point out one massive difference between the two: everything in accepted physics is theoretical; everything in accepted mathematics is true. Both are so by definition. Indeed, you can write a physics paper outlining your new theory and people will look at it intently; try the same with mathematics, and if you can’t prove what you have devised then no one will even look at your paper – you will be laughed at by the academic community.

Before I continue, let me also say that just because Physics is a science does not mean that the average Joe, such as myself, is incapable of studying it and criticizing it. Bob de Bilde accused me of claiming to be smarter than astrophysicists. Too often I have heard people say “people who are smarter than I am have worked out these scientific truths”. Rubbish. Scientists often confuse themselves when they ask big questions, and often they look for the answers in places that may well be irrelevant to the question. If they have really discovered something, then they have to write a paper and submit it to the academic community for feedback and criticism; it’s not a matter of whether they study it for a living – that doesn’t make them immune from poor scientific judgement – and in this entry I will talk about ways in which I see physics being taken in the wrong direction.

Occam’s razor is one of the most misleading scientific concepts ever devised. And I’ll prove it to you. According to Occam’s razor, the simplest answer is usually the correct one. To be even more precise it states that given two scientific theories, all things being equal, the simpler of the two is to be preferred.

Okay, so tell me which of the following two theories of gravitation is “simpler”: 1. Gravity is a force between two objects that can be precisely calculated as the inverse square of the distance, Force = G x (m1 x m2)/r^2 (that’s the masses of the two objects multiplied together, times the gravitational constant, divided by the square of the distance between the centres of the objects). 2. The first answer is a close approximation which can never give a precise answer; the correct answer is computable using a long convoluted string of equations implementing general relativity.

Well herein lies the problem. Newton’s formula is so close that we still use it to this day to send satellites into orbit; but it’s only an approximation. The only way to calculate an exact gravitational force is much more complicated. But “Force = G x (m1 x m2)/r^2” is such a neat and tidy “simple” formula! Occam’s razor has failed us. How is it that such a “simple” formula can be so close, yet be calculated using the wrong criteria to begin with? Well here’s the answer, brace yourselves: it’s coincidental. There is no underlying simplicity that causes gravity; there’s an underlying complexity. This is the opposite of what physicists expect to find and look to find!
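To make the approximation concrete, here is a minimal sketch of Newton’s formula; the Earth and Moon figures are standard reference values I’ve assumed for illustration:

```python
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def newton_force(m1, m2, r):
    """Newton's approximation: F = G * m1 * m2 / r^2 (masses in kg, distance in m)."""
    return G * m1 * m2 / r**2

# Assumed example values: Earth mass, Moon mass, mean Earth-Moon distance.
f = newton_force(5.972e24, 7.348e22, 3.844e8)
print(f)  # roughly 2e20 newtons
```

Simple as the formula is, it remains the approximation; the exact general-relativistic answer has no such one-liner.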

One of the biggest problems with the mainstream scientific approach is its inability to predict anything that does not conform to the conditions that were observed. For instance, consider Roulette. A ball is thrown onto a rotating table with numbers and comes to a stop on a seemingly “random” number. It was widely believed that predicting Roulette is impossible. Then, in the 1970s, a group of physics students spent nearly two years developing complicated formulas that could beat roulette, and programming a computer to do it. Their system worked well enough to give them more than a 40% margin over the casino. Yet their work could do nothing else besides predict roulette; it was totally useless without a roulette table.

But again, it was never computable using a “simple” law. The act of spinning the table and throwing in the ball is very simple, yet the way to calculate where the ball will land is very complicated. Again, Occam’s razor is wrong.

Consider DNA. DNA is often described as a blueprint – that would be rather “simple” – but it’s not a blueprint; it’s far more complicated. Genes: according to Occam’s razor they would be continuous strips of information in DNA, right? Or would they actually be scattered in segmented form (as they actually are)? This principle is wrong, it’s unhelpful, it’s out of date, and it blinds serious scientific study.

Consider Langton’s Ant. For some reason Langton’s Ant loves building highways, a repeating process of 104 moves. It is widely believed to this day that given any starting conditions the Ant will always eventually start building a highway. The first question that came to my mind is: is there no other repeating set of moves the ant can get stuck in? The “highway” may be the simplest repeating cycle, but what if there’s another that takes 1,004 moves, or 10,004 moves? Or even 56,732 moves? Why was this question never addressed? Is it because Langton’s Ant is simply something that shows that predictable behaviour can arise out of chaotic conditions, and since that’s the only point they were interested in, they stopped once that was answered? (By the way, I do believe that Langton’s Ant would build a more complicated highway, or something else with a repeating pattern, if the 104-move highway was bypassed – and yet physicists and mathematicians didn’t even bother to ask this important question!)
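For anyone who wants to probe that question themselves, here is a minimal sketch of Langton’s Ant. The step rule is the standard one (turn right on a white cell, left on a black cell, flip the cell, move forward one square); the check at the end relies on the well-known result that, starting from an empty grid, the ant settles into its 104-move highway after roughly 10,000 moves:

```python
# Minimal Langton's Ant on a sparse grid (missing cell = white = 0).
DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # up, right, down, left

def run(steps):
    """Run the ant and return the list of positions it visited."""
    grid = {}
    x, y, d = 0, 0, 0
    path = []
    for _ in range(steps):
        path.append((x, y))
        colour = grid.get((x, y), 0)
        d = (d + (1 if colour == 0 else -1)) % 4  # white: turn right; black: turn left
        grid[(x, y)] = 1 - colour                 # flip the cell
        dx, dy = DIRS[d]
        x, y = x + dx, y + dy                     # step forward
    return path

path = run(11000)
# Once in the highway, the ant's displacement repeats every 104 moves.
delta = (path[10604][0] - path[10500][0], path[10604][1] - path[10500][1])
print(delta)  # the same fixed diagonal offset every 104 moves
```

Swapping in a deliberately “polluted” starting grid, and checking for repeating displacements over windows other than 104 moves, is exactly the kind of experiment the question above calls for.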

Crystallization, as I mentioned in my previous blog entry, is yet another example of something which can take place in chaotic conditions (that is to say, it doesn’t matter what jumble of particles is in the “starting conditions”; once crystallization gets started it follows a predictable path, just like Langton’s Ant). But it’s something we can only “observe”. We don’t know for sure, but it’s believed that the underlying structure of a crystal formation is generally the densest stable state for that matter. So in the basic, simple sense, crystals form because they arrange themselves into a dense state (given the right conditions). Okay, fine. So why do some crystal formations, like salt and pyrite, form as cubes? And more importantly, why do physicists tell us emphatically that it is due to quantum mechanics, when crystal formation is not implicit in QM, nor currently understood using QM?

Evolution is not understood through QM either. In my topic on this I didn’t criticize the inaccuracies, but I will now point some of them out. If life begins its process through non-living chemistry that gives rise to simple life, then this process has produced something that, even if it is living, does not know it is living. Let’s jump forward. We’re told that sexual reproduction evolved because it was enormously better than asexual reproduction. Right. So why is there still plenty of life on Earth that reproduces asexually, if it’s at such a massive evolutionary disadvantage? We’re told that good genes eventually work their way into the species and bad ones die out; but how is this possible with such diversity of genes in the first place? These are not easy questions, and they point out major flaws in the evolutionist view.

Physics is to some degree time-reversible, but there are also processes that are not. And it’s on this point I want to dwell for a moment. Usually irreversibility is attributed to Thermodynamics, but I want to ask: what in Quantum Mechanics makes it implicit that there will be irreversible processes, when, correctly interpreted, QM should work equally well forwards and backwards in time? It’s just like crystallization: it happens because we can see it happen, observe it, theorize about it and even talk about it – but there’s no link to Quantum Mechanics at present to explain why some processes only work forwards in time and not backwards. There is no shortage of physicists who will claim time symmetry still exists, but the simple fact of the matter is that our observations are consistent with time irreversibility; it baffles the imagination as to why anyone would think otherwise.

Then there are those physicists who argue that QM predicts that the universe is not truly deterministic, and therefore that aspect of quantum chance or dice-throwing is what makes us perceive time as irreversible. But there’s a problem with that argument: QM itself still claims to work equally in either direction of time; quantum dice-throwing doesn’t mean things can only happen in one direction, it simply means a certain set of events cannot be “undone”. But that doesn’t explain why atoms in the atmosphere can buzz around and spontaneously form molecules FORWARD in time, but not BACKWARDS in time (in which case, from our point of view with our concept of time there would be molecules buzzing around and spontaneously un-forming into their atomic components)…

The quantum uncertainty principle does not make irreversibility explicit in any way. There’s nothing in the uncertainty principle to say that molecules shouldn’t form backwards in time as well as forwards. You may argue gravity plays a role, but gravity is negligible to this process. Furthermore, the uncertainty principle is favoured because of Occam’s razor; really there is probably a more complicated process going on at the quantum level. This is in fact my sincere belief, and it was Einstein’s belief too. There aren’t “hidden local variables” either, but there’s something there – some process that is irreversible, something that causes physics to work in one direction and not the other.

Scientists often belittle those with religious convictions and belief in God. Yet quite popular among physicists is the Multiverse theory, which is no more scientific than creationism or intelligent design. It’s not science, for the simple reason that it’s un-testable. So even by the most liberal definition of science, Multiverse is not science. It’s not science if you can’t test it. I find it amazing to see serious scientists take this nonsense seriously! If the theory were correct, then the universe at any given time should be no older than one unit of Planck time! Does that make any sense to you?

Recently there has been a shift in the academic view on quasars: it’s now quite acceptable to say that Hubble’s Law is wrong. Now here’s the little problem. Hubble’s Law is still used to calculate the distance of any galactic object that isn’t a quasar. I know that might sound a little strange, but remember we still use Newton’s Laws of Motion even though they’re wrong. However, they are consistently wrong, all the time. That is to say, using Newtonian gravity to calculate gravitational force is only marginally wrong here on Earth; it’s only when calculating really big objects that the problems arise. Hubble’s Law, on the other hand, is now effectively interpreted as “all galactic redshift, except quasar redshifts, is cosmological in nature” – if Hubble had presented his “law” like that to begin with, he would have been laughed out of the academic community. So why would we use something when we know that it’s wrong, and we do not fully know the consequences? Is this a case of just “assuming” that quasars are the only galactic objects whose redshifts do not accurately indicate their distance? Does it make cosmologists cry when they think about not being able to determine the size and age of the universe?

I hope I’ve opened your minds to the limitations of physics and science in general. It’s not that I think scientific study is bad, I just think there’s an awful bloody lot of “junk science” in there that is blinding serious scientists, pointing them in the wrong directions, biasing them with poor “intuitions” and even making them follow non-scientific ideologies like Occam’s razor and Multiverse theory.

Until next time…
Take care of yourself, and each other.

QM prediction fails!

There was a woman in the shop today. She had three bags nested inside each other, in addition to her handbag. Her behaviour was mathematically computable, as long as you appreciate the process of chaos. Her bags are arranged that way for shoplifting; she’s a thief. She thinks she can take her bags apart and then put them back together while stealing. She spent over an hour in the shop and never bought anything. But here’s some of the interesting behaviour (in addition to what is plainly obvious): 1. she claimed she was a repeat customer – we know every repeat customer, we’re not stupid; 2. she asked a question about an item on the other side of the shop to try and get me away from the counter so she could shoplift there; 3. she tried to offend us so we would avoid her, saying such things as “I can’t stand f-ing jews”; and finally she claimed she had “lost” her glasses and wanted me to find them for her, to which I responded “no, find them yourself” (within 1 minute of realizing I wasn’t going to vacate the counter she promptly went and picked them up from where she had planted them). Does she seriously think I was born yesterday?

The human mind is an incredible thing. About 20 years ago it was widely believed that there were around 1 trillion connections in the human brain; it’s now believed the number is anywhere between 100 trillion and one quadrillion (1,000 trillion), and opinions vary widely. But even if it’s a mere 100 trillion, that’d be roughly equivalent to 100 terabytes in computer terminology. Have you ever heard of a 100 TB HDD? No – we’ve only just cracked the 3 TB mark – yet our brains hold far more bytes of information, compress it better, and can even access it much faster. So it’s not surprising that we can appreciate patterns like the ones I’ve just described, apply a mathematical rule, and recognize “predictable behaviour”.
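The back-of-envelope conversion above assumes one byte per connection – an assumption for illustration only, not a neuroscience result:

```python
# Illustrative assumption: one byte stored per connection; 1 TB = 10^12 bytes.
connections = 100e12            # 100 trillion connections (low end of the estimate)
bytes_total = connections * 1   # one byte per connection (assumed)
terabytes = bytes_total / 1e12
print(terabytes)  # 100.0
```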

Today I’m going to point out an explicit failure in current QM theory that I discovered last year. To do this I will explain the science as I go along, so that hopefully you understand the experiment and what it means. The reason I discovered this inconsistency is that I noticed an error in the assumptions made in the observations of the experiment. Too often people – including serious scientists – fall into the trap of being impressed by science they don’t understand; indeed, we do the same thing when speakers talk using their detailed knowledge of subjects that we don’t understand. But when we use our minds we can notice predictable behaviour, exactly as in my example above.

First is the theory of wave-particle duality. For the purpose of this blog, all you need to be aware of is that the theory says quantum particles behave both like waves and like particles, but never display both characteristics at the same time.

The double-slit experiment was first designed by a brilliant scientist named Thomas Young, and he designed it in order to prove that light is a wave. Imagine a tub of water with a wall in the middle and a single slit in that wall: if you made a wave on one side that travelled towards the wall, a single wave would “form” from the wall on the other side. On the other hand, if you had two slits in the wall and you made one wave as before, when the wave hits the wall each slit generates its own wave, and two waves form on the other side. If you follow this, you’re doing great. If not, refer to Young’s sketch of this principle:

Young's Sketch

For the double-slit experiment light is passed through two slits, and on the wall behind the slits an interference pattern is generated (as above). All you really need to know is that an interference pattern means wave-like activity is present; as if the photons are passing through “both” slits! The experiment has been performed with the same result shooting one photon at a time through the slits, and even shooting electrons one at a time too.
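The wave picture can also be sketched numerically. Assuming the standard small-angle two-slit intensity formula I(x) = cos²(π·d·x/(λ·L)) for slit separation d, wavelength λ and screen distance L (the figures below are assumed example values, not from any particular experiment):

```python
import math

def intensity(x, d=1e-4, lam=500e-9, L=1.0):
    """Two-slit interference intensity (small-angle approximation), normalized to 1
    at the centre of the screen. x: position on screen (m); d: slit separation (m);
    lam: wavelength (m); L: slit-to-screen distance (m). All assumed example values."""
    return math.cos(math.pi * d * x / (lam * L)) ** 2

print(intensity(0.0))     # 1.0 -> bright central fringe
print(intensity(0.0025))  # ~0.0 -> first dark fringe, 2.5 mm off-centre
```

Cover one slit and this cos² pattern disappears, which is exactly why an interference pattern is taken as the signature of wave-like activity.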

Now we’re going to get complicated, so I’m going to use this image from Wikipedia, and if you need more information you can view the page here.

Quantum Delayed-Choice Erasure experiment.

A BBO crystal is used immediately after the slits. All you need to know about the BBO crystal is that it generates two photons from one photon, in a state of entanglement (something I will explain in a moment); they travel in different directions, but the pair otherwise has the same properties.

Entanglement is a quantum phenomenon which is not too well understood, really, but basically once an entangled pair is created, the particles are not only identical in every way, but what happens to one affects the other – even in the future or the past, and at any distance apart (in other words, there seems to be no way to break this connection). Once enough interference has taken place with each particle, it is generally understood that the entangled connection breaks down. For simplicity, however, they can literally be thought of as the exact same particle occupying two places in space-time.

Now, after the slits and the BBO crystal, the entangled photons each take different paths. One goes straight to a detector just like a normal double-slit experiment. The other one goes through a rather complicated series of beam splitters and mirrors, which allow each of the “top slit” and “bottom slit” photons to end up either 1: at a detector only with photons from the same slit or 2: at a detector where both end up.

This is called a “delayed choice” experiment because the pathway the “signal” photon takes (that’s the top photon) is shorter; it simply goes straight to a detector as in a normal double-slit experiment. It’s called “erasure” because they think the “which-way” information is erased when the idler photon (the bottom one, which goes through mirrors and beam splitters) “randomly” ends up at a detector receiving photons from both slits.

They came to this conclusion because it is consistent with the predictions of Quantum Mechanics, but they failed to correctly observe their own results. I said earlier that, for the purposes of this experiment, an entangled pair can be thought of as a single particle, right? Well then, the “signal” photon tells us whether or not the “idler” photon has an interference pattern. When you look at it like this, the results tell a different story. Just as you need waves from both slits in the tub to make an interference pattern, the same is true at the quantum level. The fact that they jumped to the conclusion that each particle is literally “interfering with itself” (because the pattern can be observed when shooting one photon at a time through the slits) doesn’t negate this.

So in this case, the signal photon is telling us that we need photons going through both slits to get an interference pattern; it is telling us that the interference is a consequence of photons from each slit somehow interfering with each other (from the past and the future). But more to the point, the theory of wave-particle duality says that one can’t ever read the information on both “states”; you can only make an observation of one or the other. But let me tell you another thing: we don’t possess the technology to make each pathway in this experiment the same length down to the Planck length, no no no. Therefore, which-way information is still present at the conclusion of this experiment, interference pattern and all – all you need is a stop-watch precise enough to time the difference between each of the four possible pathways the “idler” photon takes (while arriving at the “shared” detectors). By the way, if you have the means to reproduce a simplified version of this experiment and wish to confirm what I’ve said, please build this and let me know of the result:


Make the paths measurably different lengths (by say, 1 mm or so) and use the detectors merely to detect an interference pattern (after all, that’s all we’re interested in). I predict, with confidence, that you can still achieve an interference pattern.
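To put a number on the stop-watch idea, here is a quick back-of-envelope sketch in Python. The 1 mm figure is just the suggestion above; nothing here comes from any actual experimental setup:

```python
# Time-of-flight difference between two photon paths whose lengths
# differ by a given amount, travelling at the speed of light.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def path_time_difference(delta_length_m: float) -> float:
    """Extra travel time incurred by the longer of two paths."""
    return delta_length_m / C

# A 1 mm length difference, as proposed above:
dt = path_time_difference(1e-3)
print(f"{dt:.2e} s")  # about 3.3 picoseconds
```

So a timer resolving a few picoseconds would, in principle, be enough to tell a 1 mm path difference apart.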

Over the past couple of months, I will admit, I have been a little worried about my conclusions. But I’ve realized that every form of this experiment which attempts to read both the “location” and the “signal” does so in such a way as to separate the paths of the photons. Now you might say, “Well, what about using light polarizers? How does that separate the paths?” The answer is that a wave oscillating in an up-down direction can’t interfere with a wave oscillating in a left-right direction. I’ve also realized that a total failure to recognize the special state of entanglement means that the results have been misinterpreted as if entanglement doesn’t occur. And I’ve realized that if splitting the top and bottom photons were enough to “collapse the waveform”, then QM does not predict that the waveform will reappear after you combine them again; rather, QM predicts that you can’t “know” both the location and the motion of a particle. Yet I say you can know, and we do know, since the path lengths could potentially be timed with an accurate enough device, allowing us to distinguish the paths without “collapsing the waveform”.

But do you know the most important thing I realized? Scientists are interpreting the quantum world from their own frame of reference (that’s the “human scale”). Newton’s laws of motion are widely used by scientists to this day, although they can never be used outside of the “human scale” because they are inaccurate; instead you can use relativity or QM to predict with greater precision (at the cost of more complicated formulas). Scientists think the behaviour of quantum objects is “wave” or “particle” – because that’s what they see at their level! No, no, no. There’s some other form of matter, neither wave nor particle, that exists at the quantum level, and as yet we simply haven’t been able to fathom or imagine what it is; so we reach for the things we do know, like “waves” and “particles”, and insist on imposing those constraints on the quantum world. No doubt QM is extremely successful in many ways, and that proves it is “on the right track” – but the same can be said for Newton’s laws of motion, which we know to be nothing more than “interesting behaviour that happens at the human scale”.

The theory of Evolution

Today I’m going to deliver on my blog the textbook definition of Evolution, without referring to a textbook, since I know the theory back-to-front. I’m surprised by how many people try to say “that’s not in the theory of Evolution” or “it’s not like that”; they’re usually Darwinists who don’t actually know the theory of evolution! Darwinists are the ones who will say “evolution is fact”. I’m not one of them. Everything about physics is theoretical, including what we call the laws of physics (that’s not to say there aren’t laws of physics, just that anything we call a law is probably based on an actual law of physics – it may be close to the actual law, it may be merely a good approximation, or it may be entirely wrong; it’s theoretical). I’m not here to make anything up or add my own bits; this is the current standard theory as it stands, take it as you will.

I’m not pointing out everything that’s inconsistent at the moment; that will be done in the future. In the future we will address the questions “how could life start from nothing?” and “how could life evolve from single-cell organisms to complicated life forms which develop from infants and breed using two distinct sexes?”, not to mention “has there been enough time for life to develop to its current complexity?” – and of course the examples that seem to defy the theory, like the banana tree. As I go along I will compare it with other theories of physics, which you may believe we know more about than we actually do… let’s get started.

Evolution is just one of those things. Evolution starts with chemistry: the chemistry of this universe is fine-tuned to allow life to develop. If there were no carbon in the universe, life would not be possible. If just one of the so-called “fundamental constants” of nature were altered even slightly – such as the speed of light, the Planck density, or the rate of expansion of the universe – life would simply be impossible.

The chemistry of the universe allows certain molecules to form; once formed, they can even entice nearby chemicals to do the same. Crystallization proceeds by this process; this is why if you break apart coal you discover a single diamond somewhere, rather than scattered diamond bits throughout. But remarkably, there are also predictable traits and patterns that appear. For instance, salt crystals (sodium chloride) form in squares; physicists can’t explain why – we simply do not know. There is no known law of physics we can apply that predicts this formation; yet we can infer that such a quantum law must apply, since the formation takes place.

Seeing as we cannot explain why salt crystals are square, it’s not surprising we can’t explain how to make life from chemistry alone; but the basic building blocks appear to be there, in theory. The textbook theory is that life somehow formed in the primordial-soup stage of the Earth’s development – but the truth of the matter is that the theory of evolution holds this step to be nothing more than a property of physics, one which will occur just as crystallization occurs, so that a simple life-form can start. Imagine, if you will, that DNA has somehow randomly assembled itself but is completely dead: unusable and unable to duplicate itself. If this material were allowed to reproduce by the same method as crystallization for a long enough period of time, something usable would eventually occur, and hence life would start.

Darwinists often insist that this is not a part of, or a requirement of, the theory of evolution. They’re wrong. They’re simply unaware that evolution is a theory of physics, and as a theory of physics it must obey scientific laws, and those laws must be predictable and consistent. It is perfectly consistent with physics that chemicals can attract and crystallize without any form of life being present; given enough time (like, say, an eternity) it is theoretically possible to generate living DNA by a lengthy process of trial-and-error duplication. The framework that allows evolution to take place is not “life” but physics! Without physics it’s not possible. Without carbon, life cannot exist in any form, not even the simplest. The physics of the universe and the current rate of expansion of the universe allow life to exist; but if you were to build a time machine and travel back 5 billion years, to before the Earth existed, you would cease to live, because the laws of physics would be unable to sustain your life – the universe would not have expanded enough.

Once life has started, it goes through the process of duplicating, splitting and starting again (duplicate, split, duplicate, split, and so on). A wonderful life for bacteria, right? Somehow it evolves over time to use reproduction by two different sexes. How this occurs is a scientific mystery, but it’s explained by the fact that such a life form will have an evolutionary advantage. Unless I’m mistaken, there are no bacteria in the world that reproduce sexually, except those which fuse together and produce something totally unlike what they used to be (hence they don’t really reproduce; they produce the product of the two different bacteria). So presumably it was this process of systematic trial and error that eventually produced bacteria that reproduce sexually; however, such bacteria no longer exist today – only asexual bacteria exist.
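The duplicate-split cycle above is simple exponential doubling; here is a one-line Python sketch (my own toy illustration, not something out of a biology text):

```python
def population_after(cycles: int, start: int = 1) -> int:
    """Each duplicate-split cycle doubles the population of asexual cells."""
    return start * 2 ** cycles

print(population_after(10))   # a single cell becomes 1024 after ten cycles
print(population_after(30))   # and over a billion after thirty
```

That doubling is the whole reproductive strategy; everything beyond it (sexes, development, and so on) has to be layered on top.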

Evolution works by natural selection. Natural selection works on the organism itself, but evolution works “randomly” on the DNA – related, but not identical. Much of natural selection is now known to be working on characteristics that are often independent of DNA; so at best evolution works on a subset of natural selection. Let me give you an example. Roger might be a carpenter. He has big muscles, and so do all his children who work in the family business. When a large number of ferocious beasts attack the village, Roger and his family are strong enough to fight them off, but many of the people in the village die in the attack. Note that this example could equally be applied to disease or just about any other form of “natural selection” you can think of (a massive flood, for instance); there will always be a group of people who are better equipped to handle it than the others. Roger and his sons survived because they’re strong enough to fight back – but this has nothing to do with DNA at all; it simply has to do with their chosen lifestyle.

Darwin himself acknowledged this problem. If a competitive advantage is based not on DNA but on lifestyle, then evolution is forced to work even more slowly – or not at all. Even more problematic: if a competitive advantage is not present in DNA, then it can be eliminated by a DNA mutation which inhibits that advantage – one which may not “harm” the species so seriously as to alter its normal development, but just enough that certain lifestyle choices available before are now impossible (though remember, this may not lead to destruction; just because they can’t be carpenters anymore doesn’t mean they can’t be bankers, or otherwise even more “successful”). But if your minds can handle it, imagine what would happen if all the carpenters in the world suddenly changed into bankers, improving the lives of all the individual bankers… but depreciating the species as a whole!

This differs from the viewpoint of Darwinists, and I’ll explain why. Darwinists believe that evolution operates on DNA – it doesn’t. It is always at least one step behind, possibly more. When a complicated organism reproduces, there is always information passed along from the parent alongside the DNA. This information is made by the parent’s DNA, but the information (RNA and proteins) operates on the child’s development before the child’s own DNA is accessed. This is now at least two steps behind DNA, since mutations that affect this process won’t be visible in the immediate offspring, but in the grandchildren. Darwinists believe that DNA mutations occur in extremely minute increments, and that those that are beneficial give “just enough” evolutionary advantage to spread throughout the species. But they’re wrong, since the theory of evolution doesn’t specify this, nor is it even remotely compatible with our understanding of genetics. Evolution can only proceed through genetic advantages so great as to outweigh the non-genetic advantages of the real world. For instance, giving an organism the ability to travel further, so it can evade more predators, may be such an advantage that no matter what kind of behaviour the species engages in, it is always better off. But giving Roger’s sons a minute advantage in something like smell may well have no effect whatsoever; in fact it may have no advantage for 99.8% of the population and only assist the 0.2% whose profession involves something to do with smell.

Darwinists also believe that DNA is the blueprint for life; but as I’ve just explained, it’s far more flexible than they give it credit for. You can be “bigger and stronger” and still have the same DNA. You can even be smarter and wittier with the same DNA.

A single mutation by itself is usually not enough to make any difference to the organism. Evolutionists believe that what actually occurs is this: any time harmful mutations occur in the DNA they are quickly eliminated by natural selection, while mutations continue to (slowly) accumulate on strips of unused DNA. Then, when a new mutation suddenly switches that unused DNA into action (so to speak), nature can “try it out” to see if it’s of any benefit, or if it’s harmful (and it usually is). If it’s of benefit, the mutation will spread through the species until it’s dominant.
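That “mutate silently, then switch on” mechanism can be illustrated with a toy Python simulation. The sequences, rates, and generation count here are entirely made up for illustration:

```python
import random

BASES = "ACGT"

def mutate(dna: str, rate: float, rng: random.Random) -> str:
    """Replace each base with a random one at the given per-base rate."""
    return "".join(rng.choice(BASES) if rng.random() < rate else b for b in dna)

rng = random.Random(2011)
unused = "A" * 20                        # a silent, unused strip of DNA
for _ in range(100):                     # generations of unchecked drift:
    unused = mutate(unused, 0.01, rng)   # harmless, invisible to selection

# A later mutation "switches on" the strip; only now can nature try it out
# and see whether the accumulated changes help or harm.
print(unused)                            # some scrambled 20-base sequence
```

The point of the sketch is just the order of events: the drift happens first, unchecked, and selection only sees the result after the switch.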

One of the difficulties is explaining the complexity of life, and we will cover this in the future. But to give you an idea: we can’t make a computer that can compute anywhere near the level a human can; in fact, one of the most puzzling things of all is how our vision works. Not only do we see an incredibly detailed picture of our surroundings, but by the instant each pixel of our vision reaches our visual input, information has already been passed along detailing which objects are distinct, what those objects are, where the shadows are, what’s in motion, what’s not, what’s up, what’s down, and so on. Your brain is doing it right now as you read: it’s telling you, before you can even see the screen, what letters are in front of you and what they mean. You may only be able to read what you’re looking at directly, but you can certainly notice that this entire blog is filled with English letters – you even know what they are. A computer can’t read letters anywhere near as accurately as a human can, and even if it could, it still wouldn’t be able to compute them anywhere near as quickly as you can (you know what the letter is before you even see it; a computer doesn’t). Just by glancing you can see where the paragraphs are, and your brain already knows what all the letters are. We can’t program a computer to analyse even a modest fraction of what our visual system does – no matter how long we give it (i.e. not even outside real-time). By the way, it’s not as if we get a “picture at a time” either: each pixel of our vision is constantly updated as fast as the nerves can carry it; that’s faster than we can film.

It’s been experimentally confirmed that we can see and identify an image flashed for a mere 1/220th of a second. I am surprised at claims by some people that we can only see as little as 20fps, since it’s well established that any flicker below 50Hz is noticeable, and that even 60Hz can cause eyestrain. What we actually process is closer to 500fps of information – maybe even more. We only discard around half of that information (or so it is thought), and we discard it only on a per-pixel basis; the image itself is still passed along, so that essentially you get a frame in which around half the pixels convey complex, meaningful information, but all the pixels still convey the picture. We don’t actually see a “picture at a time”, though; we process each pixel as fast as its nerve carrier can take it, so the image we see is a collation of this information. Our eyes don’t flash “on and off” – they are always on – and if one pixel isn’t updating itself, there are millions of others that are. On the basis of being able to distinguish every single pixel on my 1080p computer monitor, I can easily calculate that each eye can see at least 265,420,800 pixels (and probably much, much more); half of this information overlaps and also has three-dimensional processing applied. Our eyes are more sensitive to colour than a computer can produce, but let’s play by the computer’s rules: 24-bit colour. A computer monitor represents well less than half the contrast we can appreciate, so we can immediately take this to 48 bits. Furthermore, RGB produces less than half of all visible colours, but let’s not nitpick; it’s already complicated enough as it is. 48-bit RGB is called “Deep Colour”, so let’s use that. That would mean it takes 25,480,396,800 bits of information to represent all the eye can see at the level of an 8×8 grid of 1080p screens. The pixels we can actually see are even finer than that, but you can see where I’m going.
How could you process 500 frames per second of this information with a brain that only has 100,000,000,000 neurons? Aha… another one of life’s mysteries!
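The arithmetic behind this estimate can be laid out explicitly. Where the post is ambiguous I’ve assumed one eye, a flat 8×8 grid of 1080p panels, and 48-bit colour, so the totals below come out lower than the figures quoted above; treat this as a sketch of the shape of the calculation, not the exact numbers:

```python
# Back-of-envelope estimate of raw visual "bandwidth" under the
# assumptions stated above (8x8 grid of 1080p panels, 48-bit colour,
# 500 frames per second of processing).

PANELS = 8 * 8
PIXELS_PER_PANEL = 1920 * 1080    # one 1080p monitor
BITS_PER_PIXEL = 48               # "Deep Colour"
FPS = 500                         # processing rate claimed above

pixels = PANELS * PIXELS_PER_PANEL
bits_per_frame = pixels * BITS_PER_PIXEL
bits_per_second = bits_per_frame * FPS

print(pixels)           # 132,710,400 pixels
print(bits_per_second)  # 3,185,049,600,000 bits/s -- over 3 terabits
```

Even under these conservative assumptions, the result is measured in terabits per second, which is the scale the question above is pointing at.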

Evolution explains complexity as essential. Organisms compete by “adding” bits and pieces to their bodies, not by systematically removing them. Hand-eye coordination is also swiftly computed: frogs can see a gnat or a fly that buzzes by and react so quickly as to flick out their tongues in exactly the right position, at lightning speed, grabbing their meal without moving a muscle on the lily pad.

So as life evolved, it first learned to be self-sufficient (the earliest, simplest form of bacteria); then it learned to reproduce sexually as an evolutionary advantage; then it began adding sensory information in all kinds of manners, and a processing centre; and it learned to digest different types of food, and to compute at a very high level. It learned to add new organs and features as well, to adopt different ways of breathing, and even different ways of determining sex. Life also learned to grow not just twice as large before reproduction, but millions or even billions of times as large and as complicated as its starting condition (a fertilized egg).

Many physicists will hate me for saying this, but I actually don’t believe in wave-particle duality. You see, at our human level we observe things we call “particles” and things we call “waves”, and because we see them at a human level, we “think” that particle and wave behaviour remains distinct at the quantum level, with the quantum uncertainty principle applied. I don’t know what’s going on at the quantum level, but the idea that particles behave like waves until an observation (or interaction) causes them to collapse their waveform and behave like a particle is utterly ridiculous. It’s simply the best we’ve been able to come up with, since we can’t yet imagine a state of matter that contains properties of both particles and waves. Let me state this more clearly: if the quantum world were obvious, then we would know why sodium-chloride crystals form in squares (I mean cubes). Since there is no explanation as to what law of physics causes this, it’s obvious we don’t yet know or recognize all the laws of physics.

This is actually helpful to the theory of evolution, because it means we don’t need to explain exactly how life got started, other than to say that, like square sodium-chloride crystals, it’s the result of a fundamental law of physics (some combination of quantum mechanics and chemistry) which we don’t yet know.

Every so often observations throw a spanner in the works of our understanding of physics. I had one of those moments last year when I realized that wave-particle duality is just a delusion. In my next physics article I will address just this point and show, I hope conclusively, that what we “think” we know is just a good approximation. I hope you enjoyed the first entry in this year’s physics series, I really look forward to bringing you many, many more, as promised last year. Until then, the wheels of science continue in perpetual motion…