I have been motivated to make this analysis by foundational logic derived from computer software that depicts evolutionary modeling of Newtonian-Planck gravity. These models use Newtonian gravity in conjunction with quantum time, thereby solving the many-body problem for Newtonian gravity. I know of nobody else who has done this.
My first gravity simulators were published online in 2008. In these models, gravity propagates to any distance in one quantum of time for any number of bodies, without contradiction. This algorithm rests on the notion that quantum gravity is defined intrinsically by quantum time; so gravity is not propagated in zero time, but in the smallest possible unit of time. As far as I can tell, Newtonian-Planck gravity yields the only mathematical explanation for the gravity assist (the slingshot or whiplash effect). Of course the gravity assist has been measured, but that is not the same as the reason for it.
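The discrete-time scheme described above can be sketched as follows. This is a minimal illustration in Python, not the author's published code: the function names, the 2-d layout, and the dictionary representation of a body are my own choices. The key property it demonstrates is that every body's gravity is applied to every other body within a single time quantum, never in zero time.

```python
import math

G = 6.674e-11  # Newtonian gravitational constant (SI units)

def step(bodies, dt):
    """Advance all bodies by one discrete 'quantum' of time dt.

    Within one step, gravity from every body acts on every other
    body at any distance: propagation takes exactly one time
    quantum, never zero time.
    """
    # Accumulate accelerations over all ordered pairs first ...
    acc = [[0.0, 0.0] for _ in bodies]
    for i, a in enumerate(bodies):
        for j, b in enumerate(bodies):
            if i == j:
                continue
            dx = b["x"] - a["x"]
            dy = b["y"] - a["y"]
            r = math.hypot(dx, dy)
            g = G * b["m"] / (r * r)   # acceleration of a toward b
            acc[i][0] += g * dx / r
            acc[i][1] += g * dy / r
    # ... then update every body, so all interactions share the
    # same quantum and no body sees a half-updated state.
    for body, (ax, ay) in zip(bodies, acc):
        body["vx"] += ax * dt
        body["vy"] += ay * dt
        body["x"] += body["vx"] * dt
        body["y"] += body["vy"] * dt
    return bodies
```

Because all accelerations are computed before any position changes, the same pass handles any number of bodies consistently, which is the property the text emphasizes.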
Nowhere have I yet encountered any viable attempt to calculate or compute a system placing three or more bodies in a Relativistic paradigm. All such attempts yield the blatant contradictions that should by now be self-evident to the reader. Thus I find it methodologically untenable to claim logical proofs for the Theories of Relativity unless one has first solved the many-body problem for Newtonian gravity within the rigid mathematical structure of an evolutionary computer programming language. After all, 3-body Newtonian-Planck gravity yields a logical algorithm, whereas attempts at 3-body Relativity, from an evolutionary computational perspective, simply do not. They cannot.
Such computational software necessarily evolves graphics that are demonstrably superior to mere numerical or formulaic answers. These processes also go beyond static graphs, since they are graphically dynamic. Moreover, they are very easy to demonstrate to any observer. A number of these gravity-simulator applications are freely available to download on my website.
These applications show the evolutionary structures of numerous bodies in solar systems and galaxies. This includes proof that our solar system was once a binary star system, and that the planets are the debris of the Sun’s companion, which went nova at least 10 billion years ago. The inner planets were at one point moons of Jupiter. Jupiter is the remaining core of the Sun’s binary companion star, which split apart under excessive spin. This explains the existence of the ecliptic plane, why orbital rotation in the solar system is uniform in direction, and why planetary axes are mostly uniform as well. It also explains the rings of Saturn and the equatorial wall on Saturn’s moon Iapetus. I had computed that the Earth’s Moon would be departing from the Earth at a minuscule rate due to the gravity of the Sun before I heard that this had actually been observed.
A solar system that formed without spin as a fundamental structure would more easily yield planets orbiting in opposing directions, and most easily yield orbits perpendicular to each other. This is because such non-uniform orbits would interfere with one another less than uniform orbits would, making the non-uniform orbits vastly more likely to persist. A solar system with an ecliptic plane and all the planets orbiting in the same direction would be the least likely structure to form without spin as a force. This answer was forthcoming from the models before I even tried to compute Relativity…
In addition, spiral galaxies must be binary systems of white holes. Each member of the binary pair emits stars from its equator due to excessive spin. This spiral binary structure explains why spiral galaxies typically have two arms. It also solves the problem of the rotation curves of spiral galaxies (which are essentially empty at the center). Celestial bodies (stars and galaxies) can only form as binaries in such abundance because a single body spun apart to form each binary. The odds of a binary pair forming from gravity and random starting positions alone are so low that there would be only a few such binary pairs in each galaxy.
Dark Matter is the remains of stars that spun outwards from the binary white holes at the center of each spiral galaxy. These outer stars are dark because, due to old age, they no longer emit light. These systems of dark-matter stars (Dead Stars) are simply solar systems whose central body is some non-luminous body like Jupiter. Typically such bodies are termed ‘brown dwarfs’.
Some dark matter may be something similar to the black-holes described by Relativity, but the theoretical foundation for black-holes is at this point a total mess of contradictions. I will thus only return to correcting the theory of black-holes in Chapter 30. I shall have to avoid the term ‘black-hole’, as its connotations of Relativity are too severe. But it is still thoroughly vital to understand what happens to a body of mass when it exceeds the Chandrasekhar limit. A better term for a body massive enough to collapse under its own gravity would be a Chandrasekhar-star, or C-star. The word ‘black-hole’ has only historical relevance, as belonging to a theoretical paradigm which has been disproved.
Dark Energy is spin, and can only be a fundamental force that was prevalent from the start of the universe. The reason why the universe is uniform is that spin as a force separated the singularity at the very beginning. There was no ‘big bang’; instead there was a very smooth ‘big unwind’. Spin as the fifth fundamental force would have had to overpower gravity at the beginning. But it seems that spin could have subsequently tapered off as the universe has expanded.
The entire universe still spins, and this I have called the ‘Cosmic Coriolus’, which is the source of why an object starts to spin as it approaches the velocity of light. This is similar to how Earthly weather systems such as hurricanes start to spin with increased rotational velocity as their linear velocity increases. But instead of a 3-d planet, the entire universe is a 4-d rotating sphere. The Cosmic Coriolus would fit into the same ontological place where Einstein figured his Cosmological Constant should be (in opposition to gravity). Even if his answers were mostly wrong, his questions were utterly exquisite.
Thus space
is curved extra-dimensionally, but not due to mass and gravity.
The curvature of space is the curve of a 4-d rotating expanding
sphere. By 4-d, I mean four dimensions of space. Time is not
the fourth dimension. It is something else entirely.
All of
this (and much more) leads me to make this computational analysis
of gravitational waves and General Relativity from the foundational
basis of a functional multi-body Newtonian-Planck paradigm with
absolute clarity of purpose.
I am hoping that the reader has made a close study of the Time Dilation Conundrum, whereby it was clearly shown that time dilation in the Special Theory of Relativity is utterly impossible in a logical universe. That same computational method has been applied here. Theorists who were not computer programmers, such as Einstein, were able to believe that Relativity was a logical paradigm because their various formulae all stood alone. Only when we place them into a single algorithm do we see that the pieces of the puzzle simply do not fit together. Let me try to explain, in ordinary language, what it takes to construct these models.
If we have 10 principles which must work in the same computational model, we have to be utterly certain of the precise logical structure of all 100 interactions between those principles. Every principle must interact with every principle in all instances without violating the integrity of any other principle. Each principle must also have an exactly defined relationship with itself.
Likewise, if I have 10 physical bodies interacting in the algorithm, this requires 100 relationships between the bodies for each principle. Not only must each principle not contradict itself, but the principles must not contradict each other within the same body, nor across all the other bodies and all their potential interactions.
When functioning, this software must then compute 10 000 separate functional relationships, each of which often requires quite a number of arithmetical and trigonometric formulae as well. I cannot at any point make assumptions or take for granted any mathematical relationship between any of the 10 000 relevant processes. Every relationship must be spelt out in computer code to the tiniest detail, or the computer program will crash. But it becomes fairly obvious that the program will crash, even before it does, if one has spent a significant amount of time simply thinking about how all the details must fit together.
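The counting argument above can be made explicit. Treating every ordered pair, including a principle's or body's relationship with itself, as a distinct relationship gives 10 × 10 = 100 principle interactions, 10 × 10 = 100 body relationships, and 100 × 100 = 10 000 combined relationships. A sketch of that arithmetic (the function name is my own, purely illustrative):

```python
def relationship_count(n_principles, n_bodies):
    """Count ordered relationships, including self-relationships,
    as described in the text: every principle against every
    principle, every body against every body, combined."""
    principle_pairs = n_principles * n_principles  # 10 -> 100
    body_pairs = n_bodies * n_bodies               # 10 -> 100
    return principle_pairs * body_pairs            # 100 * 100 = 10 000

print(relationship_count(10, 10))  # → 10000
```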
In this article I have expressed, in ordinary language, the logical consequences of how that computer code functions, for the benefit of theoretical physics and the philosophy of science. Luckily, the only part I do not have to worry about is the arithmetic, which the computer executes perfectly to the 14th decimal point, constituting an error margin of 1 mm per light year. That is quite a luxury which Einstein did not have over a century ago.
In creating such a computer model, the programming language will simply not allow me to make an algorithm which is self-contradictory; it will instead generate a critical error. Pencil-and-paper math, by contrast, can happily be riddled with logical contradictions with nobody the wiser. This process was applied to Newtonian gravity in my earlier models, and it led to the conclusion that time must exist in quantum jumps, as Planck had concluded (and indeed Zeno as well).
Those models operate perfectly, without any contradictions. The principle is flawless; the only drawbacks are that no computer can ever get remotely close to operating at quantum time itself (5 x 10^-44 s, or 0.00000000000000000000000000000000000000000005 seconds, per iteration), and that the more bodies in the model, the slower its evolution.
Because the computer runs exponentially slower than quantum time, it could be considered that there is an exponentially high margin of error. But interestingly, I can make the margin of error worse, and when I do so there is no fundamental change to the results. Even if I increase the margin of error many thousands of times, I get the same results. So there is no strong inductive reason to suspect that the results would improve by improving the margin of error! Of course no model can ever be entirely accurate. But at least it can be non-contradictory. Internal logical consistency may not seem a very high aim, but it is a far more difficult goal to achieve than one at first thinks, given a century of Relativists.
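The step-size robustness check described above can be sketched in miniature. This is my own illustrative harness, not the author's models: it assumes a single satellite on an initially circular orbit around a fixed central mass, integrated with semi-implicit Euler. The point it demonstrates is the method, namely coarsening the time step substantially and comparing outcomes.

```python
import math

def run_orbit(dt, total_time, gm=1.0):
    """Integrate a satellite on an initially circular orbit around a
    fixed central mass (with GM = gm) using a fixed time step dt,
    and return its final distance from the centre."""
    x, y = 1.0, 0.0
    vx, vy = 0.0, math.sqrt(gm)  # circular-orbit speed at r = 1
    for _ in range(int(total_time / dt)):
        r = math.hypot(x, y)
        ax = -gm * x / r**3      # inverse-square acceleration
        ay = -gm * y / r**3
        vx += ax * dt            # update velocity first (semi-implicit
        vy += ay * dt            # Euler, stable for orbital motion) ...
        x += vx * dt             # ... then position, with the new velocity
        y += vy * dt
    return math.hypot(x, y)

# The robustness check: coarsen the time step a hundredfold over one
# full orbital period and compare the resulting orbital radii.
fine = run_orbit(0.0001, 2 * math.pi)
coarse = run_orbit(0.01, 2 * math.pi)
print(abs(fine - coarse))  # small: the orbit barely changes shape
```

The satellite's orbital radius stays near 1 in both runs; only the finer details shift with the step size, which is the kind of insensitivity the text reports for its own models.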
And then one day I decided to take an afternoon and simply tack the Relativity formulae onto the Newtonian-Planck models. At that point I had no idea that three and a half years later I would be obsessively and thoroughly refuting the most popular idea the modern world has known.
But it should all have been obvious without the algorithms. It should be easy to see that the alleged fluctuations in Relativistic time are totally inconsistent with the concept of time as an indivisible quantum unit. Zeno would never have accepted Relativity. So why did Planck not disavow it? That will be answered in the next section.