
The 3 Methods of Propagation for a Type III Civilization


Super Polymath


Type III civilizations:

 

 

Post-biological, but born biological

 

Short range, high propagation rates

 

 

Mid range, mid propagation rates

 

I question whether a warp drive would be possible in my cosmological model. The issue is that you can only warp spacetime negatively (i.e. artificial gravity) in an antimatter universe, and warp spacetime positively (i.e. antigravity) in a matter-dominated universe. Due to annihilation, the energy requirement to continuously replenish the antimatter is enormous, as the charge of the expanding vacuum medium continuously cancels out the shell of positrons suspending the nucleus in a negative charge. I think that with the tremendous efficiency of a postmodern, quantum-computer-controlled, muon-catalyzed fusion reactor, a spacetime bubble could be sustained.
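For scale, a minimal sketch of the rest-energy bookkeeping behind that replenishment cost, using only E = mc²; the gram-per-day consumption figure is a made-up placeholder, not something from the post:

```python
C = 2.998e8  # speed of light, m/s

# Hypothetical placeholder: suppose the bubble burns through 1 gram of
# antimatter per day to offset annihilation losses.
antimatter_kg_per_day = 1e-3

# Rest energy that must be re-created each day, from E = m c^2.
energy_j_per_day = antimatter_kg_per_day * C**2
print(f"{energy_j_per_day:.2e} J/day")  # ~9.0e13 J, roughly 21 kilotons of TNT per day
```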

 

As opposed to creating antihydrogen at the front of the craft, an artificial black hole could be created by a Dyson swarm focusing the energy of the solar wind into the Planck length in order to surpass the … by several orders of magnitude. Unlike the violent gamma-ray bursts of the decaying antihydrogen, the energy released by the artificial black hole as it absorbs massive amounts of the hydrogen your craft passes through in space at warp speed comes in the form of angular momentum, condensate quasar plasma, & polarized gamma bursts, in a much more controlled & recyclable way that can help power your craft (a kugelblitz engine) while also condensing spacetime at the front of your craft, generating a much more stable spacetime bubble than the perpetually decaying antihydrogen.

 

 

The issue is matching the condensing spacetime at the front of your craft with the antigravitation inherently produced about the atomic perimeter of the electron shell, using fusion-powered Majorana oscillators, via man-made torsion in the photon aether. That makes the artificial black hole just as impractical as the antihydrogen, unless the spaceship is the size of a city.

 

Carbon nanotubes can't handle a black hole, but QCD femtostructures, such as gluon junction buckyballs, can.

 

The rear of this city-sized craft would be invisible save for hundreds of ion beams, each the size of a house; the middle would be visible & would contain a building-sized, quantum-controlled fusion reactor; the front would be a relatively tiny orb made of a series of centrifugal disks composed of QCD femtostructures, spinning at relativistic speeds and completely outshone by the rotating matter jets produced by the quasar inside it.

 

Long range, low propagation rate

 

There will be kugelblitz engines to provide all the energy needed by a Type III for googols of years, reducing the need to propagate to other systems except to network a greater amount of collective processing power. QE (quantum entanglement) information panspermia is very long range, several decillion light years, because Goldilocks worlds conducive to remotely evolving civilizations via QE & probability manipulation are so rare:

 

 

He has a chaotic set (x1, x2, ..., xj, ..., xn), where the variables are filled in randomly.

 

He has proposed an ordered set (x1+a, x2+a, ..., xj+a, ..., xn+a), where the variables are filled in according to logical extrapolation. (I assigned that achievement to some quantum-computing/matrioshka-brain-networking, multi-galactic Type III civilization in a microverse (recursive fractals), experiencing more radical variations in its perception of time than a Type III civilization with our perception of time (scale relativity) would.)

 

What is left to do is to find the probability, that is, the 1-in-n chance, that (x1, x2, ..., xj, ..., xn) = (x1+a, x2+a, ..., xj+a, ..., xn+a).

 

f(n) is, basically, the maximum dynamical range of reality's form: how many forms matter can take.
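A minimal sketch of that 1-in-n matching probability, under the strong assumption that each of the n variables is drawn uniformly and independently from f(n) possible forms; the toy numbers are mine, not the post's:

```python
from fractions import Fraction

def match_probability(num_forms: int, n: int) -> Fraction:
    """Chance that a randomly filled n-tuple equals one fixed target tuple,
    assuming each slot draws uniformly and independently from
    `num_forms` possible forms (a strong assumption)."""
    return Fraction(1, num_forms) ** n

# Toy numbers (hypothetical): 10 possible forms, 6 variables.
p = match_probability(10, 6)
print(p)  # 1/1000000, i.e. the "1 in n" chance that the chaotic set equals the ordered set
```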

 

In my thread on creating a universe by compacting light, with space, time, matter, and energy all existing as one continuum with a reverse counterpart, there is a way to extrapolate that dynamical range. & with a quantum computer, perhaps f(n) can give you the answer to anything you could ever possibly inquire about, by eventually actualizing that probability in its computations. In the process of doing so you will have constructed every possible & impossible virtual reality there could ever be. Only the possible ones are included in the set (x1+a, x2+a, ..., xj+a, ..., xn+a).

 

I proposed this as a necessary method for quantum-controlled, superluminal, entangled-state-particle (ESP) based information panspermia. You'd need to know how ESP will alter the evolution of alien lifeforms and civilizations in other galaxies; the only way to know is to have every possible variation considered using all possible virtual renditions of reality.

 

 

 

Considering that one of the first things a solar quantum memristor (Matrioshka Brain) is going to do once fully operational is start running simulations, they probably know the future.

 

The inverse of a particle's spin, that is, its antiparticle, is synchronized with it at a rate that has to be at least 4 orders of magnitude (ten thousand times) faster than the speed of light, if not instantaneous as the standard quantum interpretation assumes. My interpretation calculated entanglement to occur non-instantaneously; the calculation came to 7 times faster than the speed of light, assuming particles are entangled by micro gravitational waves. Relativistic antiparticles will produce superluminal gravitational waves in the photon aether opposite to the direction they're going; positively charged relativistic particles will produce superluminal gravitational waves in the photon aether congruent to the direction they're moving. That's the addition of velocities in a displacement-like domino effect between the relativistic particle & the photons phasing it, which alone does not account for 4 orders of magnitude faster than light. Time dilation means the speed of light is faster relative to the increased mass of relativistic particles, as all mass begins in the photon aether in my interpretation; any increased density will compress time, and in order for c to remain constant when time is longer, the speed of light has to increase. That, combined with the addition of velocities, did add up to over 7 times faster than light.
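As I read it, the arithmetic amounts to multiplying an addition-of-velocities factor by a density/time-compression factor. A toy sketch with placeholder factors; the post only gives the product ("over 7 times"), so the individual values below are assumptions:

```python
C = 299_792_458.0  # speed of light, m/s

# Hypothetical placeholder factors -- the post states only their product (> 7).
velocity_addition_factor = 2.0   # domino-effect boost from aether displacement (assumed)
time_compression_factor  = 3.6   # local increase in c from density/time compression (assumed)

effective_signal_speed = C * velocity_addition_factor * time_compression_factor
print(effective_signal_speed / C)  # 7.2, i.e. just over 7 times c, matching the claim
```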

 

Entanglement under that notion can send information about something as well as affect, via gravity, albeit on an infinitesimal scale, matter and energy. Information systems can alter the chemistry in, say, organic molecules; the information can be sent to & impacted with the organic molecules on a world a thousand galaxies away and arrange them into nucleotide sequences, causing abiogenesis and starting a chain reaction that can lead to an extension of your Type III civilization (information panspermia), given you have already accounted for all the variables, which are contained within the set (x1+a, x2+a, ..., xj+a, ..., xn+a).

 

So, because the gravitational entanglement of these particles is non-instantaneous in this theory, there actually is a way to send superluminal signals out into space this way. You could even set up a remote computer made of these signals, of pure information, that can receive & send FTL signals all on its own. I call it the Boltzmann brain astronaut.

 

However, for a network with that kind of complexity (basically making us gods, capable of creating intelligent life that can do our bidding on very rare exoplanets in the Goldilocks zones of remote star systems), you'd have to first be capable of time travel. Miniaturization of computer technology would need to be staggeringly more advanced for such communication; it would have to rely on the same kind of communication, but on much smaller scales than our calculations can handle. You'd need to run googolplexes of simulations based on this model until you get a universe that looks exactly like ours; once you happen upon such a simulation & have proven that it acts just like our universe, you can assume that the locations of all the particles & their trajectories will match those in the real universe, giving us omniscience of the past, present, & future. Then we will be able to tell exactly how moving this electron or this proton will affect entangled particles across the entire universe.
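A toy sketch of that search, as rejection sampling over random initial conditions until a simulated observable matches the "real" one. Every name and number here is a stand-in; an actual run would be the googolplexes of full simulations described above:

```python
import random

def simulate_universe(seed: int) -> float:
    """Stand-in for a full cosmological simulation; returns one
    summary observable. Purely illustrative."""
    random.seed(seed)
    return random.random()

OBSERVED = 0.42    # hypothetical measured observable of *our* universe
TOLERANCE = 1e-3   # how closely the simulation must match

# Rejection search: keep drawing initial conditions until the simulated
# observable matches the observed one within tolerance.
seed = 0
while abs(simulate_universe(seed) - OBSERVED) > TOLERANCE:
    seed += 1
print(f"matching universe found at seed {seed}")
```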

 

Whoever may stumble upon some insightful equations will naturally learn, from applying that unclouded physics, that quantum jumps are not instantaneous but law-breakingly fast, and that any matter form can be replicated or transformed into any other matter form freely through nature's own cellular automation. It's a simple matter of plugging the right code into nature, & nature will carry out whatever operation makes the fluctuations in the medium of reality a crystal, or a precious metal, or a stable repeating process that always results in a release of energy, & finally a process that can carry out computations on its own & even perform self-assigned tasks dependent on those operational parameters. Nature is math, btw.
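As a toy stand-in for "plugging the right code into nature," here is a minimal elementary cellular automaton (Rule 110, which is known to be Turing-complete). The analogy, not the physics, is the point:

```python
def step(cells: list, rule: int = 110) -> list:
    """One update of an elementary cellular automaton using Wolfram's
    rule encoding: the 3-cell neighborhood indexes a bit of `rule`."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] << 2 | cells[i] << 1 | cells[(i + 1) % n])) & 1
            for i in range(n)]

# Start from a single live cell ("the right code") and let the rule run.
cells = [0] * 40 + [1] + [0] * 40
for _ in range(5):
    cells = step(cells)
print("".join("#" if c else "." for c in cells))
```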

Edited by Super Polymath

Alright, Polymath, what is the structure of a kugelblitz engine? You have said that term several times. What exactly is the thrust on one of those?

It's impossible to thrust it without unified field oscillators (entropy acceleration) redshifting the photon aether in the rear. Together, the black hole blueshifting the vacuum aether at the nose and the UF oscillators redshifting the vacuum aether in the rear generate a spacetime bubble. The ion drives at the rear of the city-sized ship provide the thrust. At warp, the amount of radiation in the passing vacuum increases relative to the velocity of the spaceship; that, and a sustained barrage of antiprotons generated by the ship, should keep the baseball-sized black hole from evaporating. It will evaporate once you deactivate the redshifting UF oscillators.
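For scale, a minimal sketch using the textbook Schwarzschild and Hawking formulas for a baseball-sized hole; whether those formulas survive in the aether model described here is its own question:

```python
import math

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C    = 2.998e8     # speed of light, m/s
HBAR = 1.055e-34   # reduced Planck constant, J s

def schwarzschild_mass(radius_m: float) -> float:
    """Mass whose Schwarzschild radius equals `radius_m`: M = r c^2 / (2G)."""
    return radius_m * C**2 / (2 * G)

def hawking_lifetime_s(mass_kg: float) -> float:
    """Textbook Hawking evaporation time: t = 5120*pi*G^2*M^3 / (hbar*c^4)."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

m = schwarzschild_mass(0.037)   # baseball radius, ~3.7 cm
print(f"mass ~ {m:.2e} kg")     # ~2.5e25 kg, several Earth masses
print(f"lifetime ~ {hawking_lifetime_s(m):.2e} s")  # ~1e60 s by the standard formula
```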

 

Entropy acceleration occurs at the edge of a planet's atmosphere or the core of a star. It's why a star doesn't collapse under its own weight. Between a proton & the surrounding electrons, the photon aether gets negatively charged; outside the electron shell, the photon aether gets positively charged.

Edited by Super Polymath

 

the front would be a relatively tiny orb made of a series of centrifugal disks composed of QCD femtostructures, spinning at relativistic speeds and completely outshone by the rotating matter jets produced by the quasar inside it.

 

Passing protons, positrons, & redshifted photons in the cosmic rays of the vacuum aether would rapidly evaporate the baseball-sized black hole, but antiprotons, electrons, & blueshifted photons would help sustain it. The spin of the disks could be programmed, like a lineman, to block out all the protons, positrons, & redshifted photons and only let in the antiprotons, electrons, & blueshifted photons, provided you've used a quantum-processing solar memristor to calculate every interaction in the motion of every elementary & composite particle in the local universe.
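A toy sketch of that gate rule as a filter predicate; the class, field names, and the sign convention for "shift" are all my own illustrative choices, not anything specified in the post:

```python
from dataclasses import dataclass

@dataclass
class Particle:
    kind: str           # e.g. "proton", "antiproton", "electron", "positron", "photon"
    shift: float = 0.0  # for photons: > 0 redshifted, < 0 blueshifted (assumed convention)

def admit(p: Particle) -> bool:
    """Gate rule from the post: admit antiprotons, electrons, and blueshifted
    photons; block protons, positrons, and redshifted photons."""
    if p.kind in ("antiproton", "electron"):
        return True
    if p.kind == "photon":
        return p.shift < 0
    return False

incoming = [Particle("proton"), Particle("antiproton"), Particle("photon", -0.3)]
print([admit(p) for p in incoming])  # [False, True, True]
```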

Edited by Super Polymath

  • 1 month later...

The problem with that is you would still be limited by the processing power of the original construct in the first universe. Though the additional processing power in each layer of simulation would increase the complexity of the next simulation.

 

 

What you are talking about is something like this.

 

 

But unfortunately, the processing limit of the internal device is still that of the external device.

 

 

The internal complexity cannot exceed the power of the external device's hardware, because the external device would be unable to process the fluctuations that the internal device uses to generate its processing power. Basically, the internal circuit cannot overtake the complexity of the external circuit, because the external circuit would be unable to process the internal circuit and would stack-overflow.

 


 

 

I find this problem when folding proteins for cancer research: it takes 2 hours to fold a protein that the universe folds, via temperature, in microseconds. From that you can find that a nanometer cubed of real universe is 7.2 × 10^9 times more complex than my computer's circuitry, meaning that a meter cubed of real universe is 7.2 × 10^18 times more complex than my computer's circuitry.
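A quick check of that first figure (2 hours versus a microsecond):

```python
machine_time_s = 2 * 60 * 60   # ~2 hours to fold one protein in software
nature_time_s  = 1e-6          # ~1 microsecond for the same fold in nature

# Ratio of simulation time to physical time.
print(f"{machine_time_s / nature_time_s:.1e}")  # 7.2e+09, the 7.2 x 10^9 figure above
```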

 

https://stats.foldin...g/donor/1780563

 


 

A work unit is a single protein folded correctly via the specs of the real universe.

 


 

 

If the internal complexity could overtake the external complexity, which is impossible from a computing standpoint, these proteins would fold themselves instantly, or faster than a microsecond, within the simulation.

Edited by VictorMedvil


Not so; you're not running it live under that much detail - it's storage capacity, as opposed to processing power, for the sims. You don't even get enough detail to simulate your own processor along with a replica of your entire universe; there's just enough detail to see which planets get targeted.

