Science Forums

Declaration Of Evolved Existence


cal



Intellectual Prescience Expunged

 

If I ever was or am anything, I was or am any of the things my mind contains. From me unto you, in the following document, shall be excreted information that bears an intellectual weight I can no longer carry as a single person. These equivocated 'conceptual projects', if you will, all contain a strong burden of scientific, medicinal, and ethical scrutiny as well as potential breakthroughs and revelations within each aforementioned category. My writing is terrible, and I apologize in advance for it being so, but make note that this has not been made in hopes these words would look nice together; it has been made to break through to the next level of evolving thought beyond what we call the Intellectual Singularity. Let us begin.

 

***

 

We're going to start by answering all of the deepest psychological and most challenging biophysiological questions humans have ever come up with about ourselves in all of recorded history. It's not hard to do; the problem lies in where one could obtain a perfect representative model of the human body & mind and then have absolute (down to the subatomic level) control over it. So far, no workable model like this has ever been made, not even conceptually. Most of the non-workable conceptual models have major flaws, and what's been derived from years of questioning about this subject is that in order to truly understand the way we work, to the level of detail that answers any questions we could ever possibly have about ourselves, biologically and psychologically, we would need a full-scale, real human that you could freeze in time, slice open and examine, and then paste back together and unfreeze for another millisecond, rinsing and repeating indefinitely until we've seen every biological process that could possibly take place in complete cycles. And not just on the micro scale, but on the macro scale: to track neuron firing patterns individually, cell by cell (and then to map them), to derive exactly what happens when you perceive anything within reality (and ergo how one comes up with things outside of reality -> imagination), or to trace exactly where every single food molecule of your 4-by-4 Animal Style burger just went throughout your body, from initial consumption to 4-year recycling within cells.

 

There is now a working conceptual model of this, arguably with more precise control and more accurate readings, that can be easily made into a physical model. Easily. Not until very recently has technology existed that could make this possible, but presently it does and the problem is almost alleviated. I'm now going to propose the conceptual model that would make this possible and the physical technological (hardware and software) elements that would be required, as well as a plan of implementation that can achieve the goal in a feasible and timely fashion (about six months), helping to expand intellectual advancement more than it has in the last millennium. I understand there may be some skeptical baggage being loaded onto this plane, but if piloted the right way, we'll make it to Houston without delays, so hear me out.

 

There is software that has been developed that sifts through point-cloud data much like Google's search engine sifts through all the data of the internet to give you specifically what you asked for in only a few milliseconds. This software is called Unlimited Detail and is developed by Euclideon. The back-end of this software is so clean that any average consumer tower or laptop can run a 21-trillion-polygon environment in real time without even needing a graphics card (or much RAM, for that matter). This is several hundred thousand times more efficient than anything else in existence (and they're looking to optimize it by 3-4 times that amount). This software is the key to breaking the barrier in visual processing power and easily makes itself the greatest advancement in computing power since the integrated circuit. It should be noted that, like any other high-end 3D rendering program or game engine, animation, lighting, physics, and other miscellaneous effect processing are also part of the engine. This software is the basis of the project, and because of how easily it facilitates production, simulation, and real-time rendering, it will be incredibly easy (comparatively) to model a full human embryo cell within the render environment.
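The "search engine for point clouds" idea can be illustrated with a generic spatial index. Euclideon has not published its algorithm, so the sketch below is only a conventional octree in Python: it shows how a range query can discard entire subtrees and therefore touch only a tiny fraction of a huge point set. Every name in it is an illustrative assumption, not Euclideon's API.

```python
# A minimal octree: insert 3D points, then query an axis-aligned box.
# Whole sub-cells that cannot intersect the query box are skipped,
# which is the structural reason huge point counts stay cheap to query.

class Octree:
    def __init__(self, center, half, depth=0, max_depth=5):
        self.center, self.half = center, half
        self.depth, self.max_depth = depth, max_depth
        self.points = []       # leaf storage
        self.children = None   # 8 sub-cells once split

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > 8 and self.depth < self.max_depth:
                self._split()
        else:
            self._child_for(p).insert(p)

    def _split(self):
        cx, cy, cz = self.center
        h = self.half / 2
        self.children = [
            Octree((cx + dx * h, cy + dy * h, cz + dz * h), h,
                   self.depth + 1, self.max_depth)
            for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)
        ]
        pts, self.points = self.points, []
        for p in pts:
            self._child_for(p).insert(p)

    def _child_for(self, p):
        # Index matches the (dx, dy, dz) generation order above.
        return self.children[(p[0] >= self.center[0]) * 4
                             + (p[1] >= self.center[1]) * 2
                             + (p[2] >= self.center[2])]

    def query(self, lo, hi):
        # Skip this whole cell if it cannot intersect the query box.
        for i in range(3):
            if self.center[i] + self.half < lo[i] or \
               self.center[i] - self.half > hi[i]:
                return []
        if self.children is None:
            return [p for p in self.points
                    if all(lo[i] <= p[i] <= hi[i] for i in range(3))]
        out = []
        for c in self.children:
            out.extend(c.query(lo, hi))
        return out
```

The point is structural: query cost scales with the points returned, not the points stored, which is what makes enormous scenes tractable on modest hardware.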

 

Granted, we will need to hire and work with biologists, physiologists, physicists, chemists, organic biochemists, and several highly experienced C++ programmers (among others) to include full, correct properties of every object type (which we will group into atomic element categories), to correctly arrange and assemble all the known tissue and organelles within the cell, and to get an accurate composition of the molecular make-up of what would normally be the surrounding fluids of the placental environment. Once we have a cellular model detailed enough to be indistinguishable from how we study, monitor, and understand real human cells, and once it has been copied and pasted into an environment indistinguishable from that of a normally developing embryo, we will be able to spawn objects (chemical proteins and nutrients needed by developing cells) in the program just like ammo spawns in shooter games. Once we've set up the spawn placements and interval times, the only thing left is to click "run". This is the moment the swelling starts to subside and the pain recedes; this is the moment every question starts to get answered.
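The spawn-site mechanic described above, nutrients appearing at set places and intervals the way ammo respawns in a shooter, can be sketched as a tiny scheduler. Everything here, from the class names to the event format, is an illustrative assumption rather than part of any real engine.

```python
# A toy nutrient spawner: each site emits one molecule into the
# environment at a fixed simulated-time interval. run_spawner steps
# simulated time and collects (time, molecule, position) events.

from dataclasses import dataclass


@dataclass
class SpawnSite:
    molecule: str          # e.g. "glucose" (illustrative)
    position: tuple        # (x, y, z) in the render environment
    interval: float        # simulated seconds between spawns
    next_time: float = 0.0 # when this site next fires


def run_spawner(sites, dt, t_end):
    """Advance simulated time in steps of dt and fire due spawn sites."""
    events, t = [], 0.0
    while t < t_end:
        for s in sites:
            while s.next_time <= t:
                events.append((s.next_time, s.molecule, s.position))
                s.next_time += s.interval
        t += dt
    return events
```

A real engine would presumably drive this from its own clock; the inner `while` simply guarantees a site can catch up if `dt` is coarser than its interval.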

 

Any major chemical misplacements or even entirely missing organelles that were previously undiscovered would start to be created and regulated properly by the cells' DNA (the specific code of which we will probably pull from a team member and then edit slightly to get rid of known genetic mishaps). This is a video-game 3D environment, keep in mind, so we have absolute god-like control over everything that happens. Right in the middle of cellular replication we can pause. We can rewind. We can fast-forward. We can copy and paste sections of the cell or its environment, or even inject new molecules into the DNA at any time to see the immediate results of their direct effects, or into any place in or around the cell for that matter. There are a lot of possible scenarios we can run from here. There are two that need to be tackled ab initio and simultaneously, though; whatever team or group of researchers does this HAS to do both at the same time: you are ethically and socially obligated to, in order to efficiently resolve major medical and scientific problems our world currently faces. These two rendering scenarios are:

 

<1> Pressing play and letting the cell carry out its normal biological processes, never interrupting it by introducing new factors. The pause button will be crucial at this point (even though we will probably be running the environment at 1/1000th real-time speed), as will whatever database-collection modules we code into Unlimited Detail's source for data collection. This will be the control experiment to base everything else on, and this render will also become, for lack of a better name, Joshua (the reference derived from the movie WarGames). We can run this rendering indefinitely for all practical purposes. Eventually what will happen is the cell will split, and then those two will split some more, and eventually we'll have a few thousand cells which will make up a fetus. The amount of data we will be able to draw about human development as well as tissue formation will be unmatched by anything in history. This is a bold statement, but it is a measured one; nothing with this level of analysis has ever existed before.

 

As the fetus develops more and more, we will be able to start monitoring, cell by cell, the growth and formation of the human brain as each individual connection is made, as well as the growth and formation of every other organ in our entire bodies. The amount of collected data from here on out grows exponentially, as you can imagine, with Joshua's growth. The protein and nutrient spawn intervals and amount regulation will have controlled and timed increases as the control model takes up more of its environment. This is mostly a rinse-and-repeat process from here on out: regulating what the fetus takes in while collecting enormous amounts of data to study. Hopefully it will get to a point where we will have the formation of an actual baby ready to birth, and there will probably be quite a bit of moral, religious, and political baggage tied into the project at this point. Most of that can be disregarded when the brain is developed enough to start forming low-level sentience.

 

Very human thought patterns will start to emerge from Joshua, and regardless of everyone's opinions on the research, the real problem will become how we define life and what you can call life, human life, if all it takes is a genetic disposition and a little bit of sentience. Joshua will be made of oxygen, hydrogen, carbon, and the rest of the normal organic soup that we are all made out of, granted that it is the point-cloud data equivalent of all those elements, so what separates a computer file from an 'actual' human life form (and existentially a human consciousness) will start to be blurred. This is where the fun starts for the team, if it even gets this far. I am predicting, along with most others of you reading, that it will not get this far, maybe not even past the first cell, either because of hardware/programming limitations or social/political discouragement. In either case, once we have the initial cell started, we can begin on scenario two. </1>
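The pause/rewind/fast-forward control described for these renders boils down to checkpointing: snapshot the full environment state every N steps, then restore the nearest checkpoint and replay to reach any earlier moment. A minimal sketch, assuming a deterministic step function; the state shape and names are placeholders, not anything an actual engine exposes.

```python
# Checkpoint-and-replay time control for a deterministic simulation.
# Snapshots are taken every `snap_every` steps; rewinding restores the
# nearest snapshot at or before the target time and replays forward.

import copy


class Timeline:
    def __init__(self, state, step_fn, snap_every=10):
        self.state, self.step_fn = state, step_fn
        self.snap_every = snap_every
        self.t = 0
        self.snaps = {0: copy.deepcopy(state)}

    def advance(self, n_steps):
        for _ in range(n_steps):
            self.state = self.step_fn(self.state)
            self.t += 1
            if self.t % self.snap_every == 0:
                self.snaps[self.t] = copy.deepcopy(self.state)

    def rewind_to(self, t_target):
        # Restore the nearest snapshot at or before t_target, then replay.
        base = max(t for t in self.snaps if t <= t_target)
        self.state = copy.deepcopy(self.snaps[base])
        self.t = base
        self.advance(t_target - base)
```

The trade-off is the usual one: denser snapshots cost storage but make rewinds faster; sparser snapshots cost replay time. Non-deterministic physics would additionally require logging random seeds.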

 

After reviewing this a bit, I've noted that starting with an embryo cell may lead to several complications. The first is that an embryo cell is more developed and would be harder to build; there is also the fact that this is meant to further our understanding of the human body, and we would be skipping the first few steps that show how we are created. To remedy this, and if another assessment from the team confirms it, we may choose to start with an egg and a sperm cell instead and perform a sort of "Artificial Artificial Insemination" (repeated on purpose), the first of its kind.

 

<2> Set up a series of part-cells, or cell sections (mostly DNA segments), and run a large database (full of models made with point-cloud data) of viruses, proteins, and chemicals through in various combinations. The reasons for this render scenario are obvious, I hope, in that we can easily observe and then experiment to find the chemicals or proteins needed to create cures for all kinds of diseases, various infections, and things our cells find problematic, including cancer and HIV.

 

Normally running through just a few chemical combinations to observe mass-cell reactions takes months to plan and then execute. We could go through hundreds of thousands of possible cures for various illnesses within hours on a single cell since we will finally have a real digitized cell to work with as well as the most limitless chemical repository in existence which is only going to be limited to the number of dots we want to connect (literally, it's that easy to create molecules when using point data - just click, connect dots, and then assign element types to each dot, you don't even need to account for polarity or bonding types since the engine source will be augmented to automatically figure that part out for you); and the database can be cloud-oriented where registered researchers for the project can add molecules that weren't previously there (after going through an approval process like the Steam Workshop). </2>
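The "connect dots and assign element types" editor, with automatic bond detection, might look like the toy sketch below. The distance rule and the radii table are deliberately crude illustrations; real bond perception is far more involved than a cutoff test, and none of these values should be read as chemistry.

```python
# Toy molecule builder: atoms are (element, position) "dots", and a
# crude distance rule stands in for the automatic bond detection the
# post imagines the engine would do. Radii are illustrative only.

import math

RADII = {"H": 0.31, "C": 0.76, "N": 0.71, "O": 0.66}  # toy values, angstroms


def infer_bonds(atoms, slack=0.4):
    """atoms: list of (element, (x, y, z)). Bond any two atoms whose
    distance is within the sum of their radii plus some slack."""
    bonds = []
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            (ei, pi), (ej, pj) = atoms[i], atoms[j]
            if math.dist(pi, pj) <= RADII[ei] + RADII[ej] + slack:
                bonds.append((i, j))
    return bonds
```

For a water-like geometry this yields the two O-H bonds and correctly skips the H-H pair, which is the whole "click dots, let the engine figure out the rest" idea in miniature.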
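The scenario-two workflow of running a compound library against a simulated target in various combinations reduces to a plain screening loop. In the sketch below, `simulate_binding` is a stub standing in for an actual engine run, and the scoring threshold is an arbitrary assumption.

```python
# Combinatorial screening sketch: try every combination of up to
# `max_combo` compounds from the library against a target, keep the
# combinations whose score clears a threshold, best first.

from itertools import combinations


def screen(library, target, simulate_binding, max_combo=2, threshold=0.8):
    hits = []
    for r in range(1, max_combo + 1):
        for combo in combinations(library, r):
            score = simulate_binding(combo, target)
            if score >= threshold:
                hits.append((combo, score))
    return sorted(hits, key=lambda h: -h[1])
```

The combinatorics grow fast (C(n, r) per size r), which is exactly why the post's throughput claim matters: a screen like this is only useful if each `simulate_binding` call is cheap.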

 

Save the two templates before pressing start, and throw them in a zip file together. If there's social or political pressure to hide the findings, then copy the two 3D template files and any special engine source files to MediaFire and open a webpage for the free distribution of the project. This may seem crazy to most researchers, businessmen, or anyone who would be looking to make money off of this. Those people are looking for the wrong thing here. The principal altruistic idea behind this whole project is so intrinsically bonded to freedom of thought and the expansion of knowledge and flowing information that to be so egocentric and logically perverse as to say it's okay to put a price tag on this and then sell it to people is indefensible. Doing so is inherently flawed: logically, ethically, semantically, and existentially. It defeats the purpose of the project to begin with; creating this project with an aim to market it commercially (or for profit at all) opposes the values the project represents and will lead it down a road of failure, as well as skepticism of its plausibility to begin with. If anyone tackles this project, you cannot let this happen. The product and all the research materials must be made free, open-source, and easily accessible for mass distribution amongst the populace. We are trying to breach a new era of evolution in thought here, and to limit the distribution of information that brings us there is to stop us from advancing at all. I hope you understand the weight of these words, as they are most likely some of the more important ones in this entire document.

 

That being said, even though this project will not aim to turn a profit, that does not mean it is not feasible. "How will we pay for all of this?", you ask. Let's break down exactly what we have to pay for:

 

[1] - Unlimited Detail Engine License ~ $ 7,000.00

-> (Access to source code)

[2] - A couple of moderate-to-high-end servers. ~ $ 6,000.00

[3] - Maybe a couple 1.2 Petabyte Hard Disk(s) [available this year] ~$ 750.00 ea.

[4] - Programmer Team ???

-> (1 Network Admin, 1 System Admin, 1-2 DB Admins, 4-8 Software Engineers)

[5] - Scientist and Research Team ???

-> (2-3 Biologists, 2 Chemists, 1-2 Organic Biochemists, 1 Physiologist,

1 Particle Physicist, 1 Neurologist, possibly 1 Psychologist,

and 1-7 others from various fields)

 

Now we don't even need to set up a business entity for this, instead the group running the project can just obtain a grant for research (this is relatively easy to do in California) and most of the costs will be paid for in full by the grant as well as any donations the team can get from public, government, or organizational groups. This could make for a really interesting Kickstarter or Indiegogo campaign as well. Keep in mind most of the listed dollar amounts are estimated and grants tend to be limited, but this is still a very doable project with a small budget (small in comparison to major labs and most major experimentation teams).

 

You may also note that there's no hundred-step process I've laid out for implementing this project. This is intentional; the full process is in my head and will stay there for some time. I gave a very crude description here to show some of its substance, but I've retained the rest of it to ensure that my idea stays my own. If someone still ends up borrowing this concept, they are free, of course, to implement their own process by which to complete this project, as long as they keep to the guidelines of making all the products and data that come out of this free and available to the public. I'll reiterate that concept one last time to make clear its necessity to this whole project.

 

Knowledge is NOT power. Shared Knowledge is power.

 

If you had a perfect photographic memory and memorized every document ever written, you would be a totally useless sheeple, entirely purposeless until you did something that affects others with the knowledge you gained. Knowing something means nothing; it's how you act upon your knowledge that gives meaning or importance to it. Ergo the reason I am finally making this document. It is possible that I am not the only one with ideas similar to this specific project build; in fact, other research teams are trying to do similar builds, but in different ways.

 

One existing research project is an example of this, but its team only uses algorithms to mimic neurons rather than building actual neurons, and it can only process a small part of the overall brain, the neocortex, rather than a fully developed brain.

 

The difference is in how this knowledge is being accessed, and in a lot of other teams, it is accessed in a way where only small parts can be examined at a time. Outsiders to the team are also not allowed to observe what the team does on a daily basis; most teams are very secretive. This is a flaw. The concept of sharing knowledge is undercut by closing off whom it gets shared with; knowledge is not power if you don't act upon and share it.

 

Now that we have a plausible means of setting all this up, let's talk about the political sides to this, "political" meaning the process by which the scientific method is used and then peer-examined whenever a claim is made.

 

As previously stated, we could run hundreds of thousands of experiments a day, making it a hassle to go through and claim that a breakthrough was made in a couple hundred of them and then have to wait months and months for committees and organizations to refute and cross-examine every individual find until maybe a few are accepted. Instead of having to present what we did in front of a panel or require someone to document every little step of our process, a huge chunk of time can be saved simply by setting up an environment logger correctly. What I mean by this is writing code into the source that not only takes a snapshot of the point-cloud data in three-dimensional space rather frequently, but also creates modules to handle and record where specific things are at specific times in the environment (like where all the iron atoms or cholesterol molecules are) throughout the experiments. Throwing all that and a boatload of other variables into a well-set-up and maintained database that can be easily traversed will make this whole process dramatically easier for everyone else. Instead of spending months going over how we carried out using all the equipment (which we won't have a whole lot of) and sanitizing the environment and blah blah, we can just give the log and recorded simulation to the reviewers and documenters and say, "This is it, have at it."
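The environment logger described above might be sketched as follows: periodic full snapshots plus per-step position tracking for watched species, written to an in-memory store standing in for the project database. All names and the data layout are illustrative assumptions.

```python
# Sketch of the proposed environment logger: every `snap_every` steps
# it records a full point-cloud snapshot, and every step it records
# the positions of watched species (e.g. iron atoms) for later query.

class EnvLogger:
    def __init__(self, watched, snap_every=100):
        self.watched = set(watched)      # species to track each step
        self.snap_every = snap_every
        self.snapshots = {}              # step -> full point list
        self.tracks = {}                 # (step, species) -> positions

    def log(self, step, points):
        """points: list of (species, (x, y, z)) for the whole environment."""
        if step % self.snap_every == 0:
            self.snapshots[step] = list(points)
        for species in self.watched:
            self.tracks[(step, species)] = [
                pos for (sp, pos) in points if sp == species
            ]

    def where(self, step, species):
        return self.tracks.get((step, species), [])
```

A reviewer could then be handed the `snapshots` and `tracks` stores instead of a months-long procedural write-up, which is the whole point of the proposal.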

 

Another side to this is the politics that go on within the team itself. There would be many benefits to doing research and experiments with this kind of setup. One of the best is the extremely expensive lab equipment we won't have to pay for. A five-million-dollar electron microscope won't be needed for cell examination (where it usually would be for this kind of work) because we will be able to zoom in indefinitely, at any time we want, with zero blur or distortion. Having working models like this that exist within virtual space is a powerful tool. The lack of physical tools should, in theory, make for smoother experimentation, since everyone will be able to share all their information with each other instantly, only a button click away, and everyone in the team can check everyone else's full work in detail at any time, without them being there (if the logging is set up right).

 

If having a science team for research and experimentation ends up being too much of a hindrance on the project, or if affording their time costs too much to begin with, an elegant alternative arises. We don't need a team of scientists for the first scenario at all. If affording that part of the team is really out of the budget, the rest of the team could just run the initial cell, log the whole thing, take video, save simulation renders, etcetera, and then copy and paste the entire project and all the recorded data onto the internet. Any scientist in the world can look at it at that point, figure out what went on, and make their own inferences outside the initial team that set up the project. This is probably a better way to handle the project until funding for a proper team comes in; it would help promote the whole freedom-of-information thing I've been pushing.

 

Because this would be a totally new avenue for research and experimentation, and because it opens so many doors and promotes free intellectual flow, the social culture of the team should be structured to match it. Whoever runs the team shouldn't have the final word in everything, and should even be limited in power to only having equal say in how tasks are handled. It might even be best if there is no project leader at all, and merely one person is chosen to represent the team for public appearances. What I'm implying is that the team members come together as a collaboration of free and intellectual thinkers and that the project is handled carefully and logically, as every member should know the potential their work will create for the rest of the world. This is probably a grandiose idea, but I have hopes that the team members can get along and act accordingly, as it will make for a much smoother workflow.

 

To conclude this sector of the document, I'd like to say that even though I have just turned 19 and have not received a PhD or any degree in any field yet, I have spent a large portion of my waking hours probing the plausibility of making this project something that can be realistically done. So far, there have been no objective statements proving this is a faulty idea that cannot be done. Permit me to say at this point that this is a very doable concept, and hopefully it will be realized as an actual research project in the near future. I know that my age and lack of a degree (I am majoring in astrophysics, if that changes your mind any) will discredit a lot of what I have proposed, but please stay open to the possibility and try to present better ways of handling this if you feel there are some. I'm not worried that this project is flawed, as long as what it represents is achieved at some point.

 

***

 

I am saving the next few sectors to be written after I have gotten some notable peer review on this document. One major thing I know I am missing is what the requirements of a team member would have to be, but if this goes anywhere at all, we will probably have a website up with an "apply for the team here" tab. I have high doubts about this paper being taken seriously, as most people do not possess the care or the will to make the first project happen. But I also have hopes that a few people do.

 

What I will cover in the other sectors after uploading this first draft (after all I am essentially selling myself to be accepted into the team that handles this), are other original concepts that can be applied to governments, belief systems, and new ways to define "life" scientifically. I realize that I have to validate my arguments for all these things, not just the arguments, but myself. Who am I to say these things, and why does my opinion matter? I will cover this in the later sectors.

Lastly, before I upload this, I have been pondering whether or not it is purely logical to put my name on this document. I know it's highly egotistical and extremely arrogant to claim ownership over my ideas as if they hold any meaning or importance, or even if not just because of the insecurity held over these concepts escaping my intellectual ownership in the first place. But that being said, even though I am principally against egoism, I am more principally against these projects being handled incorrectly and for that reason I will permit this conscience error and say that I, Matthew Robert Garon, have created this original content, and I hope that whatever team starts this project, if not one of my own, that they include me in the development of the initial concept. Thank you in advance for consideration.

And thank everyone who read this and gave critique or support. If you have further suggestions or just want to talk with me, you can contact me here - [email protected]

Edited by Matthew Garon

I'd like to be able to say that, after a few days without objections to the current draft, the first concept I've put up here is a doable one and that I should start recruiting. But I need feedback, and I'd like to hear some retorts or criticism on any flaws I may have missed.

 

If you think this was a good idea overall, please help spread it. I'd like to get this done sooner rather than later, so that we (as a species) can move on to bigger and better things! :lol:


My writing is terrible...

 

Oh I beg to differ. Your writing is quite good. However I think that some would start to disagree when you say:

 

...make note that this has not been made in hopes these words would look nice together

 

because you probably lost your entire audience with your opening paragraph:

 

We're going to start by answering all of the deepest psychological and most challenging biophysiological questions humans have ever come up with about ourselves in all of recorded history. It's not hard to do; the problem lies in where one could obtain a perfect representative model of the human body & mind and then have absolute (down to the subatomic level) control over it.

 

That all?

 

Seriously, if you want to get people to listen to your ideas, it's really best not to start off with "I've solved all of the world's problems, and it's simple because I'm so smart and everyone who's ever thought about it is a worthless idiot."

 

You've just offended everyone who might actually find your topic interesting.

 

Now the biggest issue is you've posted this gigantic thing with 87 points that's hard to follow (despite well-formed sentences and good grammar), hints at things without saying them, and then ends with a "well, this is just part of it because I don't want to give it all away," leaving the reader perplexed and unmotivated to respond.

 

You might try breaking it up into little topics and introducing them one at a time, posed as questions that people might want to answer.

 

Who knows, some of us might actually find what you have to say interesting. But it was so overblown, convoluted and unclear, that few people probably got beyond the first couple of paragraphs, and those that did just saw it get, well, let's just say, less and less interesting.

 

 

Belief in myths allows the comfort of opinion without the discomfort of thought, :phones:

Buffy


can't fault your ambition or self-belief there, matthew - but i think you have made one or two assumptions that you might need to address, apart from the points that buffy made about your bedside manner.

 

you assume that with your team of biologists, physiologists, physicists, chemists and organic biochemists you will be able to model a cell, or a sperm and an egg, and all their processes with enough accuracy to ensure that what unfolds once you press go is identical to what would happen in 'real life'. i would offer that we don't know anywhere near enough about these processes and their unbelievable complexity to be able to render them in software. just go through the history and development of A-life and artificial intelligence and you will see how far away we are. access to amazingly sophisticated software won't fix the overall knowledge gap we have about cell development, the actions and interactions of every single gene, and all the external factors that dictate how an organism grows and develops. your model could be beautiful, but ultimately useless for the ends you envision.

 

i also think you underestimate just how many cells, and therefore parallel interactions, you would need to model too - even in a relatively young fetus.

 

don't kid yourself into thinking that a whole host of bioinformatics experts and software wizards won't already have daydreamed about doing this, or something very similar. if it were anywhere near possible it would be happening already.


I have to disagree with Buffy and agree with you: your writing is terrible. It may be grammatically correct, but it lacks simplicity. Take this example from your first paragraph:

 

From me unto you, in the following document, shall be excreted information that bears an intellectual weight I can no longer carry as a single person.

 

Its pretentious verbosity, unjustified and pedantic, simultaneously stifles interest and obscures meaning. Do you see? Would you not have got the message more clearly if I had said instead "Your use of big words and archaic phrases lost me."

 

I was unable to bring myself to read most of the rest of your post, but what I dipped into suffered the same problems. So I was not offended by being talked down to, as Buffy suggests some readers might be. I couldn't be offended because your writing style led me to suspect your opinions were of no value. I don't think this was the effect you were aiming for.

 

The second thing that infuriated me - which is a plus; at least you are getting some reaction - was the absence of an abstract. Here you are, if I glean correctly from Buffy's observations, proposing something that will change the world, and you cannot even be bothered to offer a one- or two-paragraph opening overview. If your opening had been written in sentences of such elegance and passion that they could rival Shakespeare, I would still choose not to read ahead because of that absence.

 

So, if you are still with me, here is the positive advice.

 

1. Go back and edit your words aiming for simplicity.

 

2. Add a short abstract that summarises your post.

 

Don't do this on the original, but repost the revised item in this thread - if you want to get the attention that you believe your proposal deserves.


Firstly, thanks guys, this is the feedback I was looking for. Keep in mind I wrote this at 2 in the morning, so yeah, it's pretty convoluted. I will try to respond to everyone sequentially.

 

Seriously, if you want to get people to listen to your ideas, it's really best not to start off with "I've solved all of the world's problems, and it's simple because I'm so smart and everyone who's ever thought about it is a worthless idiot."

 

This is not what the words meant. I only said I have a conceptual model to help solve most bio-physiological questions we have. The "in all of recorded history" thing is just hype, I mean, it technically would solve questions we've had for the last several hundred thousand years, but that's not the important part. I don't think I'm smart at all, I'm not a fact-memorizer, my memory is ****. I even go over specifically all the other kinds of super smart people that would be needed to do this and then humble down into admitting that this entire paper's purpose is simply to market myself to be allowed into whatever team tackles this problem. That is all, nothing else.

 

Now the biggest issue is you've posted this gigantic thing with 87 points that's hard to follow (despite well-formed sentences and good grammar), hints at things without saying them, and then ends with a "well, this is just part of it because I don't want to give it all away," leaving the reader perplexed and unmotivated to respond.

 

It's only 5 & 1/2 pages typed, it's not even long enough to be considered an essay. I don't think 87 points is literal, but if you counted 87 then congratulations, you've put more effort into this than I have, haha. Also, I should thank you for saying my writing isn't terrible, but with the amount of rejection this thread has already received, I'm going to start to say it is pretty bad. The thing we need to cover here is that this IS just part of it, these 87 points are just one sector, there's supposed to be about five more, each about another five pages. This is the summarized version, sorry guys. I don't think I could possibly give a single paragraph summary without making it seem a thousand times more amazing than it actually is.

 

i would offer that we don't know anywhere near enough about these processes and their unbelievable complexity to be able to render them in software

 

What we model would essentially just be floating atoms that then engage in bonding and polarity interactions with each other. Everything would play out like atomic interactions in the real world (hypothetically). Their complexity can be easily broken down with this concept, and that's why it needs to be done; the level of examination this model allows is unparalleled by any other model I've seen, granted I haven't seen every model out there.

 

Its pretentious verbosity, unjustified and pedantic, simultaneously stifles interest and obscures meaning. Do you see? Would you not have got the message more clearly if I had said instead, "Your use of big words and archaic phrases lost me"?

 

If the use of big words loses you, you've also lost the right to use big words. Go back to child's play, tyke.

 

I couldn't be offended because your writing style led me to suspect your opinions were of no value.

 

They aren't... This paper isn't really about my opinions though... it just discusses a model in which these problems can be fixed. That is all.

You suggest that I was talking down to the reader in this paper, and now that I've read it myself, I feel slightly belittled. I probably should note that prescience isn't a constant mindstate for me and that again, I wrote this at 2 in the morning, so it's unrevised and raw, and written essentially from an entirely different consciousness. I can't promise this helps or answers your qualms about the tone of the author, but get over that.

 

As to your suggestion of writing a preamble or summary, that first sector there is the summary... I say in it that I haven't listed the full process. I haven't. It's much, much longer. And if you hate me for sounding like a pretentious dick in the document, a paragraph summary would make me sound more pretentious. I'll do it out of humor anyways:

 

Are you ready to explore the most revolutionary concepts in all of science? Are you ready to delve into the rectum of biology itself and pucker the ******* of deep consciousness? WELL THEN PREPARE YOUR ANUS. We have points, we have clouds, we have data, and just about everything else that makes children lie awake at night! This is the next step, this is the next leap, this is the next wheelchair push into the bus that is Discovery! Come on down and take a look at this cool **** that I don't even know what it means anymore! - Story of my life.

 

I apologize for being a dick, but honestly this is the best I can do.

Link to comment
Share on other sites

The "in all of recorded history" thing is just hype

Yah, not recommended you do that dude. I do marketing for a living. Trust me, bad idea.

 

It's only 5 & 1/2 pages typed, it's not even long enough to be considered an essay. I don't think 87 points is literal, but if you counted 87 then congratulations, you've put more effort into this than I have, haha.

See what I mean? Hype: doesn't work too good.

 

What we model would essentially just be floating atoms that then take into bonding and polarity interactions with each other. Everything would carry out like atomic interactions in the real world (hypothetically). Their complexity can be easily broken down with this concept, and that's why it needs to be done, the level of examination of this model is unparalleled with any other model I've seen, granted I haven't seen every model out there.

To echo Mr. blamski, I've spent a lot of time modeling stuff, mostly traffic simulators. The only way any of the modelling you hear about today works is through absolutely *amazing* amounts of oversimplification. The kind of stuff you're talking about here suffers from some serious extrapolation problems:

 

  • You're actually talking about going down to the atomic level, which when you start counting all those itty-bitty little things, the computational requirements exceed all existing computing power by several orders of magnitude.
  • At the atomic level, you've got quantum uncertainty to deal with, which will pretty much introduce so many sources of random behavior that there will be no way that the system will produce any consistent behavior without a lot more understanding of systems we don't really understand.
  • Most good models try to abstract behavior of underlying complex systems in a way that makes them highly "predictable" by just basing them on historical data, and giving them ranges that are well within known (and oversimplified) values. When you try layering a complex model on top of a complex model on top of a complex model ad absurdum, the output is more than likely going to be garbage, because none of the models will be more than rough approximations, and errors cascade to the point where nothing is anywhere close to modeling the real world.
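To make the cascading-error point concrete, here's a toy calculation (the 1% per-layer accuracy figure is an arbitrary, optimistic assumption for illustration, not a measured value):

```python
# Toy illustration of error cascade in stacked models: if each model
# layer is only a rough approximation, accurate to within 1% (an
# arbitrary, optimistic assumption), the composed error grows fast.

def worst_case_error(layers: int, eps: float = 0.01) -> float:
    """Worst-case relative error after composing `layers` models,
    assuming each layer multiplies the incoming error by (1 + eps)."""
    return (1.0 + eps) ** layers - 1.0

for n in (1, 10, 100):
    print(f"{n:>3} layers -> up to {worst_case_error(n):.1%} error")
```

Even at 1% per layer, a hundred stacked layers puts you at roughly 170% worst-case error, i.e. the output no longer resembles the system being modeled.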

 

All this is not really solvable by "knowing more"--we're of course learning lots of new stuff over time--it's a systemic issue that takes you winding through Heisenberg's Uncertainty Principle all the way out to Gödel's Incompleteness Theorem. You can't get there ever no matter how hard you try!

 

What's much more practical is not trying to build a perfect--and impossible to build--model, but rather picking very specific small problems and trying to simplify and isolate the behavior of the system in a way that lets you make useful predictions with the least amount of effort.

 

That's what we do in the real world as scientists and programmers, and while it's not revolutionary or very beautiful (it's like making sausage most of the time), we do end up solving problems and making the world better! :cheer:

 

 

...I wrote this at 2 in the morning, so it's unrevised and raw, and written essentially from an entirely different consciousness.

Like love letters and college essays, it's always better to read them over and edit them in the morning before they're submitted... :)

 

 

I can't promise this helps or answers your qualms about the tone of the author, but get over that.

Nah, that's always the author's obligation. Authors who expect their readers to "get over" the inadequacies of their style/approach simply lose readers....

 

 

As to your suggestion of writing a preamble or summary, that first sector there is the summary... I say in it that I haven't listed the full process. I haven't. It's much, much longer. And if you hate me for sounding like a pretentious dick in the document, a paragraph summary would make me sound more pretentious.

Oh you might actually want to read "A Modest Proposal": Modesty will get you everywhere when you're trying to change the world....

 

 

I apologize for being a dick, but honestly this is the best I can do.

That's okay. Just like your project though, recognizing that getting there is real work can pay off if you put in the effort! :cheer:

 

 

Mustard's no good without roast beef, :phones:

Buffy


Who keeps downgrading my posts lol? I think I'm doing a good job at responding to everyone, regardless of my social value. I don't go around downgrading other people's posts just because I don't like them... Anyways-

 

  • You're actually talking about going down to the atomic level, which when you start counting all those itty-bitty little things, the computational requirements exceed all existing computing power by several orders of magnitude.
  • At the atomic level, you've got quantum uncertainty to deal with, which will pretty much introduce so many sources of random behavior that there will be no way that the system will produce any consistent behavior without a lot more understanding of systems we don't really understand.
  • Most good models try to abstract behavior of underlying complex systems in a way that makes them highly "predictable" by just basing them on historical data, and giving them ranges that are well within known (and oversimplified) values. When you try layering a complex model on top of a complex model on top of a complex model ad absurdum, the output is more than likely going to be garbage, because none of the models will be more than rough approximations, and errors cascade to the point where nothing is anywhere close to modeling the real world.

 

  • The thing is, the computational requirements are largely bridged by the surge in computing power I discussed in the paper. Not even counting hardware, just the back-end processing power of the software is incredibly clean; I wasn't creating hype when I said it was the largest leap in computational power since the integrated circuit. It really is.
  • We aren't modeling quantum physics into the engine; just like any video game with a physics engine, quantum mechanics aren't needed. The smallest unit we'll be working with is also the largest one we can work with. Look up more on Unlimited Detail's point-cloud processing on YouTube to get an idea of how this differs from normal 3D polygons. We can factor uncertainty into the engine, and we will, but it doesn't have to be semantically defined within the code for every quantum possibility; a lot of our observations will be like those of a normal lab without physics computing: we will observe aggregate reactions, not singular ones.
  • This is incredibly inaccurate for the type of data we are trying to collect. Simplifying things that much defeats the purpose of what we're trying to find out. The only thing ad absurdum here is your argument; there is nothing incredibly complex about making a 3D object model in an editor. You're looking at this the wrong way: look at it as if it were a video game (because it technically is) and then think about how the game engine handles object properties, etc. This will be the most realistic model ever; I don't see how running a full, real, living cell all of a sudden becomes less realistic and less observation-worthy. This is more of a programming challenge than a science challenge at this point.

 

What's much more practical is not trying to build a perfect--and impossible to build--model, but rather picking very specific small problems and trying to simplify and isolate the behavior of the system in a way that lets you make useful predictions with the least amount of effort.

 

Like love letters and college essays, it's always better to read them over and edit them in the morning before they're submitted... :)

 

Oh you might actually want to read "A Modest Proposal": Modesty will get you everywhere when you're trying to change the world....

 

That's okay. Just like your project though, recognizing that getting there is real work can pay off if you put in the effort! :cheer:

 

Again, this is not impossible to build lol. If it were impossible to build, then the default environment they run in their demos (27-something trillion polygons) would be impossible to build, and it's not. I'm not sure how you're arriving at the assumption that this idea is so impossible, since it's already being done. Sure, there will be more than 27 trillion atoms in the finished cell, but we don't have to start with the finished product; like I said, we could do something much more basic: the egg and the sperm. Their atom count would be significantly less. Let's say the computational power really is magically not there, and we (royal "we", I mean the team) never get hundreds of cells, hell, never get past the very first cell. It still wouldn't matter. All we need is one perfectly observable cell to watch all the basic and complex processes take place in. All we really need for most of the unknowns out there is that initial cell.
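As a back-of-envelope check on those numbers (the atoms-per-cell figure below is a common order-of-magnitude textbook estimate, not a measurement; the 27 trillion is the demo figure quoted above):

```python
# Rough scale check for the "one cell" proposal. Both figures are
# order-of-magnitude assumptions, not measurements.
demo_points = 27e12       # the "27 trillion" figure quoted for the island demo
atoms_per_cell = 1e14     # common order-of-magnitude estimate for one human cell

ratio = atoms_per_cell / demo_points
print(f"One cell holds roughly {ratio:.1f}x as many atoms as the demo holds points")
```

So even a single cell already exceeds the demo's point count by a few times, before any of the per-atom interaction bookkeeping is counted.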

 

That being said, this here is my morning read-over, you guys are my edits, and this paper is far from complete. I only threw it out here on hypography first because you guys tend to know what you're talking about when it comes to science, but I now realize this is more of a programming issue than a science one, and I will spare you hypographers the next few drafts of the paper.

 

What I find most interesting about all this is everyone's rejection of my personality rather than of the concepts in the paper. Granted, I don't write like I talk at all, but I don't think I came off as pretentious as everyone is reading it, haha. "An intellectual weight I can no longer bear" – I can't even bear it now; I got this idea in a state of prescience, hence the subtitle. I wrote that I wanted to expunge this knowledge much like Neil deGrasse Tyson, when he finds something out and wants to grab the nearest person and yell, "DID YOU KNOW THIS?!?" That's the way I worded it for this document. It was an awesome idea to me and I wanted to tell everyone in the world, to alleviate the anxiety of not being able to talk about it with other people. If you're still reading this like I'm some alpha-male pretentianaught looking down at you guys, read this in a British voice and imagine yourself twirling your invisible mustache. I don't look down on you guys at all; quite the opposite. I cannot stress how incredibly grateful I am that this post got any views or feedback at all. And in all honesty, the figures I gave for the processing power and levels of examination for the project are quite modest. I dialed them down from what they actually were because I felt the claims would be too bold, but I now see they're still too bold for most people to accept.

 

Your last statement about real work confuses me. I'm pretty sure I recognize this as real work, and I'm pretty sure I'm really working on this. Did I not write a paper about it lol? I don't mean this as a counter-point to your statement, I simply don't understand what you meant, please clarify.


matthew, i think there is still something that you are not quite grasping here. in simple terms, what you want to do is to model a cell in so much detail and so accurately, that as it develops, splits, becomes a fetus etc... it will behave in exactly the same way that a live growing cell would. thus you will provide a tool that enables us to observe and understand the process of life in a virtual environment, and this will be a giant leap forward in our understanding of how the organism works. right?

 

what we are all generally saying is that, despite the undoubted power and capability of the software engine you are talking about, this is more than likely absolutely impossible. not that it is a silly idea, just that it can't be done in the way that you wish to do it. there is too much complexity and uncertainty in the thing you are modeling for your outcome to have any meaningful value. we currently don't know enough for you to be able to include all the factors and variables in your program, therefore it is certain that what comes out of your system will be completely unlike what happens in real life. this remains true whether you model a sperm, an egg, a cell or a 12 week old fetus.

 

you can't separate out the science issue from the programming issue, as you are dealing with both. just as you can't separate out the quantum processes from the other processes if you want to successfully model an atom. if you simplify the model you will never recreate the complexities of life.

 

by the way, i'm british and have no moustache to twirl.... what should i do? ;)

Edited by blamski

Matthew, just for the record I didn't downgrade your post, though it may well be deserving of it.

 

Thank you for making an effort to write a summary; unfortunately you failed completely to deliver. A summary is not, as your effort was, simply an introduction. It is what it says it is: a summary. It is an abstract. It conveys the totality of all that is important about the idea.

 

This is a science forum, but here's a blindingly good example of such a summary from religion. (It fails in as much as it is not presented at the beginning, but is otherwise spot on.) This is an executive summary of the New Testament: "For God so loved the world that he gave his one and only Son, that whoever believes in him shall not perish but have eternal life."

 

Now, do you want to try again?

 

Second subject. You said this:

 

If the use of big words loses you, you've also lost the right to use big words. Go back to child's play, tyke.

Get real. The use of big words when they are not required, as is the case in your opening paragraphs, is either pretentious, ignorant, or - as you suggest - the consequence of a tired mind. That's fine: just don't get snappy when someone gives you some good advice.


Matthew, you made nearly the same proposal last year in An Expirement Never Tried:Digitized Organic Lifeform.

 

I think you’re still laboring under the same misconceptions you appeared to have in that thread about (1) the nature and (2) the status of Euclideon’s Unlimited Detail graphics engine.

 

(1) The nature of the Unlimited Detail graphics engine:

Despite the “unlimited detail” name, its underlying amount of detail – that is, number of points (AKA voxels) representing the 3 dimensional objects to be rendered – is finite and pre-determined. It’s chosen based on the expected requirements of the program to use it – in the case of the “island” demonstration, they suggest that they can render down to about the scale of pebbles. It’s intended only to display visible surfaces, so has no need and should not contain “invisible” points inside modeled objects.

 

The model’s point collection is also essentially pre-rendered and static, although Bruce Dell has stated that the dynamic changes necessary to support animation of some objects – essential to using the engine in its intended role in video games – are in the early stages of development and appear to be feasible.

 

Your idea that this graphics engine can be used for physics modeling down to the scale of biological cells or subatomic particles is simply not what it is intended or suitable for, nor a purpose that Dell has suggested.

 

(2) The status of the Unlimited Detail graphics engine

As of a 3 Aug 2011 interview, Dell has stated that “this is not a finished product” ready to be sold or leased to developers. Although Dell tends to be evasive when asked when it may be ready, giving responses like “later”, “once we’re finished”, and “maybe sooner than we think”, and comparing the development effort to the Star Wars movies, he’s not corrected suggestions that it may be years. Some respected critics have suggested that it may never be finished, and that the entire product and company are essentially a scam to get money from investors and government business assistance programs, such as the Commercialisation Australia initiative, which granted Euclideon $2,000,000 in 2010.

 

That Euclideon has filed only 2 applications for Australian patents, withdrawn 1 of them, and not yet been granted the 1 remaining (see here) lends credibility, I think, to suspicions that Dell’s work is a scam rather than a legitimate product. Also very suspicious is the lack of company information, such as a fixed address, evidence that the company has any full-time employees other than Dell, or a report by a trusted third-party reviewer who has actually used the prototype software interactively.

 

Because Dell is attempting to attract financial support (whether with honest or nefarious intent), the text and videos at Euclideon’s small websites are very self-promoting, and evasive on technical detail. I recommend reading at least Euclideon’s wikipedia article and its linked-to references.

 

In short, and independent of the legitimacy of Euclideon, you can’t use static-model graphics rendering optimization techniques to program dynamic physics models. Though the two kinds of programming – and practically every other kind of programming – share some central concepts, such as optimizing by pre-computing “build maps” to limit the amount of calculation a program needs, this doesn’t mean one kind of programming can be used to accomplish the goals of another.


Matthew, you made nearly the same proposal last year in An Expirement Never Tried:Digitized Organic Lifeform.

Quite surprised you remembered that, or did you just click on me and see the other thread I started?

 

(1) The nature of the Unlimited Detail graphics engine:

Despite the “unlimited detail” name, its underlying amount of detail – that is, number of points (AKA voxels) representing the 3 dimensional objects to be rendered – is finite and pre-determined.

Not anymore.

 

It’s chosen based on the expected requirements of the program to use it – in the case of the “island” demonstration, they suggest that they can render down to about the scale of pebbles. It’s intended only to display visible surfaces, so has no need and should not contain “invisible” points inside modeled objects.

>down to pebbles.

Those pebbles are a few hundred atoms each. It's not only intended to display visible surfaces; only older game engines are intended for that. It's true most of their models don't have any points inside the modeled objects, but that doesn't mean there can't be, or that having them would change anything.

 

The model’s point collection is also essentially pre-rendered and static, although Bruce Dell has stated that the dynamic changes necessary to support animation of some objects – essential to using the engine in its intended role in video games – are in the early stages of development and appear to be feasible.

Again, the status of this has changed.

 

Your idea that this graphics engine can be used for physics modeling down to the scale of biological cells or subatomic particles is simply not what it is intended or suitable for, nor a purpose that Dell has suggested.

False. Dell himself said point-cloud data is almost intrinsically linked to modelling chemicals for scientific study, and in emails I sent the company a few months ago, it was stated that using it for large-scale molecular modelling would not be a hindrance on the program or the user.

And nothing would be sub-atomic, just normal atomic lol.

 

As of a 3 Aug 2011 interview, Dell has stated that “this is not a finished product” ready to be sold or leased to developers. Although Dell tends to be evasive when asked when it may be ready, giving responses like “later”, “once we’re finished”, and “maybe sooner than we think”, and comparing the development effort to the Star Wars movies, he’s not corrected suggestions that it may be years. Some respected critics have suggested that it may never be finished, and that the entire products and company is essentially a scam to get money from investors and government business assistance programs, such as the Commercialisation Australia initiative, which granted Euclideon $2,000,000 in 2010.

Can you not think of several good reasons that they're secretive as all ****? They're in the middle of a large buyout involving Nvidia & ATI, and they're not allowed to release much info on their product until that's sorted out. Not to mention Brisbane business reporting law is rather lax compared to standard American business reporting laws.

 

You also mention that there was no third party interaction or any videos of outsiders using the product, this is another thing that has changed.

Nine employees. Eight not including Bruce.

 

independent of the legitimacy of Euclideon, you can’t use static-model graphics rendering optimization techniques to program dynamic physics models.

Animation and physics have been a part of the program from the start; I am confused as to why you think they have not. We won't be using static models, or even any normal "models" at all. What we'd be modelling is an arrangement of atom objects. There would be no full cell model; every individual atom object would interact with every other atom object around it.
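For what it's worth, "every individual atom object interacts with every other atom object around it" can be sketched naively like this (hypothetical code, ignoring the spatial-neighborhood optimizations a real engine would use); the point is that the pair count, not the storage, is what grows fastest:

```python
# Naive cost of all-pairs atom interactions: n atoms give
# n*(n-1)/2 distinct pairs to evaluate on every timestep.

def count_pair_interactions(n: int) -> int:
    """Distinct atom pairs a naive engine must evaluate per step."""
    return n * (n - 1) // 2

for n in (1_000, 1_000_000):
    print(f"{n:>9} atoms -> {count_pair_interactions(n):,} pairs per step")
```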

 

I'm glad to see you looked a little into them though, and I'm glad to see an admin read the article and didn't gag at my language (what's that bad about it, seriously? Using higher language is boring? **** you, Eclogite?), but again, I am finding it hard to understand why this concept is so impossible. Let us say Unlimited Detail's engine isn't as grandiose as it seems, or let us say it's a total hoax altogether. If something else like it came out, and at the rate processing power is growing, why is it so impossible for this concept to be implemented and finished within the next decade? I truly, honestly don't understand. I would like you to explain it to me, but I think you will just come back with the same arguments you did in the last thread and tell me the software can't handle it, when it really can. But if you have a better argument, and if Unlimited Detail is a load of crap, then why can't a similar engine do exactly what my project would need it to do? Please tell me.

 

And on a side note, my conceptual project is already being started. Did I mention that in the paper? I don't think I mentioned that. I should mention that. This Joshua cell project is already being started... I just wanted to lay claim to the idea.

Edited by Matthew Garon

....there is nothing incredibly complex about making a 3D object model in an editor. You're looking at this the wrong way: look at it as if it were a video game (because it technically is) and then think about how the game engine handles object properties, etc. This will be the most realistic model ever; I don't see how running a full, real, living cell all of a sudden becomes less realistic and less observation-worthy. This is more of a programming challenge than a science challenge at this point.

Well the thing that you're missing here--which you avoided in your reply to Craig's last post--is that you're completely misunderstanding *why* the model you're proposing is complex.

 

As Craig pointed out, the body is dynamic. You're trying to say that a tuple with a handful of values (a polygon) is equivalent to the very complex rules and interactions that bubble up through hundreds or thousands of layers of abstraction, represented by everything all the way down to the atoms (which are indeed non-deterministic due to quantum effects, and if you don't understand this, you're really not going to end up with anything useful).

 

The body is not a few trillion polygons, it's a few trillion parallel, interconnected processors.

 

Your paradigm isn't even in the ballpark, and it does not appear that you're aware of it.

 

Really, the stuff that Dell is doing is important because it's an economical implementation whose driving force is more Moore's law first and some interesting graphics coding second. Might it be applied to creating other "static"/centrally controlled models? Sure, but the complexity of control is a much bigger issue than simply storing a whole lot of elements that sit there. Honestly, this kind of computing power and these techniques (multiprocessing) have been around for quite a while, and that begs an obvious question:

 

Your last statement about real work confuses me. I'm pretty sure I recognize this as real work, and I'm pretty sure I'm really working on this. Did I not write a paper about it lol? I don't mean this as a counter-point to your statement, I simply don't understand what you meant, please clarify.

It seems you've skipped the most important part of any process like this, which is to ask yourself: if it's so simple, why hasn't anyone else already done it?

 

Jumping to the conclusion that it's obviously because you're a genius just engenders visions of Wile E. Coyote rubbing his paws in self-satisfied feelings of brilliance as the anvil heads straight for his gigantic over-sized noggin.

 

Don't want to discourage you of course. You may be on to something, but skipping a bunch of obvious steps shows a lack of "real work" and is not terribly effective in convincing others of the brilliance of your ideas.

 

 

An intelligence test sometimes shows a man how smart he would have been not to have taken it, :phones:

Buffy


Matthew, you made nearly the same proposal last year in An Expirement Never Tried:Digitized Organic Lifeform.

Quite surprised you remembered that, or did you just click on me and see the other thread I started?

I remembered it.

 

I’m a professional medical information programmer with a specialization in the use of sparse array technology, so I’m always immersed in the sort of metaphorical “search engine” Dell’s talking about in the context of reducing the computing requirements of forward ray mapping (which, as best I can tell from his assorted videos and a few webpages, is how Unlimited Detail does its final rendering), though I’m selecting and presenting medical, not graphical, data. Although most of my attention is within my profession, I like to keep rough track of related areas, like those apparently being developed by Euclideon. Although about half of the IT folk with whom I’ve discussed it person-to-person, by my rough count, lean toward the conclusion that the company and its products are fraudulent (which isn’t as great a stigma as one might think, as folk like me who’ve worked in IT for several decades have often been involved in promoting what could be charitably called “vaporware”, uncharitably “fraudulently misrepresented” software), Euclideon is pretty well known, and a subject I like to bring up in casual conversation.

 

So I’ve had the UD engine on my mind, in a low-key, wait and watch way, since I first heard of it, along with most interested folk, in late 2010 – early 2011.

 

You also mention that there was no third party interaction or any videos of outsiders using the product, this is another thing that has changed.

:thumbs_up Big thanks for this link, Matthew. I’ll admit to a preference for text over video, and a faulty tendency to watch only the first video about something for a particular date, and so miss something valuable contained in another video. It’s a pity text transcripts are made of such a small fraction of videos, which would make their content more searchable and quickly consumable. :( For example, the best text SF and gaming enthusiast John Gatt appears to have written about his interview with Dell is this blog entry, which is mostly about the possibility of UD being used in the SWTOR video games.

 

Gatt’s interview is the first I’ve seen where it’s clear the UD graphics engine (excluding the possibility of a masterful tech bait-and-switch being pulled, which I don’t think is very likely) is being used interactively. I’ll give Dell his sought-after “wow”, and gratefully set aside worries that the UD demos are actually on rails.

 

Though the video inclines me to believe that Euclideon is not an overt scam, having more employees than just Dell (though none are shown in it) and a real, interactive graphics engine rather than fake, pre-rendered videos, it nowhere contradicts my most important point, which is that UD is intended to be a graphics engine, not a physics or biological modeling program. So I continue to think you critically misunderstand the subject, Matthew, and that your idea of buying the source code for UD and using it to develop physics and biological modeling programs – which I believe you call “the Joshua cell project” – is badly misguided.

 

Dell himself said point-cloud data is almost intrinsically linked to modelling chemicals for scientific study, and in emails I sent the company a few months ago, it was stated that using it for large-scale molecular modelling would not be a hindrance on the program or the user.

And nothing would be sub-atomic, just normal atomic lol.

What’s your source for this, Matthew? I’ve not read or heard Dell make any such claim.

 

I believe you’re confusing the use in graphics programming of the term “atom” – an alternative to “point” or “voxel” – with its use in physics and chemistry.

 

The common voxel density described for UD is 4 per linear mm (4 × 10^3/m, 1.6 × 10^7/m²). A silicon atom (the major element found in rocks) has a density of about 1 per 22 nm (4.5 × 10^7/m, 2.1 × 10^15/m²). Since the number of points in a point cloud is proportional to its area, this means that were its voxel size scaled down to that of physical atoms, it would require on the order of a hundred million (10^8) times the storage.
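For anyone who wants to check the arithmetic, here is a back-of-envelope version in Python. The densities are the rough figures quoted above from this thread, not measured physical constants:

```python
# Back-of-envelope check of the densities quoted above.
# These are the thread's rough figures, not measured constants.
voxels_per_m = 4e3          # UD: 4 voxels per linear mm
atoms_per_m = 4.5e7         # silicon: ~1 atom per 22 nm (as quoted)

# Point clouds model surfaces, so point counts scale with area.
voxels_per_m2 = voxels_per_m ** 2   # ~1.6e7 per square metre
atoms_per_m2 = atoms_per_m ** 2     # ~2.0e15 per square metre

storage_ratio = atoms_per_m2 / voxels_per_m2
print(f"atomic resolution needs ~{storage_ratio:.1e}x the storage")
```

Running this gives a ratio of roughly 1.3 × 10^8, matching the “hundred million” figure above.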

 

More important than the difference in scale, models of physical atoms are concerned with their motion and interaction with one another. In point cloud graphics programs, "atoms", or points, simply have attributes of position, color, brightness, and whatever other properties are needed to render them.


It seems you've skipped the most important part of any process like this, which is to ask yourself: if it's so simple, why hasn't anyone else already done it?

 

Jumping to the conclusion that it's obviously because you're a genius just engenders visions of Wile E. Coyote rubbing his paws in self-satisfied feelings of brilliance as the anvil heads straight for his gigantic, oversized noggin.

 

Don't want to discourage you, of course. You may be on to something, but skipping a bunch of obvious steps shows a lack of "real work" and is not terribly effective in convincing others of the brilliance of your ideas.

I didn't skip that step. The reason no one has done this before (although they have...) is that no one has access to the software (well, people with money and established teams do, which is how they got alpha versions of it). There is no genius involved here; I know other people have had similar ideas and, again, they are already starting on this. I just wanted to lay claim as the first person to make this public so I can work my way onto a team. Explain to me how getting a jump on the other researchers makes this whole thing a "lack of 'real work'". If anything, I am working ahead of others.

 

I remembered it.

This fascinates me, and I bow my head in respect to you, because my memory is absolutely terrible. I have to take ginkgo biloba just to help me remember whether I have class after I wake up. I had honestly forgotten about my previous post here a year ago, and so, again, you are amazballs.

 

:thumbs_up Big thanks for this link, Matthew... it nowhere contradicts my most important point, which is that UD is intended to be a graphics engine, not a physics or biological modeling program. So I continue to think you critically misunderstand the subject, Matthew, and that your idea of buying the source code for UD and using it to develop physics and biological modeling programs – which I believe you call “the Joshua cell project” – is badly misguided.

'Tis not, good sir. We wouldn't need a programmer team if we were just using the engine as it came. The physics and chem modelling would be tacked onto it (or really built inside the source, not on top of it). True, this whole thing may be badly misguided; I wrote the old article at 2 in the morning, and I wrote this newer one again at 2 in the morning, but that doesn't mean it's not salvageable. Also, it doesn't really have a name; it's just a concept.

 

I would like to tackle this difference in principles we are having here, though, as it appears to be the same one we had in the last thread. I'm going to jump to the bottom to clarify this, after the ***.

 

What’s your source for this, Matthew? I’ve not read or heard Dell make any such claim.

 

I believe you’re being confused by the use in graphics programming of the term “atom”, as an alternative to “point” or “voxel”, with its use in physics and chemistry.

 

More important than the difference in scale, models of physical atoms are concerned with their motion and interaction with one another. In point cloud graphics programs, "atoms", or points, simply have attributes of position, color, brightness, and whatever other properties are needed to render them.

My source is videos I've seen and email chains between me and the company.

I'm using "atom" interchangeably because the points would be representatives of individual atoms. And yeah, they have very few properties to start out with, but that doesn't mean we can't create new classes or assign them new properties within the code.
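To make that concrete, here's a minimal sketch of the kind of extension I mean. This is hypothetical Python, not anything from UD's actual source; the class and field names are mine:

```python
from dataclasses import dataclass

@dataclass
class RenderPoint:
    # roughly the attributes a point-cloud engine stores per point
    x: float
    y: float
    z: float
    r: int
    g: int
    b: int

@dataclass
class PhysicalAtom(RenderPoint):
    # extra state a chemistry/physics model would have to add
    element: str = "Si"
    vx: float = 0.0      # velocity components
    vy: float = 0.0
    vz: float = 0.0
    charge: float = 0.0
```

Adding fields like this is the easy part; the interaction rules that update them every timestep are the real work.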

 

***

 

Okay, this is the perfect feedback I was looking for; thanks again, guys. HERE'S WHAT'S HAPPENING WITH IT:

 

I've re-written a lot of my original document to make me sound less like a Bret Harte narrator and to include some details I was being a little obscure about. I've made a sort of aggregate summary of all the flaws and problems you guys have pointed out and included it right after the first section to show objections to and problems with the concept.

 

There were a few equations thrown into the previous thread illustrating just how many atoms are in a single drop of water, and the processing power needed to model a single drop (these will be included in my revision). I'm meeting you guys halfway and am adding a retort to the retort in my paper, scaling down the project to something more manageable. I agree that, even though I still think UD is amazballs, there are hardware limitations, and Craig's concern is code limitations (which I still feel can be added into the program). So here's my new proposal:

 

Given everything everyone in this thread has read and said, what if we took the same type of concept, using the same engine, and used it on much more isolated parts? I think this is what Buffy was pushing. Instead of trying to create a whole initial cell, would you guys agree that creating maybe part of the matrix of a mitochondrion, and running that, would be a much more manageable project to start with? And maybe making a bunch of these smaller snippets of the cell and analyzing those would be actually doable and manageable, and something we could start on today given everything we have now? I know there may be objections like, "Even just a few membranes and some fluid is complicated, blah blah", but consider the following: an isolated portion of an organelle is something that the hardware and software CAN manage, and it's something we COULD model, so permit me to say that an isolated part of the cell is doable with the original concept. Can we agree on that?


Given everything everyone in this thread has read and said, what if we took the same type of concept, using the same engine, and used it on much more isolated parts? I think this is what Buffy was pushing. Instead of trying to create a whole initial cell, would you guys agree that creating maybe part of the matrix of a mitochondrion, and running that, would be a much more manageable project to start with?

 

Well, I certainly think that you'd be better off starting with something smaller! :cheer:

 

But to repeat for clarity: the problem is not tweaking your equations a bit to make the estimate a little larger. The model you are using, as well as the software, is wholly inappropriate for the task.

 

Again, an n-tuple is not an algorithm (even Lisp recognizes this, even though its programs are represented as lists). The complexity you need to deal with is that every element must be not only a set of data but a process, and these processes interact in real time. That's not the same as having a static database of tuples that get processed (even if you have multiple processors).
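As a toy illustration of the difference (pure Python, not related to UD's code): a point record is inert data, while even the simplest physics model must make every element a process that interacts with every other element at each timestep.

```python
# A point in a graphics database: static data, nothing happens to it.
static_point = (0.0, 0.0, 0.0, 255, 255, 255)   # x, y, z, r, g, b

# A physics model is a process: every element interacts with every
# other element at every timestep. A toy 1-D pairwise-spring model:
def step(positions, velocities, dt, k=1.0):
    n = len(positions)
    forces = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i != j:
                forces[i] += k * (positions[j] - positions[i])
    velocities = [v + f * dt for v, f in zip(velocities, forces)]
    positions = [p + v * dt for p, v in zip(positions, velocities)]
    return positions, velocities
```

Even this toy version is O(n²) per step, and a drop of water holds on the order of 10^21 molecules; that is why a static tuple database plus fast rendering does not get you a molecular model.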

 

You're of course welcome to continue to follow your current course, but if your goal is to get smart people interested in working on your project, demonstrating so clearly that you do not understand the nature of the problem you're trying to solve doesn't look too good.

 

Furious activity is no substitute for understanding, :phones:

Buffy

