Science Forums

The Digital Demise Of Darwinism


Recommended Posts

@greylorn. You declare that this thread indeed should be about abiogenesis, yet spend post #49 talking about evolution. That is not helpful. If you wish to discuss evolutionary mechanisms in a parallel thread I can move that post to that thread. I would be most happy to refute your arguments there. Otherwise I am not about to confuse the intended topic of the thread, which is discussing Paul's thoughts on abiogenesis.


Eclogite,

 

Get ready to be displeased with the following reply.

 

Yes, I know the current theory reasonably well. And, I've taken 6 credits in probability theory, with a B grade. Maybe it was B-. And I adore common sense.

 

It does not make sense to me that a random sequence of events, e.g. the arrangement of DNA molecules and magical appearance of a functional living cell, full of nifty little cellular mechanisms needed to make it work, can give rise to a larger-scale non-random process. It makes no sense whatsoever, but even worse, the math simply does not work.

 

Your incredulity isn't evidence of anything...

 

The probability for the random occurrence of a smallish 900-base-pair gene is a trivial calculation---1.4x10^-542. That is a generous calculation. When complexities are included, it becomes more difficult to calculate, but yields an uglier result. I would have thought before I joined any science forums that any honest and competent scientist, who knows that the "Can't happen" level is a probability of 10^-40, would immediately see the absurdity in the notion that a single useful gene could evolve by random processes so as to produce a single useful protein, but that is not the case. When the brain gets to running a program, it keeps running that program, whether it is the brain of a scientist, educated science camp-follower, or Jehovah's Witness.

 

Seriously greylorn, odds have nothing to do with this. Atoms do not form molecules at random, it's chemistry and the combinations are limited by chemistry not some long shot odds...

 

I'd have thought that the further application of math would totally void any rational man's belief that random events could produce living bodies. The average size of a human gene is not 900 base-pairs, but 1200, and they max out at around 1500 bp's. Naturally the probability for any of the larger genes gets even uglier. Must I do the calculation?

 

Again, your "calculations" would be meaningless since chemistry is not random...

 

But suppose that we make the simplifying assumption that the average gene size is only 900 bp's. There are about 23,000 genes in the human body. From basic math theory we know that probabilities multiply. Thus the probability for the random arrangement of all the protein-making genes in the human body (only 2% of the entire genome) is 8.8x10^-12,462,640. Anyone capable of understanding the negative magnitude of such a number who can still say, "Yep, that could have happened, given enough time," is, in my completely disrespectful opinion, either a fool or an idiot or some combination thereof. If that applies to you, kindly do not take it personally--- instead, look at the numbers for yourself. Do your own research. Perform your own calculations.
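For readers who want to check the arithmetic, the post's figures can be reproduced in a few lines. To be clear, this only reproduces the poster's own assumption, disputed downthread, that every base pair is an independent 1-in-4 random draw; it is not an endorsement of that assumption.

```python
import math

# The post's (disputed) assumption: each of 900 base pairs is an
# independent 1-in-4 draw, so a specific gene has probability 4^-900.
# Work in log10 space, since these numbers underflow ordinary floats.
log_p_gene = -900 * math.log10(4)            # ~ -541.85
exp = math.floor(log_p_gene)
mantissa = 10 ** (log_p_gene - exp)
print(f"P(one 900-bp gene) = {mantissa:.1f}e{exp}")   # 1.4e-542

# Multiplying the rounded figure across 23,000 genes, as the post does:
log_p_all = 23000 * (math.log10(1.4) - 542)
exp = math.floor(log_p_all)
mantissa = 10 ** (log_p_all - exp)
print(f"P(all genes) = {mantissa:.1f}e{exp}")          # 8.8e-12462640
```

The arithmetic checks out; as the replies note, whether independence of draws is a valid premise is the entire dispute.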

 

More meaningless calculations...

 

Now I am willing to believe in miracles, but not in Harry Potter magic. I think that we are all compelled to accept the reality of at least one miracle--- that anything at all exists. After that miracle-admission, cosmology becomes (by my theories anyway) a matter of finding the fewest and simplest set of miracles from which a universe might come into being. But that is another topic, so...

 

Miracles are by definition magic...

 

Back to point. Belief in an occasional Miracle (I prefer three, personally) is required, but belief in absurd magic is for children. No matter how adept at psychic skills one might be, there is a line. I do not believe that any number of Gods can wave magic wands and turn rats into teacups, or "nothing" into a universe. The very notion is disrespectful of basic physics as well as common sense. My unwillingness to believe in absurd levels of magic, or repetitive complex miracles, means that I will not accept a theory that tells me that the mechanisms leading to my existence stood such an absurdly tiny chance of actually happening.

 

Give the long odds argument a rest dude, it's nonsensical...

 

And in this simple presentation I've ignored the complexification of accounting for the other critters to which our planet plays, and has played, host, including the humble fern, which has a larger genome than humans. The numbers yielded from keeping it simple are absurd enough for me.

 

And again your incredulity is not evidence of anything..

 

So, I completely reject the notion that critters evolve as a result of random DNA mutations. You or someone else once claimed (as I've heard elsewhere) that there are other processes which serve to reduce the absurd probabilities I've presented here. But, no one has detailed these processes, or demonstrated a different mathematics.

 

Sigh...

 

I recall that someone said that there is a process whereby an entire long chain of genetic material can substitute for a different chain within a gene. I knew that. It does not change the probabilities except to make them worse by figuring in the likelihood of a successful large-scale genetic swap, an event that seems to be chemically akin to transposing two adjacent villages, complete with villagers-- but I've not run the math on that one.

 

Again, odds have nothing to do with it...

 

The issue of "Natural Selection" is the Darwinists' red herring. It is as irrelevant as it is inevitable. Let us suppose that an Almighty God created every critter on the planet, and its predecessors. They would still be subject to natural selection. If God made a beastie that was mostly blind, could not hear, had four tiny slow-moving legs, no instincts for self preservation, was highly efficient at turning grass into delicious meat, and placed it in the heart of Africa, would we ever know?

 

There is no evidence of anything popping into existence like that, and yes, it's probable we would find it and know it wasn't part of Earth life.

 

N.S. applies irrespective of the creation process. It applies to every product that appears in any marketplace, including the marketplace of biological life.

 

Good to know you do understand natural selection...

 

Now, just because I find that the agreed-upon mechanisms for evolution are absurd does not imply that I am a creationist. I do not believe that creationism vs. Darwinism is an either-or choice. I have a much more credible alternative, IMO. If my ideas are ever examined and disproven, then I or someone smarter will find the truth. It does not lie in Darwinism, or in the shmoo-like amalgamation of ideas that continue to ignore the evidence, no matter how mealy-mouthed a name the amalgam is given.

 

I suggest you tell us what it is instead of trying to baffle us with horse feathers..

 

You have been challenged. I hope that you reply with real numbers if you want to contradict mine, not a reference to a hand-waving agreement system that is essentially just another religion. I don't care if you like me or approve of my style-- this is supposed to be a science-based forum. So show me some science. Refute my numbers with numbers.

 

You haven't challenged anyone, all you have done is make empty claims...

 

Ultimately I wish to see a refutation of the evolutionary back-tracking problem that Michael Behe explained in The Edge of Evolution. Another conversation, perhaps.

 

 

Michael Behe has been discredited... He was discredited in open court BTW!


 

 

I'd have thought that the further application of math would totally void any rational man's belief that random events could produce living bodies. The average size of a human gene is not 900 base-pairs, but 1200, and they max out at around 1500 bp's. Naturally the probability for any of the larger genes gets even uglier. Must I do the calculation?

 

 

 

it's even worse than that. the average gene has about 3000 bases and the biggest, dystrophin, has 2.4 million. source: the human genome project

 

this will fuel your fire somewhat, i'm sure, but as moontanman has repeatedly pointed out, the long odds argument doesn't apply. i'm no chemist but i believe the notion of attractors has a large part to play in all this...


it's even worse than that. the average gene has about 3000 bases and the biggest, dystrophin, has 2.4 million. source: the human genome project

 

this will fuel your fire somewhat, i'm sure, but as moontanman has repeatedly pointed out, the long odds argument doesn't apply. i'm no chemist but i believe the notion of attractors has a large part to play in all this...

 

 

Can you give me some idea of what an attractor is? I am not familiar with that term.


i'll do my best...

 

 

an attractor is a mathematical concept and is defined as something towards which a variable evolves over time. it can be a point, a set of points or even a curve and if it involves fractals it becomes a strange attractor. wiki

 

i mentioned above that i'm no chemist... and i'm no mathematician either!!

 

the parallel i see with the appearance of life is that a certain factor becomes more attractive to other factors and accretion begins to take place. this attraction may be just a fleeting thing or it may become permanent. i'm sure there is a fixed terminology and set of equations for this in organic chemistry but i have no idea what they are or even how to begin looking for them. it's closely tied in with this idea of the non random appearance of tRNA, genes and stuff that troubles greylorn so much. it's not that nature tried a combination, didn't like it and broke it apart to start over again every single time until it all came together by chance. chemical bonds took place as they commonly do and these new compounds became attractors to others. obviously there were a huge number of these bonds and compounds that were unsuccessful, or unstable, impermanent or whatever, but eventually some of these became stable and pervasive enough to become the foundations of what we are made of.
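the attractor idea above can be sketched numerically. the logistic map used here is purely an illustration of a point attractor, not a claim about any real chemistry:

```python
# A minimal sketch of a (point) attractor: iterating the logistic map
# x -> r*x*(1-x).  For r = 2.8, every starting point in (0, 1) is
# drawn toward the same fixed point, 1 - 1/r = 0.642857...
def iterate(x, r=2.8, steps=200):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

for x0 in (0.01, 0.3, 0.9):
    print(f"start {x0:4.2f} -> {iterate(x0):.4f}")   # all converge to ~0.6429
```

however different the starting point, the variable "evolves towards" the same value over time, which is all the definition quoted from wiki requires.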

Edited by blamski

DW6,

 

Curious questions, but what the hell? Around the onset of puberty I undertook being liked. Knowing that this would be a new experience and would require new social tactics, I went from exemplary student to petty hoodlum, carrying a set of brass knuckles in the front pocket of my leather jacket and becoming a fairly competent car thief. The teachers who once loved me changed their mind, but the kids and girls I was trying to impress continued to see me as a nerd, albeit a useful nerd when they needed a ride. So that did not work. Throughout the rest of my life I undertook various strategies in an attempt to be liked, or even get laid. No luck, meaning no success, meaning failure. I hate failure.

 

A few ticks back in time a wise man with a horse told me that the easiest way to ride that critter was in the direction that it wanted to go. I got that. Since I look like Quasimodo without makeup, but think of things that others miss, I've always been good at alienating people. Being disliked and disrespected is, for me, riding my horse back to the barn from whence it came. I've learned to do the things at which I am competent, and see no reason to repeat childhood mistakes. This choice has proven to be greatly freeing. It is inherently honest, for I do not need to try to be someone I'm not. For the first time in my life, my social relationships are now successful. They work out exactly the way I planned. Well, not exactly. My approval rating of -14 on this forum is nowhere near my -1000000 target.

 

The last time I admitted to a mistake was this afternoon. Years ago I had a loaded self-built computer that crashed in a power outage. I figured that the power supply had died, and replaced it, solving the problem. But, feeling poor at the time, I had installed a cheap power supply. Come rainy season and the inevitable power outages and surges, the box died again. This time I bought the finest power supply I could find and installed it, but unlike before, the replacement heart did not resurrect my computer. I figured that the cheap PS blew the motherboard on its way out, and just gave up on the box. --- Until last week our little town manifested a really good computer technician, and for the fun of it I brought that box in to him. He called this afternoon to inform me that the machine ran perfectly--- after he plugged in a little 4-pin connector that I'd ignored. Dumb! What could I do but laugh?

 

Look, I've been making a living for a half century using computers in one way or another, including exotic classified projects, and not a day of work went by without a machine with an IQ of zero, that did nothing but what amounts to counting (rapidly and accurately) on what amounts to two fingers, telling me that I'd screwed up again. Machines and people have been telling me for most of my life what a nitwit I am, and I am in no position to disagree. I look up into the night sky, again, and see thousands of tiny lights fixed on the inside of the great iron globe that surrounds the solar system, and wonder who or what did the wiring. I listen to or read diverse opinions about every subject that I can study, and must conclude that I don't know squat about what's going on.

 

But instead of feeling sorry for my ignorance, I have decided not to try to correct it by adopting the ignorant ideas of others. That tendency, not my ornery personality, is what really annoys people. I've looked around this forum enough to see that there are many more abrasive people than me. They are accepted because their beliefs are mainstream.

 

Look around your intellectual universe and you will see multiple sides to every subject--- thousands of different religions (atheism included) whose members have but one thing in common--- the certainty that they are right. I look at Darwinists and religionists and find both sides and all sects thereof pretty much F.O.S. Each side ignores a different data-set than the other side. I insist upon a theory that embraces all data-sets. And I will not adopt a belief system that is clearly wrong just to be in agreement with a gang of intellectuals who ignore half the information at their disposal.

 

I cannot accept atheism because I've had way too many psychic experiences that atheists claim cannot happen. I can even create such experiences in others, when important for them. I cannot accept any known religious system because I am logical, and believe (one of few beliefs) that logic is independent of all possible manifestations of reality.

 

Of the things that tire me greatly, top of the list is chosen ignorance. Stupidity I can deal with, and usually do so by ignoring the stupid person, or teaching him if appropriate. It is the persistently ignorant who annoy me the most, and I seek ways to get them out of my life. That can be difficult, because I appear to have arrived on this crummy planet with a built-in dogmatic-nit magnet. Telling them simply that you wish them to go away and ignore you works about as well with such people as with blow-flies. Annoying them more than they annoy you is the only solution I've found, short of termination, a strategy that has not been widely approved for critters smarter than blow-flies.

 

I hope that this helps you to understand that my attitude is more defensive than personal. I've developed a keen sense of insult detection. A straight-up person or street person will lay his insults out clearly, for all to see. Forums like this, and university faculties, are well-populated with pseudo-intellectuals who hide their insults within sneaky innuendos. I will not let that crap pass. What might appear to you as an unprovoked insult from me is simply a reaction intended to induce the snarky insulter to get upfront with his complaints.

 

Now, I realize that the "last time wrong" admission I offered earlier is pretty trivial. Here is another. I began writing the best of my ideas down about 50 years ago, and have been working on the project with the goal of getting the ideas right, ever since. I actually did publish about 30 years ago, but because my presentation context was fiction, the only notable person who paid them any attention was Douglas Hofstadter. My limited insights have actually been used in several university classes about the nature of human consciousness. Back to point--- Ten years ago I was halfway through another book, 200,000 words already on page, when I realized that I'd made a fundamental error. Really fundamental! So I downloaded a pint of whiskey, staggered into bed, and the next morning my old ideas looked just as wrong as the night before. So I threw out everything I'd written and began again.

 

Now before we both go to sleep, I must write Eclo and correct another error. Thank you for your feedback. I'm kind of sorry that it was such a waste of your time.

 

This post clears things up a bit for me, thank you. I appreciate your honesty, it's one aspect of your delivery that I do agree with, and I can't take that away from you.


It does appear that this process of definition is exposing some clear misunderstandings on both sides that cloud the inner conflict.

 

 

Excellent! Clearing up misunderstandings is exactly what I hope we achieve in this conversation.

 

The interesting thing here is that even though you've done some restriction by that definition, the critical part of my statement still stands according to your revised definition: that is that it is "the mapping from the symbol to the physical entity...constitutes meaning in some mind," the essential point being that it is a mind that creates both the mapping and the symbol.

 

Interesting, I agree. It is even encouraging. You and I seem to be drawing closer to understanding one another. You are correct that the "critical part of [your] statement" is also critical to mine: that mind creates mapping and symbols.

 

Also critical to mine is the notion that it is mind and only mind that creates mapping and symbols.

 

But this still leaves out the most critical part of my assertion, and that is the causal aspect of the creation of a symbolism. That is, in some cases (the one at hand in particular), the creation of a symbol, and the appearance of the symbol in some special context, causally influence the behavior of a completely physical system such that the behavior will conform to some intended outcome anticipated by the mind that created the symbolism in the first place.

 

IOW the symbol "causes" something specific to happen that was planned to happen in advance by the mind.

 

Now the "special context" or the "physical system" may contain complex causal chains within it, for example your self-driving car fitted with a neural net computer and a set of servos, which eventually "learns" to stop at stop signs. One symbol in this example is the stop sign. But the mind, which is crucial and necessary, is that of the person who built and programmed the car, the neural net computer, painted the stop sign, and set the whole thing up. Without that mind, the entire system, IMHO, is impossible—unless of course, by some improbable occurrence of a tornado hitting a junk yard, all of the required parts just happened to assemble into a working system. (That analogy is not original with me and probably doesn't represent anyone's idea of a plausible mindless possibility. You may substitute your own favorite mindless possibility if you like).

 

Minds create models, which consist of conceptual symbols, and the usefulness of the model as a tool to the person with the mind is that it allows understanding (or not), based on the closeness of the model to the physical system.

 

Ahhhh. This helps elucidate our misunderstanding. We see a different "usefulness". You see it as helping the mind understand an observed physical system. And, of course, that's what scientists do so it is natural you would see it that way. But in the abiogenesis context (I have recently learned that that is what we are discussing), the usefulness is in figuring out how to construct the system in the first place so that it will work. It is the kind of symbolism you used when you made your drawings of the self-driving car, the circuit diagrams of your neural net computer, the CAD information you sent out to the circuit board fabricator, the verbal and written conversations you had with colleagues who collaborated on your project, and all the other myriad pieces of symbolic information produced before you ever powered up your self-driving car.

 

These various pieces of symbolism may be useful later on in the sense you described to explain to people at the science fair how your magic car works. They will look at the symbols and diagrams, observe your car in action, and exclaim, "Oh yes! I see how you did it."

 

But that usefulness is completely different, and different in kind, from the usefulness of the documentation and communication you and your team used during the development of your project. The former is unnecessary (except maybe to give the judges something to work with) whereas the latter is absolutely necessary for the successful construction of the car and its test track.

 

So, back to my original assertion, it is clear to me that the most plausible way in which the 60-odd sequences of DNA which contain the complements of the various species of tRNA molecules could have been originally constructed, is if some mind decided on the code, i.e. the mapping between codons and amino acid groups, and then somehow (yet to be figured out how) deliberately, with forethought and intent, placed the correct nucleotides in the right places in order to construct the "codebook". After that, the thing runs by itself, like your car, without the mind's involvement.
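The codon-to-amino-acid mapping under discussion can be illustrated with a few entries of the standard genetic code. This is a sketch only; the table below lists just four of the 64 codons, written as DNA triplets:

```python
# A fragment of the standard genetic code: the arbitrary-looking
# "codebook" mapping codons (DNA triplets here) to amino acids.
# Only four of the 64 entries are shown.
CODE = {"ATG": "Met", "TGG": "Trp", "GCT": "Ala", "TAA": "STOP"}

for codon in ("ATG", "GCT", "TAA"):
    print(codon, "->", CODE[codon])
```

The thread's disagreement is not over the existence of this mapping but over whether a mapping like it requires a mind to establish it.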

 

How else could it have happened?

 

Actually, I think you should follow your own lead here and use a more restricted definition: let's say a mind is self-aware (e.g. sitting on a tack is bad) being that is capable of creating models and using them. That's to the point and I don't think it's a point of contention here.

 

I'm not getting your point here, Buffy, except for the fun with your play on words.

 

I have never been impressed with the putative importance of "self-awareness" to the problem of consciousness. I don't think the mirror test reveals much of importance, for example.

 

Off the cuff, here are the aspects of consciousness that I think are important to my argument:

 

1. Perception – The mind needs the ability to perceive some part of the physical system in which it is working.

2. Understanding – The mind needs the ability to conceptualize cause and effect relationships in the various parts of the physical system.

3. Imagination – The mind needs the ability to conjure up desired configurations and behaviors in subsets of the physical system.

4. Design skills – The mind needs to be able to come up with a workable design to accomplish the construction of the desired configuration and cause the desired behavior.

5. Free will – The mind needs the ability not only to willfully attend to the perception, understanding, imagination, and design necessary to accomplish this project but also to take the steps necessary to implement the design.

6. Deliberate causal influence on the physical system – The mind needs some way of moving, or changing, or otherwise arranging the parts of the physical system in order to implement the design.

 

Whether or not the mind is aware of itself, or even aware of 1 through 6, is unimportant to making this work.

 

No problem, except that I'd note that you are in this definition inextricably linking "mind" with "meaning": that is you need a mind to see a meaning in an abstract.

 

Good. I do indeed inextricably link "mind" to "meaning".

 

If I replace a human in that car with a typical self-driven car, there are two fundamentally different ways to have that car deal appropriately with that Stop sign:

 

With respect, I do not think these two are fundamentally different. They are simply two of many thousands of other possibilities for accomplishing the same thing.

 

What's important to take away from this issue is that you can have designed physical systems that perform what appear to be complex symbol recognition and mapping onto behavior. But not only can systems that are *undesigned* replicate all the functional requirements of the designed system, we, with our minds and our ability to perceive multiple models (as we did with orbital mechanics above), can actually tell the difference between the two. As a result, not all systems that perform complex enough behavior have to be designed.

 

I don't agree. What is important here is not the appearance of the operation of the system. In your earlier sense of the "usefulness" of the symbols in the system, the observed behavior and the mappings of various symbols to physical parts are useful to the judges, or to the students, trying to understand your system once it has been designed, built, and is now performing.

 

But in my sense of "usefulness", what matters is the symbols and the mental activity that were involved in the original conception of the idea, the planning and design of the car, and the actual construction of the car and its support apparatus.

 

In my sense, the origination (design and construction) of the system is what is interesting and under discussion. You want to keep dragging my attention to the operation of the system after it is running.

 

Now I must refute your claim that your neural net system is *undesigned*. Did this system somehow arise in your laboratory without anybody ever talking about it, or writing about it, or proposing a strategy for designing it, or building host systems into which you could load up your self-designing software? If not, then I maintain that all these things constitute a "design".

 

I have no real problem with your explanation here, I'd just like to clarify that I've used "mapping" in the preceding explanation to map onto actions in a physical system.

 

OK. But I hope you now realize that I distinguish between two different uses of mapping: The mapping onto physical actions after the actions have taken place, and the mapping onto future physical actions that are anticipated and partially caused by the mind's will. (I say "partially" because I doubt that the mind can violate the laws of physics (no miracles allowed), so the designer has to be clever enough to exploit seemingly random events and somehow nudge them into compliance with the overall design. I am convinced that QM opens exactly this sort of window of opportunity.)

 

(to be continued. I blew out the size limitations.)


if you're into object oriented programming objects contain both data and methods and from the object's viewpoint, they're all the same. I don't think this violates anything in your definition here so I think we can proceed.

 

Good, let's proceed. I am familiar with OOP but not into it. It was invented shortly before I retired, and I never got into it. I thought it was a bad approach at the time and I still do, but hey, I'm no expert.
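For readers who, like Paul, are not into OOP, Buffy's point that an object contains both data and methods can be sketched in a few lines. The `Codon` class and its names are invented here purely for illustration:

```python
# An object bundles data (state) with the methods that act on it.
class Codon:
    def __init__(self, bases):
        self.bases = bases            # data: a string of bases

    def complement(self):             # method: acts on that data
        pairs = {"A": "T", "T": "A", "C": "G", "G": "C"}
        return "".join(pairs[b] for b in self.bases)

c = Codon("CAT")
print(c.complement())                 # prints GTA
```

From the object's viewpoint, as Buffy says, the data and the method travel together; callers deal only with the object.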

 

You might however want to think right now about what I've said in the preceding section about the *perception* of mapping, because it has a huge impact on your argument about mapping between codons and amino acid groups.

 

You'll be happy to know that I have given this a lot of thought, not just since you have written, but for the last 65 years.

. . . .

 

I said "If you accept my definitions, then, as my argument spells out, mind is necessary in order to establish the symbolism in which the codon is a symbol."

 

then you said (I have to learn how to do those embedded quotes.)

 

Absolutely!

 

With respect, I think you overlooked the all-important word 'establish' in my statement.

 

This is where I think you are completely missing the very important distinction I have been trying to make in the preceding grafs:

 

I do not know the word 'grafs' (I suspect it might be a contraction of 'paragraphs'; it certainly can't be 'giraffes', can it?)

 

But I need to quickly move on to learn what I am completely missing.

 

I absolutely agree with the notion that a codon is a "symbol" ONLY when a mind intervenes to identify it as a symbol. But you're missing the fact that context is exactly what the mind uses to perceive the symbol from its physical representation.

 

No, I am not missing that fact. Refer back to my list of necessary mental capabilities for the design of the genetic code system and look at numbers 1, 2, and 4. They definitely provide the mind with the physical context in which the design is to operate. And number 3 clearly involves the mind dreaming up the various symbols and the role they will play in the eventual system.

 

What you are missing is the fact that I am discussing the mind of an original designer while you want to discuss the mind of an observer of the system long after it has been designed and is now operating.

 

What is "CAT"? A codon? Computerized Axial Tomography? A small furry animal?

 

Actually, it is -.-. .- -
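The joke works because the same token reads differently under different codebooks. A minimal sketch (the three-entry table is only what's needed for "CAT"):

```python
# An arbitrary codebook: the same token "CAT" under the Morse mapping.
MORSE = {"C": "-.-.", "A": ".-", "T": "-"}

print(" ".join(MORSE[ch] for ch in "CAT"))   # prints -.-. .- -
```

Nothing about the letters C, A, T physically compels this output; it is fixed entirely by the codebook chosen, which is the distinction the two posters keep circling.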

 

Before you complain that I'm violating your definition by dealing with mapping symbols to symbols (and remember that you said that was kind of okay, but it wasn't your concern), I'd like to point out that you've *already* done that:

 

I have no complaint here.

 

WE define a codon as a combination of 3 nucleotides, and a nucleotide as a combination of a base with sugar and phosphate groups, and we put all these names (symbols) on them because it makes it easier to understand.

 

My complaint lies here. You have emphasized my complaint with your capitalization. My complaint is that you insist on referring to us contemporary humans (WE) as the minds we are discussing. I am trying to get you to consider a mind that predates life and was possibly involved in the ESTABLISHMENT of the genetic code (sorry about the use of caps but I followed a precedent).

 

But for the physical system, it's just a whole bunch of atoms that hang together because physics says they can.

 

Most true.

 

So when you claim that I'm saying that "The symbolism is all imaginary and has no impact or effect on any physical system," you got it. Absolutely!

 

Good. We may be making progress in communicating. Now let me place your statement in the two different contexts that I am about to turn blue in trying to point out.

 

In your context, i.e. of the minds of interest being those of contemporary human observers, "The symbolism is all imaginary and has no impact or effect on any physical system,"

 

In my context, i.e. of the minds of interest being those of the original designers of the genetic code, "The symbolism is all imaginary and has [a serious and long-lasting] impact [and] effect on [all living] physical system,"

 

 

 

But this is exactly what you just claimed you believed: that a mind is necessary to map a symbol onto a physical system. Any physical system can be observed by a mind to produce any number of arbitrary mappings onto symbols, and we can go further and utilize those symbols in a model to make predictions about what might happen in that physical system under various conditions.

 

True. But once again, for emphasis, I am not talking about observers; I am talking about designers.

 

What you appear to be doing here is to try to say that if a mind can identify a "symbol" in a physical system, then the existence of that symbol is intrinsic evidence that the physical system onto which it maps was created by a mind.

 

Well I suppose that could be construed to be what I am trying to say. To make it correct, I would say that if a mind can identify a "symbol" in a physical system in which the symbol produces an effect as a result of the mapping of the symbol and not as a result of the symbol's physical representation, then the existence...

 

Now, you will no doubt point out that all effects have physical causes, but we observers can distinguish between a mapping according to an arbitrary codebook and a mapping that depends on the physical characteristics of the symbol itself. If you can't conceive of that difference, it's no wonder I am having such trouble trying to explain myself.

 

But you have tried to turn this into an equivalence (in logic terms, asserting the converse implication) by saying that since a codon is a symbol, it is evidence that the codon must have been designed.

 

Would it help if I agreed to change "evidence" to "a clue" and "must" to "might"?

 

Here's the thing: if you've been following what I've been saying and keeping an open mind, you'd see that my Bible Code example from the previous post was not a red herring at all. In fact, it is interesting precisely because it closely resembles the exact example you are trying to use: the perception of codons as symbols, and my contention that that perception is simply the activity of a mind doing the perceiving. To the physical system, it is all just letters; we can see all sorts of patterns in it that may or may not have anything to do with the physical system's operation.

 

I have been following you and I have been keeping my mind open. But let me use this to make yet another attempt to show the important distinction I am trying to present.

 

Bible codes do not resemble my example; they resemble yours. The "mind doing the perception" in your case is the Bible code reader, the biologist, the human observer. The "mind doing the perception" in my case is the non-human ancient original designer of the genetic code. That's what my essay was all about.

 

Now I can apply my distinction to humans as well, and for the moment forget about the ancient designer. The distinction is between you as a designer of your automatic car (assuming you are the original designer) and the observers of your car as it magically negotiates among the stop signs.

 

Maybe I should put my argument in a nutshell, with due priority given to Paley:

 

We humans are familiar with systems we have designed. We know, if we think about it, that for any very complex system we design, we must use symbolism in many different forms. Many of those symbols, represented by specific physical configurations, appear as functional parts of our designed systems. We observe the protein synthesis mechanism and see that it uses symbols as functional parts. Since symbols, and in particular functional symbols within a complex system, require a mind in our experience, it seems reasonable to suspect that a mind was involved in the establishment of the symbolism within the protein synthesis process.

 

That is not a proof but simply an argument that if you dismiss out of hand the possible involvement of a mind in the process of abiogenesis (see how easily I conform?) then you have limited your chances of finding an explanation for a very difficult problem in biology.

 

 

So, did "Mother Nature know what she was doing?" Well, it is *certain* that the physical world does not *require* her to have known.

 

*certain*??? How can you be so certain?

 

Now I sense at this point that you at least recognize this unconsciously

 

Your sense did not betray you. I am certain of nothing—except for one statement alone. That statement is that "Thought happens". All else I can, and usually do, doubt.

 

(to be continued. I blew the size limit again. Hopefully one more will do the trick.)


 

Now the "special context" or the "physical system" may contain complex causal chains within it, for example your self-driving car fitted with a neural net computer and a set of servos, which eventually "learns" to stop at stop signs. One symbol in this example is the stop sign. But the mind, which is crucial and necessary, is that of the person who built and programmed the car, the neural net computer, painted the stop sign, and set the whole thing up. Without that mind, the entire system, IMHO, is impossible—unless of course, by some improbable occurrence of a tornado hitting a junk yard, all of the required parts just happened to assemble into a working system. (That analogy is not original with me and probably doesn't represent anyone's idea of a plausible mindless possibility. You may substitute your own favorite mindless possibility if you like).

 

That analogy was coined by Hoyle. He refused to believe that life could have come about by random chance, but his analogy was fatally flawed. Life did not come about by random chance; Hoyle's claim that the odds of a cell coming together are illustrated by a tornado sweeping through a junkyard and assembling a 747 is nonsense.

 

Chemistry and physics are not random; molecules do not come together by random chance. There are only certain possibilities that work, and those possibilities only work in certain definable ways. Random chance has nothing to do with it, qdogsman. You can ignore my argument and me if you want, but all you are doing is making yourself look uninformed.

 

There is yet another aspect to this as well: the chemical combinations didn't start from scratch each time; they built on what came before. And the Earth's oceans are quite capable of running billions of chemical combinations per second. The basic building blocks of life are easily made under what is thought to have been the conditions of the early Earth; in fact, several different scenarios of what the early Earth might have looked like still work.

 

And finally, your argument presumes the idea of a modern cell; most people who follow this line of argument use the formation of a fully functioning eukaryotic cell from random chance as their yardstick. But let's assume you are more reasonable and are only talking about the simplest of known life forms; the analogy still fails. The first cells were orders of magnitude simpler than even the simplest modern cell. Modern simple cells like bacteria are extremely complex and not representative of what the first reproducing chemicals were like.

 

I do have a few videos by scientists that illustrate this quite well; they do it far better than I can do it justice. And even though there are several competing hypotheses, my personal take is that a synergy of three or more pathways resulted in the first cells.

 

These cells probably wouldn't have been recognized as living by us; division was accomplished by physical processes instead of the biochemical processes that drive cell division now.

 

I will again state my assertion that DNA is not information; it is chemicals reacting, governed by the laws of physics, not random chance. Random chance has nothing to do with evolution or abiogenesis.

 

I suggest you read very carefully what Buffy is saying. I am smart enough to know she is head and shoulders above me in education, if not raw intelligence as well. I don't think I've ever won a debate with her; if she says something, I listen. (No, Buffy, I'm not smooching, just stating the facts.) You seem to think Buffy can be trivially falsified... No, her arguments in this case are spot on...

 

 

 

Ahhhh. This helps elucidate our misunderstanding. We see a different "usefulness". You see it as helping the mind understand an observed physical system. And, of course, that's what scientists do, so it is natural you would see it that way. But in the abiogenesis context (I have recently learned that that is what we are discussing), the usefulness is in figuring out how to construct the system in the first place so that it will work. It is the kind of symbolism you used when you made your drawings of the self-driving car, the circuit diagrams of your neural net computer, the CAD information you sent out to the circuit board fabricator, the verbal and written conversations you had with colleagues who collaborated on your project, and all the other myriad pieces of symbolic information produced before you ever powered up your self-driving car.

 

This is just obfuscation of the issue. The main issue, the real gist of this argument, is this: "Is DNA information?" This applies to all the other aspects of the cell as well.

 

While abiogenesis theory is nowhere near as robust as evolution, it is well supported, and many possible pathways to reproducing chemicals have been identified. You talk as though it is a complete mystery; this is not an honest appraisal of the science...

 

Show me why DNA is information and then you can go on with your assertions, but without showing DNA is information you are just spinning your wheels...

 

:edited for sp:

Edited by Moontanman

The probability for the random occurrence of a smallish 900-base-pair gene is a trivial calculation---1.4x10^-542. That is a generous calculation. When complexities are included, it becomes more difficult to calculate, but yields an uglier result. I would have thought before I joined any science forums that any honest and competent scientist, who knows that the "Can't happen" level is one shot in 10^-40, would immediately see the absurdity in the notion that a single useful gene could evolve by random processes so as to produce a single useful protein, but that is not the case.

 

Thanks for giving an example computation that convinces you.

 

Let me disabuse you of the argument.

 

It is quite obvious, from both your result and the fact that you call it a "trivial calculation", that you are using simple combined probabilities of INDEPENDENT EVENTS.

 

"Independence" in probability means that the individual events are unrelated and do not affect one another. An example of this is coin flips: while the chance of getting heads on 1 flip is 50% (0.5 probability), the chance of getting 10 heads in a row is very, very low, [math] 0.5^{10} \approx 0.001[/math]. That's how you get the absolutely astronomical odds upon which you base your argument in this quote.
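To see where numbers like these come from, here is a quick Python sketch (my illustration, not from the thread) that reproduces both the coin-flip arithmetic and the 900-base figure under the same independence assumption: 4 equally likely bases at each of 900 positions gives [math]4^{-900} \approx 1.4 \times 10^{-542}[/math], the very number quoted above.

```python
import math

# Ten independent fair coin flips: independent probabilities multiply.
p_ten_heads = 0.5 ** 10
print(p_ten_heads)  # 0.0009765625, i.e. about 1 in 1024

# The same independence assumption applied to a 900-base gene
# (4 equally likely bases per position) gives 4**-900. That number
# underflows an ordinary float, so work with base-10 logarithms.
log10_p = -900 * math.log10(4)
exponent = math.floor(log10_p)
mantissa = 10 ** (log10_p - exponent)
print(f"{mantissa:.1f}e{exponent}")  # about 1.4e-542
```

The point of the sketch is only to show that the astronomical figure is nothing more than the independence assumption applied 900 times over.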

 

The problem is that changes to DNA are not at all independent:

 

  • Changes to one base pair in a single generation are highly correlated with other base pair changes because they come from a common underlying cause. The changes can come from a flaw in the tRNA, which causes similar transcription errors to occur during the transcription process.
  • Similarly, changes in the environment of the cell can cause common transcription errors that produce changes in multiple locations.
  • Most importantly, however, DNA operates as if it has "subroutines": there are different versions of subroutines throughout the DNA, normally in "junk DNA" segments. Changes to a single base pair can switch entire segments of DNA on and off.

As a result, a single base pair change can cause a cascade of not only other changes, but changes that have seemingly "intelligent" consequences because of their sophistication. In the terminology of probability, the "random changes" to base pairs are NOT independent, and thus one must apply Bayes' Theorem to compute the combined dependent probabilities. To be brief, what Bayes is all about is that you end up with sets of dependent events that, if they were independent, would be virtually impossible, whereas when you take into account their dependence upon one another, the "impossible" outcome becomes nearly certain to happen.
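To illustrate the difference numerically, here is a toy Python sketch. The probabilities are made up purely for illustration (nothing about real mutation rates is implied); it simply contrasts multiplying marginal probabilities, the independence assumption, with the chain rule for dependent events, P(A1..An) = P(A1) × Π P(Ai | Ai-1).

```python
# Hypothetical numbers: ten events that each look rare in isolation,
# but where each event makes the next one far more likely.
p_first = 1e-3          # probability of the first event on its own
p_given_previous = 0.5  # probability of each later event given the one before

# Treating all ten events as independent multiplies the tiny marginals:
p_independent = p_first ** 10

# The chain rule uses the conditional probabilities instead:
p_dependent = p_first * p_given_previous ** 9

print(p_independent)  # about 1e-30
print(p_dependent)    # about 2e-6, some 24 orders of magnitude larger
```

The same arithmetic explains why treating correlated changes as independent can understate their joint probability by hundreds of orders of magnitude.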

 

But the important point here is that the "trivial" combination of independent probabilities is wholly and completely inappropriate for calculating the probabilities in the evolution of DNA.

 

Doing so makes every conclusion you draw from such an argument utterly meaningless and invalid.

 

Just because Behe and Dembski follow this path does not make it legitimate; in fact, it is at the core of why most people who understand these things think of them as charlatans.

 

I've read enough of Dembski to think that he may just be incapable of accepting that he's wrong. I've read enough and seen enough video of Behe to be convinced that he knows he's wrong but he also knows that if he admits it, his whole body of work comes tumbling down, and he's gotten very good at avoiding the issue and dissembling and sidelining when people try to pin him down on it. Some of the video of the Kansas court case is pretty amusing and if I have time to go back and look at it again, I'll try to point out some cases of this.

 

The point I'm making here, though, is that the argument you make in the paragraph above is in the realm of "not even wrong" and will bring you nothing but grief around here, or any place else, because it indicates that either you wasted those 6 credits in probability and really didn't understand the section on independent probability and the application of Bayes' Theorem, or you do understand it and, like Behe, you're just trying really hard to repeat the same falsehood enough times that the folks who don't understand it will think it's true.

 

 

When a person cannot deceive himself the chances are against his being able to deceive other people, :phones:

Buffy


Qdogsman, do you seriously think that gaps in our knowledge point to an unknown creator?

 

Yes.

 

How does the concept of "god did it" or an unknowable "unknown did it" contribute to the sum total of human knowledge?

 

It doesn't contribute directly, but by opening up new possibilities for inquiry, it opens the way for those new inquiries to produce new knowledge.

 

Our entire first-world civilization is based on the scientific method; the idea that something is mysterious, and that the mystery is therefore somehow an answer, has never advanced human knowledge in any way.

 

I disagree. Think about the many mysteries in ancient times that had to do with vital issues affecting survival. It was believed that the heavenly bodies influenced the behavior of earthly processes and events. As a result of these (now known to be ludicrous and erroneous) beliefs, a great amount of attention and energy was spent studying and documenting the positions and movements of the stars, planets, etc. This data proved to be instrumental in the discoveries of Kepler and Newton.

 

I am not surprised the gap, as you call it, is bigger. We know quite a bit more about the universe now than we did, and as knowledge increases so too does the unknown, because an answer always brings to light new questions.

 

I agree. You seem to be supporting my position here. Since new questions are brought to light, it only makes sense that we should begin investigating them. We need to open our scientific horizon, not close it off.

 

If we had just accepted Newton's theory of gravity and assumed anything else was an unknowable mystery, we would never have had the questions that came about after Einstein's theory of relativity. All big answers open up new areas of questioning.

 

My point exactly. We need to identify and explore those new areas of questioning.

 

Your apparent need to ridicule science by stating it doesn't know the answers to everything is troubling.

 

I am surprised and somewhat dismayed that you could have inferred that "need" by what I have written. I am not aware of any time I have ridiculed science.

 

I have been critical, however, even extremely so, when I discuss egregious examples of where the official scientific community deliberately suppresses the work of sincere investigators who are working outside the official scientific bounds. Here are some glaring examples of this:

 

1. Barry Marshall was vigorously opposed by the official scientific "consensus" in his efforts to prove that stomach ulcers could be caused by bacterial infection. It seems highly unlikely to me that the huge business in manufacturing and selling antacid pills did not have a large influence on how grant money was spent, which in turn motivated the scientific "consensus" to take the position they did. In desperation, Barry courageously took it on himself to swallow a concoction containing Helicobacter pylori, thereby causing him to contract a severe case of stomach ulcers, which he subsequently cured with antibiotics. Although he was treated shamefully by official science, he was at least subsequently rewarded.

 

2. I am doing this off the cuff, so forgive me for not knowing names, dates, details, etc., but there was a researcher (a father-son team, I believe) who was convinced, as I too was at the time, that the continents had drifted apart. (In fact, I was personally ridiculed by classmates who were studying geology at the time and put down for being so ignorant of the "true" geologic processes.) The official science community vigorously opposed his investigations and caused him a great deal of difficulty, until he finally produced the proof that he was right. He didn't get the credit as far as I know, and science cleverly changed the name from the much-derided "Continental Drift" to the official and respected "Plate Tectonics". Again, shameful.

 

3. There was a man, forgive me again for forgetting his name, who spent virtually his entire adult life studying the barren landscape of central Washington state. He was convinced that the topography was formed by an enormous flood event. Again, the official science community gave him no support whatsoever and instead placed impediments in his way. It was only after satellites provided pictures from a high vantage point that it could clearly be seen that a giant flood had indeed taken place and he had been right all along. Then the scientists belatedly entered this forbidden area of inquiry and figured out what had happened and how. (The bursting of a big ice dam near Missoula.)

 

4. One of the most egregious, in my opinion, is the case of Dr. Money of Johns Hopkins. He was the official spokesman for the then-current position of official psychology, and he had convinced the scientific "consensus" that there was no gender difference among humans (except for some minor and insignificant anatomical and physiological differences) that was not caused by culture. As part of his work in hoodwinking the scientific community, as well as a significant, gullible part of the public, he deliberately falsified the data he accumulated relating to a case in which the circumcision of an identical twin had been botched and, as a result, the unfortunate boy lost his genitalia. Money seized on this rare opportunity to exploit the boy and his family and conducted a controlled experiment on them. He had the family raise the child as a girl, and I think he did some more surgery to try to achieve the anatomical effect. The deliberately falsified reports of the experiment were used for years to bolster the stupid and false position of official science until the boy finally rebelled (some years before his eventual suicide) and the truth came out. Most shameful.

 

5. Now we have the absolutely stupid position taken by the official scientific "consensus" that human activity is causing global warming. Since there is so much money involved here (much more than that of the drug interests in the H. pylori case), the official scientific "consensus" is not likely to admit the facts any time soon. Instead they tried the same ploy as the geologists and changed the name from AGW to "Climate Change". Curiously, they removed the anthropogenic reference from the name, but they still indoctrinate our youngsters and the gullible public with the ludicrous notion that CO2 is a poison and a pollutant. Again, shameful.

 

In such cases, if I thought ridicule would be an effective approach, I might ridicule such "science". But since I don't think so, I don't ridicule them in spite of how ridiculous the "science" is.

 

Science doesn't know everything... Of course it doesn't if it did it would stop B)

 

Most true.


this is a really interesting discussion, and i wish i was half as intelligent as the main protagonists and contributors. though i have my feet firmly planted in the natural evolution camp i like to think i am open minded enough to consider any other possibility. so if we are to consider this mind / creator scenario there are definitely questions that need to be asked that are thrown up by paul's above post. these are questions relating to the origin of the mind (the perennial creator problem), the length of time this mind was experimenting, and whether there are or have been other of these minds.

 

if we accept that this mind created life, and maybe the universe itself, where did it come from? we have to consider the idea that at some point there is an absolute origin. something had to spontaneously or randomly appear. if the fossil record shows a pattern of trial and error does it also show the point at which the mind stopped experimenting? or is it still experimenting? and if so, how can we sense it? as the search for exoplanets is revealing (or at least strongly suggesting) that planetary systems are the norm and not a rarity, we can extrapolate that there must be a huge number of rocky planets in stars' habitable zones and therefore conducive to life. might the same mind that made life on earth also have made life on those planets? or are there more of them? or did the one mind randomly choose planet earth?

Blamski,

 

Don't worry about your intelligence. You are the first person on this thread to open yourself up to an interesting possibility, and then to evaluate some of the questions to which that possibility gives rise.

 

The issue of the origin of minds is well covered in my book, and in a fairly simple manner. A new hypothesis is required. Well, it is not actually new. I believe that Galileo was the first to understand it, while Newton was the first to include it, implicitly, in his laws of mechanics. It is that two primitive but opposing forces are essential requirements of any physical event (e.g. the collision of two balls on a pool table, or the deflection of a freely moving electron by a magnetic field). Now, that part was explicit; implicit is that two things, each capable of manifesting one of the needed forces, are also required. This hypothesis is essential to my theory of the beginning.

 

That theory differs in some respects from your conclusions. For example, your opinion that there must be an absolute origin seems to suggest a Big Bang-like event, which is an event without a cause, and thus, IMO, a violation of the counterforce principles found in Newtonian mechanics. My theory does provide for the reality of at least two critical primeval physical events:

 

1.) One that creates minds.

2.) Another in which at least one of them becomes self-aware.

 

The theory does go on to explain other things that you would want to see explained. It inevitably proposes that no mind which normal humans would put into the God category has had anything to do with our planet, except perhaps to drop by at billion-year intervals to check up on progress. Minds, in my theory, are only peripherally related to the standard psychological notion of mind, which is too poorly defined to be of use. Suffice it to say that they are severely limited, but not so much as ours. My theory also predicts a universe teeming with intelligent life forms.

 

And yes, the fossil record is punctuated by fits and starts that IMO are nicely explained by a new engineering team taking over. (e.g. the Cambrian explosion, which gave Darwin some fits of his own.)

 

You are asking the right questions.


Rather than focusing on more abstract ideas such as digital vs. analog and symbolic representation, I think you’d do better to directly consider what I gather is the key material assertion of your essay: that self-replicating molecules, such as RNA, could not have evolved in a pre-biological world without artifice.

Thanks for the advice, Craig, but I doubt that I would "do better" by re-focusing my attention onto a subject I am not interested in. The only reason I focused on the abstract ideas was to establish a terminology which would allow me to communicate with the forum. I learned that I chose a confusing terminology so I attempted to correct that by being very careful and specific with Buffy about the definitions I meant. I hope that is now behind me.

I think you’re committing a critical “cart before the horse” kind of error in being uninterested in the chemistry of self-replicating molecules and abiogenesis in the context of your An Argument from Design essay, because were these subjects well understood, either affirming or negating the hypothesis that self-replicating molecules formed on Earth without artificial, “mindful” manipulation, you would almost certainly not be making the 3-step syllogism argument you make in that essay:

  1. Mind is required in order to produce a digital system,
  2. We find digital systems in living organisms
  3. Therefore mind was required to produce life

 

The science of abiogenesis aims to refute premise 1 of this argument. If any evolutionary biological theory of abiogenesis is true, the argument is vacated of meaning.

 

If just throwing elemental water, methane, ammonia, and hydrogen into a sterile apparatus, heating and discharging sparks into it (the Miller-Urey experiment) quickly produced RNA and DNA-using biological organisms, rather than just molecules including the 20 amino acids, we would not be having this conversation. Whether such a “mindless” experiment, actual or simulated, would, within the time constraint imposed by geological data, produce life, is of direct relevance to the argument that it would not, and that “mindful” procedures must be added to the experiment for it to do so.

 

Now I am concerned that you [Qdogsman] are conflating Darwinism - or, as I prefer, the Modern Synthesis, with abiogenesis. You remarked in an early post that the failure to account for the origin of life was a weakness of Darwinism. It isn't. You seem to repeat this belief in directing Buffy to think about the origin of the genetic code. We are not interested in the origin of the genetic code when discussing evolution. The origin of the genetic code is more or less equivalent to the origin of life, a related, but distinctly different topic.

I think we must acknowledge that the term “Darwinism” is used more widely than only as a synonym for the modern evolutionary synthesis, to refer to any process driven by natural selection or an analogous process, such as the fitness function in genetic programming.

 

Though abiogenesis theorists like Robert Shapiro (whose work we discussed in the 2007 thread The metabolism first model of the origin of life) are careful, like you, to avoid terming pre-living chemical evolution “Darwinian”, the scenario they describe, in which “proto-life” molecular organisms compete with one another for resources, “sustaining” themselves and experiencing “extinction”, is clearly “Darwinian survival of the fittest” in the more general sense recognized by most people familiar with the phrase.

 

As several hypographers have noted, different people see this thread heading in different directions, due to its mixture of concrete biological and abstract mathematical subjects. In my first post to it, I tried to steer it toward concrete biology, but now will address what I see as a key mathematical idea: the digital vs. analog dichotomy. I’ll be somewhat idiosyncratic, but think I can be helpful in clarifying the subject, and steering us toward consensus definitions of the terms.

 

First, the adjectives “digital” and “analog” apply to representations of natural numbers. I confine this part of their definition to the natural numbers because representing arbitrary integers and rational numbers requires the “-“ or “/” symbol, which is not inherent to the concept of digital or analog representations of numbers. “Digital” is not a synonym for “symbolic”, nor is “analog” an antonym. Numbers are represented with symbols, which can be used in analog or digital schemes.

 

Digital schemes for representing numbers have the general form (with an exception I’ll explain later):

[math]n = \sum_{i=0}^{\infty} d_i b_i[/math],

where [math]0 \le d_i < m_i[/math], [math]b_{i+1} = b_i m_i[/math], and [math]b_0 = 1[/math].

For efficiency, most digital schemes set all [imath]m_i[/imath] to a constant, [imath]B[/imath], known as the base of a positional numeral system, giving the special case of the previous general form

[math]n = \sum_{i=0}^{\infty} d_i B^i[/math]
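As an aside, the general mixed-radix form above can be made concrete with a familiar example in which the [imath]m_i[/imath] are not constant: hours, minutes, and seconds, where [imath]m = (60, 60, 24)[/imath], so [imath]b = (1, 60, 3600)[/imath]. A minimal Python sketch (my illustration, not part of the original posts):

```python
def decode_mixed_radix(digits, radices):
    """Evaluate n = sum(d_i * b_i) with b_0 = 1 and b_(i+1) = b_i * m_i.

    `digits` and `radices` are given least-significant first, and each
    digit d_i must satisfy 0 <= d_i < m_i.
    """
    n, b = 0, 1
    for d, m in zip(digits, radices):
        if not 0 <= d < m:
            raise ValueError(f"digit {d} out of range for radix {m}")
        n += d * b
        b *= m
    return n

# 1 hour, 30 minutes, 15 seconds as a mixed-radix numeral,
# with m = (60 seconds/minute, 60 minutes/hour, 24 hours/day):
print(decode_mixed_radix([15, 30, 1], [60, 60, 24]))  # 5415 seconds
```

A constant base is just the special case where every radix is the same, e.g. `decode_mixed_radix([1, 0, 1, 1], [2, 2, 2, 2])` evaluates the base-2 numeral 1101.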

 

For a representation scheme like the above to be written, each digit [imath]d_i[/imath] must be mapped to a glyph.

 

Analog schemes for representing and writing numbers have the following general form:

[math]n = {\bigoplus}_{i=1}^{n} X[/math],

where [imath]{\bigoplus}[/imath] is a concatenation operator analogous (no pun or tricky self-reference intended) to the [imath]\sum[/imath] summation operator, and [imath]X[/imath] is a unit glyph, such as a line segment.

 

So, assuming from here on the traditional number-to-glyph mapping, the same number can be represented

1101

via a digital scheme where [imath]B=2[/imath], meaning

[math]1 \cdot 2^0 +0 \cdot 2^1 +1 \cdot 2^2 +1 \cdot 2^3[/math]

or

XXXXXXXXXXXXX

via an analog scheme where [imath]X[/imath] is X
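Both decodings can be sketched in a few lines of Python (my illustration); note that each decoder requires its own codebook knowledge: the base [imath]B[/imath] for the digital scheme, and the unit glyph [imath]X[/imath] for the analog one.

```python
def decode_digital(s, B):
    """Decode a constant-base digital numeral, most significant glyph first,
    rejecting any glyph that is not a valid base-B digit."""
    n = 0
    for ch in s:
        d = int(ch)
        if not 0 <= d < B:
            raise ValueError(f"glyph {ch!r} is not a valid base-{B} digit")
        n = n * B + d
    return n

def decode_analog(s, X):
    """Decode an analog (tally) numeral: count concatenated unit glyphs."""
    if s != X * (len(s) // len(X)):
        raise ValueError("representation is not a concatenation of the unit glyph")
    return len(s) // len(X)

print(decode_digital("1101", 2))            # 13
print(decode_analog("XXXXXXXXXXXXX", "X"))  # 13
```

Both calls recover the same number, 13, from the two representations above.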

 

A key take-away from this short, provisional definition of digital and analog is that schemes of both kinds can be used to “encode” information by representing it as numbers, and that both require information to be “decoded”. In the case of the above examples, to decode the digital representation of the number, we must know that it uses a constant-base scheme and the value of [imath]B[/imath]. To decode the analog representation, we must know precisely the kind of concatenation operator it uses, and the value of [imath]X[/imath].

 

Another is that digital representations of numbers map uniquely, one-to-one, to the numbers they represent. The number represented by the base 10 representation 123 is also represented by 1111011 base 2, but not by 1031011 base 2, because even though it evaluates to the same number according to the definition above, the second representation violates one of its conditions.

 

Provisional definition in hand, we can now consider statements from AAfD, including that plant and animal genomes are digital, not analog.

A ten-kilogram stone "contains" or "carries around with it" the information that it weighs ten kilograms. That is analog information. By contrast, this string of thirty characters, 'The stone weighs 10 kilograms', contains the same information, but in this case the information is digital. To be symbolic, the information must be represented in some physical form, e.g. a string of letters, which has nothing to do with the physical system the information is about.

One of the key take-aways above is that both analog and digital representations are symbolic – that is, to borrow Qdogsman's turn of phrase, presented in some physical form which has nothing to do with the physical – or, I’d add, abstract, non-physical – system the information is about.

 

In this example, we could represent the weight of the stone not only via a string of glyphs (characters), where the meaning of terms appearing in the string such as “kilograms” is understood, but by a drawing of the stone including marks indicating a reference object, such as a 1 kg stone, where again, the meaning of the various marks are understood. The picture could be very realistic, even a photograph, yet remain a symbolic representation of the weight of the stone.

 

The volume of the stone could be represented digitally as “the stone contains 3700 cubic centimeters” (base 128, assuming the ASCII scheme), or by an assemblage of string wrapped around the stone and tied, then slipped off it. Either scheme requires that a human interpreter know how to decode it. In the latter, this knowledge is arguably more intuitive – the human need know nothing of the English language or the metric system – but it is, nonetheless, needed knowledge, and the use of string is no less symbolic than the use of language.

 

Another key take-away is that digital representations are unique. However, this isn’t true for representations of the weight of the stone like “the stone weighs 10 kilograms”. It can also be represented by “the weight of the stone is 10 kilograms”, and countless other variations.

 

A key claim made in AAfD is that DNA and RNA, along with the biological systems that transcribe and express them, are digital. By the uniqueness requirement from my definition, they’re clearly not, because the 3-base RNA codons that specify the amino acids to be added to a protein being assembled by the ribosome, or that the protein is complete and should be released by the ribosome, don’t map one-to-one to the amino acids. Of the 61 of the 64 codons that don’t indicate “complete, release”, only two – A[denine]U[racil]G[uanine][imath]\to[/imath][M]ethionine and UGG[imath]\to[/imath][T]ryptophan – map a single codon to a single amino acid. For the other mappings, from 2 to 6 codons map to each amino acid.
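The many-to-one character of the code is easy to tally from the standard genetic code table (the codon assignments below are the standard table; the Python around them is my own illustration):

```python
# Standard RNA codon table, grouped by amino acid (plus stop codons).
CODONS = {
    "Phe": ["UUU", "UUC"],
    "Leu": ["UUA", "UUG", "CUU", "CUC", "CUA", "CUG"],
    "Ile": ["AUU", "AUC", "AUA"],
    "Met": ["AUG"],
    "Val": ["GUU", "GUC", "GUA", "GUG"],
    "Ser": ["UCU", "UCC", "UCA", "UCG", "AGU", "AGC"],
    "Pro": ["CCU", "CCC", "CCA", "CCG"],
    "Thr": ["ACU", "ACC", "ACA", "ACG"],
    "Ala": ["GCU", "GCC", "GCA", "GCG"],
    "Tyr": ["UAU", "UAC"],
    "His": ["CAU", "CAC"],
    "Gln": ["CAA", "CAG"],
    "Asn": ["AAU", "AAC"],
    "Lys": ["AAA", "AAG"],
    "Asp": ["GAU", "GAC"],
    "Glu": ["GAA", "GAG"],
    "Cys": ["UGU", "UGC"],
    "Trp": ["UGG"],
    "Arg": ["CGU", "CGC", "CGA", "CGG", "AGA", "AGG"],
    "Gly": ["GGU", "GGC", "GGA", "GGG"],
    "Stop": ["UAA", "UAG", "UGA"],
}

degeneracy = {aa: len(c) for aa, c in CODONS.items()}
sense = sum(n for aa, n in degeneracy.items() if aa != "Stop")
print(sense)  # 61 sense codons out of 64
# Amino acids with a single codon:
print([aa for aa, n in degeneracy.items() if n == 1 and aa != "Stop"])
print(degeneracy["Leu"])  # 6 codons for leucine
```

The single-codon list comes out as ['Met', 'Trp'], and every other amino acid is reached by 2 to 6 codons, so the codon-to-amino-acid mapping is many-to-one, not one-to-one.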

 

More important to the arguments made in AAfD, I think, than whether D/RNA is digital, is whether it is artificial. AAfD compares Morse code to the D/RNA genetic code in reaching the conclusion that both are artificial. The argument for this is made most directly, I think, in this paragraph:

Now, finally, we are ready to ask the really important question. How did the assignments in the genetic code get made? What is the equivalent of Samuel F.B. Morse assigning dot and dash patterns to letters? The answer is easy if some mind made those assignments. It is simply a matter of filling out that 64 position grid. For each of the 64 combinations of A, T, C, and G, choose one of the 20 amino acids. Just make sure you use each amino acid at least once. I suppose this could be done by some mindless mechanism, but I don't know where that grid is, or was, nor can I imagine how any physical process could make use of it if it did exist. After all, the filled out grid is digital and its representation has nothing to do with any physical processes.

Here, I think, the author errs. The mapping of codons to amino acids does have something to do with physical processes. Codons are fragments of molecules that, when processed by the ribosome, a “mindless” molecular machine, cause individual amino acids, attached to much larger transfer RNA molecules, to be assembled into larger proteins.

 

Referring again to Qdogsman’s paragraph about the stone, “there is nothing symbolic about” the RNA codon. Like the stone, it is not a symbol for itself or the actions it produces in various situations; it “carries around with it” the information that we humans can use to predict its behavior, such as a codon in a messenger RNA molecule binding with the complementary anticodon in a tRNA molecule, and affecting the aaRS enzyme to cause the amino acid mapped to it to bind to the tRNA molecule to which it is bound.

 

Qdogsman, I get the impression you are ascribing special significance to the different molecular machines – DNA, RNA, tRNA, mRNA, ribosomes, enzymes, etc. – involved in protein expression. Continuing the stone analogy, you seem to me to consider the mRNA to be a description of a formed stone, tRNA to be an unformed stone, and the ribosome to be a mindful sculptor working from the description. However, these molecules aren’t like that.

 

This is not to say that a mindful being could not have, through artifice, assembled simple chemical compounds to create this mindless molecular machinery. That a machine is complicated, however, doesn’t inherently prove that it was made by a mindful being. The argument that “nobody has ever seen a mindless process create a complicated machine, therefore it is not possible” asserts this, but is logically incorrect, just as much as the argument “nobody has ever seen a mindful being create a self-replicating machine, therefore it is not possible.”

 

The digital vs. analog dichotomy can be obscured by applying these adjectives to devices, such as electronic computers, that store and perform calculations using representations of numbers. Digital electronic computers, for example, can represent numbers using ordered arrays of unambiguously recognizable charges or voltages, analogous to the digits [imath]d_i[/imath]. Analog electronic computers represent numbers using measurables such as single voltages.

 

Practical computer devices have both digital and analog components, converting representations of numbers in one scheme to the other via components known as digital-to-analog and analog-to-digital converters. For example, about 44,000 times per second, my little electronic music player converts digital representations of numbers from 0 to 16777215 or less into voltages in coil wires, moving, via interaction with permanent magnets, the diaphragm in its earbud speakers to make the sound I hear.
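What such a digital-to-analog converter does can be sketched as a simple scaling from sample value to output voltage. Everything here is assumed for illustration – the 24-bit unsigned sample width (0 to 16777215, i.e. [imath]2^{24}-1[/imath]), the 44,100 Hz rate, and the 2-volt output range are my stand-ins, not the specs of any particular player:

```python
SAMPLE_RATE = 44_100      # assumed samples per second
MAX_SAMPLE = 16_777_215   # largest 24-bit unsigned value, 2**24 - 1
V_RANGE = 2.0             # assumed peak-to-peak output voltage

def sample_to_voltage(sample):
    """Scale an unsigned 24-bit sample into a voltage centered on 0,
    as a DAC driving a speaker coil might."""
    return (sample / MAX_SAMPLE - 0.5) * V_RANGE

print(sample_to_voltage(0))            # -1.0: diaphragm fully one way
print(sample_to_voltage(MAX_SAMPLE))   #  1.0: diaphragm fully the other way
print(sample_to_voltage(8_388_608))    # near 0.0: diaphragm near rest
```

Run 44,100 times per second over successive stored samples, this mapping is the whole digital-to-analog side of playback: each number becomes one momentary diaphragm position.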

 

Typical sound recorders and players, such as ones following the MPEG-1 standard (which includes MP3), do not, as claimed here

These days it [music being recorded] would be sent from the microphone into a different kind of recording device, one that performed mathematical Fourier transformations on the analog music so as to convert it to a series of digital numbers.

perform wave analysis, such as Fourier transforms, in order to record or play sounds. Although most vary the audio sampling rate and bit depth in fairly complicated ways to make recorded sounds seem realistic to human listeners, and use data compression methods (some of them transform-based) to reduce the amount of storage medium needed, essentially, digital audio recording devices simply store a description of the positions that a microphone diaphragm was in during recording, and which a speaker diaphragm should be in at precise instants in time to reproduce the recorded sound.

 

Perhaps I quibble in the preceding paragraph, but I think it’s good to point out technical inaccuracies, even small ones, in a science forum like hypography.

Edited by CraigD
Fixed typo

I have done so; actually I have looked at your assertion quite closely.

Thank you for doing so.

 

I stand by my assertion, you are doing nothing but constructing an elaborate version of the information argument.

I think you have constructed an elaborate misconstrual of what I wrote. I am unfamiliar with "the information argument" but if it has anything to do with Shannon's Information Theory, then it doesn't apply. I do not agree with Shannon's definition of 'information' because he leaves out what I believe is a crucial component: the involvement of consciousness. But that is off-topic so I'll leave it there.

 

...your basic premise of the DNA being a code created by a mind is only true if you concede that the mind is the mind of man and the information is an arbitrary designation created by man.

As I have laboriously explained to Buffy, I do not make such a concession.

 

you cannot successfully show your assertion to be a true representation of reality unless you show that DNA is information apart from the designation humans give it...

Most true. I look forward to the day when scientific researchers do the investigation necessary to arrive at the conclusion that a non-human mind was necessary for the origination of life, not to mention the observable universe itself.

 

Thanks for your interest and your comments.


Yes.

 

 

 

It doesn't contribute directly, but by opening up new possibilities for inquiry, it opens the way for those new inquiries to produce new knowledge.

 

 

 

I disagree. Think about the many mysteries in ancient times that had to do with vital issues affecting survival. It was believed that the heavenly bodies influenced the behavior of earthly processes and events. As a result of these (now known to be ludicrous and erroneous) beliefs, a great amount of attention and energy was spent studying and documenting the positions and movements of the stars, planets, etc. This data proved to be instrumental in the discoveries of Kepler and Newton.

 

 

 

I agree. You seem to be supporting my position here. Since new questions are brought to light, it only makes sense that we should begin investigating them. We need to open our scientific horizon, not close it off.

 

 

 

My point exactly. We need to identify and explore those new areas of questioning.

 

 

 

I am surprised and somewhat dismayed that you could have inferred that "need" by what I have written. I am not aware of any time I have ridiculed science.

 

I have been critical, however, even extremely so, when I discuss egregious examples of where the official scientific community deliberately suppresses the work of sincere investigators who are working outside the official scientific bounds. Here are some glaring examples of this:

 

1. Barry Marshall was vigorously opposed by the official scientific "consensus" in his efforts to prove that stomach ulcers could be caused by bacterial infection. It seems highly unlikely to me that the huge business in manufacturing and selling antacid pills did not have a large influence on how grant money was spent, which in turn motivated the scientific "consensus" to take the position they did. In desperation, Barry courageously took it on himself to swallow a concoction containing Helicobacter pylori, thereby contracting a severe case of stomach ulcers, which he subsequently cured with antibiotics. Although he was treated shamefully by official science, he was at least subsequently rewarded.

 

2. I am doing this off the cuff, so forgive me for not knowing names, dates, details, etc., but there was a father-son team, I believe, who were convinced that the continents had drifted apart. (I too was convinced of this at the time; in fact I was personally ridiculed by classmates who were studying geology, and put down for being so ignorant of the "true" geologic processes.) The official science community vigorously opposed their investigations and caused them a great deal of difficulty, until the proof finally emerged that they were right. They didn't get the credit as far as I know, and science cleverly changed the name from the much-derided "Continental Drift" to the official and respected name of "Plate Tectonics". Again, shameful.

 

3. There was a guy, forgive me again for forgetting his name, who spent virtually his entire adult life studying the barren landscape of central Washington state. He was convinced that the topography was formed by an enormous flood event. Again, the official science community gave him no support whatsoever and instead placed impediments in his way. It was only after satellites were able to provide pictures from a high vantage point that it could clearly be seen that indeed a giant flood had taken place and he had been right all along. Then the scientists belatedly entered this forbidden area of inquiry and figured out what had happened and how. (The bursting of a big ice dam near Missoula).

 

4. One of the most egregious, in my opinion, is the case of Dr. Money of Johns Hopkins. He was the official spokesman for the then-current position of official psychology, and he had convinced the scientific "consensus" that there was no gender difference among humans (except for some minor and insignificant anatomical and physiological differences) that was not caused by culture. As part of his work in hoodwinking the scientific community, as well as a significant, gullible part of the public, he deliberately falsified the data he accumulated relating to a case in which the circumcision of an identical twin had been botched and, as a result, the unfortunate boy lost his genitalia. Money seized on this rare opportunity to exploit this poor boy and his family and conducted a controlled experiment on them. He had the family raise the child as a girl, and I think he did some more surgery to try to achieve the anatomical effect. The deliberately falsified reports of the experiment were used for years to bolster the stupid and false position of official science until the boy finally rebelled (some years before his eventual suicide) and the truth came out. Most shameful.

 

5. Now we have the absolutely stupid position taken by official scientific "consensus" that human activity is causing global warming. Since there is so much money involved here (much more than that of the drug interests in the H. pylori case) the official scientific "consensus" is not likely to admit the facts any time soon. Instead they tried the same ploy as the geologists and changed the name from AGW to "Climate Change". Curiously they removed the anthropogenic reference from the name, but they still indoctrinate our youngsters and the gullible public with the ludicrous notion that CO2 is a poison and a pollutant. Again, shameful.

 

In such cases, if I thought ridicule would be an effective approach, I might ridicule such "science". But since I don't think so, I don't ridicule them in spite of how ridiculous the "science" is.

 

 

 

Most true.

 

 

OIC, your position is indeed interesting: you equate ideas of science that were denigrated with your idea of a mind creating the genetic code. I do understand; you are correct in that science is often immovable, and when science becomes dogma it benefits no one. But your assertion of a mind behind the genetic code fails, as I have already pointed out, at its most basic level. DNA is not information, and if it is not information it cannot be a code. More importantly, you are equating the ideas of scientists who had evidence but were ignored with your idea, which you cannot support with anything other than incredulity.

 

All you have done so far is criticize real science by asserting something that cannot be tested. An untestable assertion is not valid because it cannot be proven wrong. You seem to be claiming that your assertion of a creator is justifiable in light of gaining new knowledge, but how is an untestable assertion going to add anything but mystery to an already unknown?

 

All the examples you stated were of things that were testable; real physical evidence existed. No real physical evidence of a creator exists; your only evidence is your inability to understand the evidence.

 

You keep claiming that DNA is code. It is not anything but chemistry acting according to the laws of physics; to show a creator's hand you would have to show that what DNA does is impossible. It can be shown that mindless evolutionary processes can bring about complexity; in fact, things like speciation have been observed in complex animals both in the lab and in the wild.

 

Then you have the fossil record: the fossil record supports evolution absolutely, the DNA supports evolution absolutely, and no other explanation has both explanatory power and physical evidence. If you actually come up with positive evidence to support the notion of a creator I would love to see it; you would be on a fast track to a Nobel Prize, and most religions would embrace you in ways that might be disturbing :unsure: But all you have is an assertion that it couldn't have happened because you can't see how it happened. The whole digital/analog thing is just obfuscation; it really wouldn't matter if DNA were digital or not. There is no evidence of a creator, no need for a creator, and, more importantly, there is no evidence of special creation of anything.

 

Now, having said that, it is entirely possible that a creator uses naturalistic methods to guide evolution; it's also entirely possible that elves and fairies are responsible, but nothing supports those assertions other than my imagination.

 

Your stance on global warming pretty much shows that you are not paying attention, and that science and the scientific method are really what you are against. The only people who believe that global warming is false are religious nuts and corporations that profit from the status quo.

