Science Forums

Does a genome have entropy?


Larv


Another entropy consideration has to do with polymers and monomers. Say we had 10 connectable blocks, which we connect into a linear arrangement, like a snake. If we threw the snake of blocks onto a table, there is a large number of possible shapes it can form. One constraint is that the shape is confined to a circle whose diameter is the length of the snake, since the blocks are all connected. The snake can't reach outside its own length (a restriction circle), which limits its entropy.

 

Let us compare this to ten individual blocks, not connected. With random throws we could theoretically form the same range of shapes as the snake. There are also more degrees of freedom, since any single block can fall outside the snake's restriction circle, adding an even larger number of possible shape distributions. Moving from monomers to polymers, by adding a level of configurational-space restriction, lowers entropy. The entropy is not zero, just lower.
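As a back-of-envelope illustration of this point (my own sketch, not from the thread - the 4x4 grid and three-block chain sizes are arbitrary assumptions), we can count configurations on a small square lattice: free blocks may land on any cells, while the "snake" requires consecutive blocks to sit on adjacent cells.

```python
from itertools import permutations

N = 4                           # a small 4x4 lattice, chosen arbitrarily
cells = [(r, c) for r in range(N) for c in range(N)]

def neighbors(cell):
    """Lattice cells adjacent to the given cell (up/down/left/right)."""
    r, c = cell
    return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < N and 0 <= c + dc < N]

# Free monomers: any ordered placement of 3 distinct blocks on the grid
free = sum(1 for p in permutations(cells, 3))           # 16 * 15 * 14

# Connected chain: consecutive blocks must occupy adjacent cells
def chains(path):
    """Count self-avoiding 3-cell chains extending the given partial path."""
    if len(path) == 3:
        return 1
    return sum(chains(path + [nxt]) for nxt in neighbors(path[-1])
               if nxt not in path)

chain = sum(chains([start]) for start in cells)

print(free, chain)   # 3360 104 -- connecting the blocks removes most states
```

Since Boltzmann's S = k ln(Ω), the smaller count of chain configurations directly means lower configurational entropy, exactly as the snake analogy suggests.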

 

Let's do this another way. We have ten blocks in ten different colors. Next, we turn out the lights and ask someone to assemble these blocks into the ten-block snake. This level of entropy is connected to the randomness within the assembly. Using statistics, we would expect any given random shape to repeat every so often.

 

Let us now set a constraint on the assembly of these ten blocks: the snake must be assembled with the ten colors in alphabetical order. With this constraint there is only one way to assemble it. Randomness and statistics don't really apply, since everything comes to a focus as cause and effect; there is only one way to do this, so the probability must equal 1.0 if we wish to satisfy the alphabetical requirement. If the person assembling the block snake can't spell well, or is going too fast and gets sloppy, they may add entropy to the cause and effect, and the final product becomes subject to some form of statistics again.
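The arithmetic behind this can be made explicit with Boltzmann's S = k ln(Ω) (a sketch of my own, not from the post): random assembly of ten distinct colors has 10! orderings, while the alphabetical constraint leaves exactly one.

```python
import math

k_B = 1.380649e-23                 # Boltzmann constant, J/K

# Random assembly: any of the 10! orderings of ten distinct colors
omega_random = math.factorial(10)  # 3,628,800 microstates
S_random = k_B * math.log(omega_random)

# Alphabetical constraint: exactly one allowed ordering
omega_ordered = 1
S_ordered = k_B * math.log(omega_ordered)   # ln(1) = 0

print(omega_random, S_ordered)     # 3628800 0.0
```

With Ω = 1 the positional entropy is exactly zero, matching the "probability needs to equal 1.0" statement above.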

 

Monomers lower their configurational entropy as they become polymers, even if they assemble in a random way, because the spatial restriction lowers the degrees of freedom of the monomers. There is still entropy. Next, as we specify more and more of the ordering, the degrees of freedom within the random positioning of the monomers drop, with a fully specified ordering lowering positional entropy to zero. There is only one way to do it; if we set the right restraint, the result is cause and effect.

 

Let us look at DNA and RNA. If we started with two equivalent lengths of DNA and RNA, both double-stranded, and threw these onto the table, the RNA would have more degrees of freedom. The DNA is restricted to the double helix and its restriction circle. The RNA can do the double-helix thing for a little while, separate anywhere along its length, curl back on itself to form a self double helix, and so on. The higher degrees of freedom of the double-stranded RNA imply that it has a higher configurational entropy.

 

One last thing is to compare linear to branched polymers. Branched polymers are rare among life's informational polymers, for a good reason. Let us start with 100 identical blocks. There is only one linear polymer configuration, or sequence, with identical blocks. It sort of defines cause and effect at some level, since the probability is 1.0 with respect to this single linear constraint. With branched polymers, there is a large number of ways to add side branches at different positions and of different lengths, implying a higher configurational entropy. This is far more statistical in nature. DNA and RNA are linear polymers. Proteins are also linear with respect to the peptide linkage, but the different amino acids carry side groups, or -R groups. These -R groups add a level of configurational entropy to linear protein polymers that is not present in DNA and RNA.
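The linear-versus-branched counting can be sketched numerically (my own simplified assumption: branches are indicated only by which backbone positions carry one, ignoring branch length, which would make the count larger still):

```python
import math

n_backbone = 100        # identical backbone units, as in the example above

# Linear: identical blocks admit exactly one chain configuration
linear_configs = 1

# Branched: choose which k of the 100 backbone positions carry a side
# branch, for every k from 1 to 100 (a deliberately simplified count)
branched_configs = sum(math.comb(n_backbone, k)
                       for k in range(1, n_backbone + 1))

print(branched_configs == 2**100 - 1)   # True: an astronomical state count
```

Even this stripped-down count gives about 1.3e30 branched arrangements against a single linear one, which is the sense in which branching is "far more statistical in nature."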


I would like to talk about entropy and proteins. To begin, consider a hypothetical protein composed of all inert components (a long string with little string branches). If we threw this onto a table, there is a large variety of shapes it can form, especially if we take into account all the possible positions of the side -R groups (the strings). Each of these has degrees of freedom.

 

What I would like to do is add one attractive chemical force: the hydrogen bonding that occurs between peptide linkages. At each place on the main string where a side string attaches, we add a hydrogen bond. These cause the backbone of our hypothetical protein to form a helix. This lowers configurational entropy, since there is now a restriction on the backbone. Each time we throw the protein onto the table, the hydrogen bonds will try to turn the backbone into a helix. This shortens the protein in space, like winding a wire into a spring. We would need to add energy if we wished to raise the entropy back to its previous level.

 

Let us now add polar and organic interactions. In this example, our hypothetical protein helix is in an inert solvent. Like side groups will attract like, changing the odds on each throw: non-polar groups find other non-polar groups, and polar groups find polar, because of their mutual attraction. This further lowers the randomness of the inert scenario.

 

Let us now change our inert solvent to water. In this case, hydrophobic and hydrophilic effects also occur. Before we added the water, like would attract like but still allow a large degree of freedom, such as cancelling charge pairs isolating themselves as neutral units. With water, we get push as well as pull effects. The water pushes the non-polar groups together to lower their surface contact with the water. High surface contact with water increases the surface tension, and therefore the surface energy, making energy available for entropy to increase. The water prefers the organics at lower surface energy.

 

The polar groups create low surface tension within water. Their interaction with water lowers the surface energy, also making less energy available for entropy. Now when we throw our protein into the water, it forms its helix, polar and non-polar groups begin to separate and attract, and the water bunches them into segregated groups to minimize the surface tension and energy. There is still entropy.

 

When we duplicate DNA, this occurs at many places on the DNA at the same time, with the little segments of DNA connected later. If we used this technique for protein synthesis, the entropy remaining within the protein would be higher than if we grew it from only one end.

 

The reason is that each little growing, unconnected segment would see the secondary bonding forces separately, and therefore out of the context of the other segments, since each would shape itself independently. With a single growing end, as the protein comes out hot off the press, the first segment sees the secondary forces and forms a starter shape in the water. As more protein comes off the press, the secondary forces act on each newly connected segment, but within the context of the original forming core. The new stretch might add to the snowball, or it might need to island off; later, the little islands may find each other.

 

The final shape, and the protein's entropy, will also be a function of the monomer sequence. If we used the same number of the same monomers but made the placement random, the same configurational shaping process, driven by the secondary bonding forces as the chain comes hot off the press, would lead to different final shapes. For example, if we placed all the non-polar side groups first, we might get one big non-polar ball. If instead we split the non-polar groups into ten equal zones separated by lengths of polar groups, we might get ten little balls, each avoiding the connecting polar shielding. This last example is only for illustration, to help you visualize how sequence can affect the final shape using the same composition. All of these lower entropy relative to the inert protein case, but not all lower it by the same amount.


HB, it takes a lot of time to visualize the proper boundaries (for calculating entropy) for all the situations and scenarios that you describe. It's all very interesting; but for me it could turn into hours of hedonistic pleasure, and I can't afford that.

 

I did spend 40 minutes watching "From Geo to Bio" and thought of you, and about how entropy would apply to each of the four stages that the speaker details. The discussion on chirality (at about 30 minutes) was neat info.

 

[He also mentions ID at the beginning, and God at the end of his talk (on abiogenesis?)]

ResearchChannel - From Geo to Bio: The Emergence of Biochemical Complexity

"Is life as we know it merely an improbable accident? Does life exist on other planets or just ours? Geophysicist Robert Hazen tackles these tough issues while acknowledging much mystery remains."

 

...

At about 40 minutes he talks about (those universally ubiquitous) Polycyclic Aromatic Hydrocarbons (PAHs), and points out how the spacing of stacked PAH sheets is naturally 3.4 Å (0.34 nm) - the same as the stacking spacing of RNA bases!

I may have the details wrong, but I can't believe he didn't sound more sarcastic when he mentioned that "coincidence."

From my studies on biochar (and its graphenic structure), I know about the wide array of chemically active functional groups located on the edges of PAH sheets; so my imagination went wild visualizing channels up the side of a stack of PAH sheets ...binding to (5C?) sugars ...dehydration ...polymerization ...templating....

 

Seems to me that in each of the four stages he describes, a system develops that splits the more complex molecules (created by solar energy) into simpler molecules (plus heat). Light --> complex molecules. Complex molecules + dissipative systems --> simpler molecules + heat. So dissipative systems turn light into heat more efficiently than the energy stored in those complex molecules would decay into heat (raising overall entropy) on its own. Gosh, a network of such systems could synergize to raise entropy even more than....

~ :)


The direction I was heading is to show that the molecular configurations within life, such as DNA, RNA and proteins, and the things made from them, all have lowered entropy relative to the simpler molecules from which they stem. We can predict proteins from templates because of the low configurational entropy (it is very specific). The impact of these molecular configurations is another story with respect to entropy.

 

Here is an interesting configurational entropy consideration. If we look at the bases of DNA and RNA, what stands out in my mind is their resonance nature. As an analogy for where I am heading with this: if we compare cyclohexane to benzene, cyclohexane has more configurational entropy, because its single bonds allow it to pucker and flex in 3-D. The resonance within benzene restricts it more or less to 2-D, so it has fewer degrees of freedom. Life uses bases with fewer degrees of freedom, compared to choosing higher-entropy bases without resonance. From a practical point of view this is better for the alignment of base pairing, resulting in less configurational entropy further down the line. It is more predictable.

 

 

Say we hypothetically made some RNA, but with non-resonant bases, and used it as a template. The higher template entropy could actually have some practical value. A small length of such RNA, with its bases puckering so the H-bonding groups are all over the place, would create constant template entropy. One gene could theoretically generate a ton of possible proteins. This is not good for life as we know it, but it could theoretically have helped pre-life by filling in a volume with protein variety.

 

Next, let us add one or two double bonds to the bases. This causes the template to lose degrees of configurational freedom, such that the H-bonding groups stay positioned in ways that assist better alignment. This means less entropy down the line. Finally, we add the modern full resonance to the bases to minimize the degrees of freedom. Now the entropy down the line decreases even more, which is more conducive to life.

 

One direct way life uses molecular configurations to decrease entropy is the pumping of Na+ and K+ ions. In solution, these cations would maximize entropy by distributing uniformly throughout any given volume; one could take a micro-sample anywhere and the concentration would be close to the same. When we pump these ions, we are essentially forcing the cations to sit on opposite sides of the membrane, which removes many degrees of freedom.
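The entropy cost of maintaining such a gradient can be estimated with the ideal-solution formula dS = -R ln(c2/c1) per mole moved. This is my own sketch; the Na+ concentrations below are textbook-style illustrative values, not figures from the post.

```python
import math

R = 8.314          # gas constant, J/(mol*K)

# Illustrative Na+ levels for a typical animal cell (assumed values):
c_in  = 0.012      # mol/L inside the cell
c_out = 0.145      # mol/L outside the cell

# Ideal-solution entropy change per mole of Na+ pumped from inside to
# outside, i.e. up its concentration gradient: dS = -R * ln(c_out/c_in)
dS_pump = -R * math.log(c_out / c_in)

print(round(dS_pump, 1))   # -20.7 J/(mol*K): pumping lowers the ion's entropy
```

The negative sign is the point: the pump spends metabolic energy to hold the cations at low entropy, and letting them flow back down the gradient releases that stored "entropy potential" to drive transport, as described below.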

 

What is good about this situation for life is that, since nature and the second law always push to increase entropy, there will be a constant natural push to reverse the situation and increase cationic entropy back toward a uniform solution. The cell takes advantage of this, using the stored entropy potential within the segregated cations to help drive other things, such as transport. The cell constantly works to lower cationic entropy while the environment tries to increase it. The feeding of the cell follows the natural potential to increase entropy; the cell can't help but be fed. But as it evolved specific transport, it became less vulnerable to random environmental entropy. The second law still applies, focusing the entropy increase into fewer things, and the cell keeps restoring the low-entropy gradient, making the second law work continuously.


