
Is there recent info about the hypothetical ancient two-codon genetic code?

Here is the latest I have found. link

This is the basic idea: Evolution does not look ahead and make plans. It would not have created a system of mRNA and giant ribosomes to make proteins until proteins were already useful. There must have been a simpler way to make useful proteins first, a way that grew more complex only after it had already proven useful.

Since enzymes can be made of RNA, presumably early life made only RNA enzymes and no proteins. It did not matter whether RNA enzymes were as efficient as modern ones, because competition from protein enzymes had not yet evolved.

And since we now make peptides off an RNA template, early proteins were probably also made off an RNA template. The hypothesis is that particular amino acids would be attracted to particular pairs of RNA bases and held in just the position that would help them link to neighboring amino acids. This could be slow and inefficient compared to modern protein production because, again, the competition was not stiff back then.

Between two pairs of coding bases, the RNA needed a noncoding base for spacing. That noncoding base might even have been particularly good at catalyzing the joining of the neighboring amino acids.

The system did not have to use the same amino acids we use today. It could have used amino acids that were good at attaching to RNA, and later, after the system was reworked, the modern repertoire of 22 amino acids could have developed.

There are various ways this is a satisfying idea. For example, the machinery that makes proteins is largely made of RNA. The mRNA carries the message. tRNAs decode the message. Ribosomes are giant machines that translate the message into proteins efficiently, and they are largely made of RNA, with a lot of little proteins that help them hold their shape and so on. Presumably the first protein-building machinery was itself built entirely from RNA -- because that's what was there to build with -- and over time evolution found ways to improve that machine with protein scaffolding.

This 2005 paper link describes a reasonably plausible way it could once have worked. The authors imagine a few alternative amino acids that might once have been part of the genetic code: ornithine, homoserine, and 2,4-diaminobutyrate. They imagine ways that the crevices between pairs of bases could act like the binding sites of enzymes. They fit the ideas together.

Have there been advances since 2005?


Chapter 5 Biology Evolution

A fossil is the preserved remains or traces of any organism from the remote past
Preserved remains (body fossils) provide direct evidence of ancestral forms and include bones, teeth, shells, leaves, etc.
Traces provide indirect evidence of ancestral forms and include footprints, tooth marks, burrows and faeces (coprolite)

The totality of fossils, both discovered and undiscovered, is referred to as the fossil record

The fossil record shows that over time changes have occurred in the features of living organisms (evolution)

1. Fossils can be dated by determining the age of the rock layer (stratum) in which the fossil is found:
- Sedimentary rock layers develop in chronological order, such that lower layers are older and newer strata form on top
- Each stratum represents a variable length of time and is classified according to a geological time scale (eons, eras, periods)

2. Different kinds of organisms are found in rocks of particular ages in a consistent order, indicating a sequence of development:
- Prokaryotes appear in the fossil record before eukaryotes
- Ferns appear in the fossil record before flowering plants
- Invertebrates appear in the fossil record before vertebrate species


Introduction

Antibiotic resistance has become a major threat to the foundations of modern medicine over the last decades. Resistance genes have emerged against almost all classes of antibiotics, even against those considered last resort. Mobile antibiotic resistance genes (ARGs) are associated with a variety of mobile genetic elements (MGEs) that enable these genes to spread to new hosts, even across taxonomic boundaries. Insertion sequences (ISs) 1,2 and IS common region (ISCR) elements 3,4,5 have been shown to provide both mobility and strong promoters for the expression of ARGs. In several cases, these are key elements for the mobilization of ARGs from a bacterial chromosome to a transferable MGE, such as a plasmid or conjugative transposon 6,7,8 . Although novel ARGs are frequently reported, their origins, that is, the bacterial taxa from which these genes were mobilized onto clinically relevant mobile vectors, remain unknown for the most part. Understanding where ARGs come from and which environments, conditions, and/or human practices favor their emergence is necessary to effectively mitigate the emergence of still unknown ARGs in the clinic. Such knowledge will, however, likely be of limited use for managing the ARGs already found in the clinic, as it has been shown that, once they have emerged, ARGs are maintained in parts of a population for a long time even in the absence of selection pressure, making their emergence practically irreversible 9,10 .

Predicting the conditions that are likely to favor the emergence of new ARGs from the known origins of only one or a few ARGs is difficult. However, identifying the origins of a greater number of ARGs may enable us to recognize underlying patterns, such as shared characteristics of origin species, how ARGs are mobilized from their most recent non-mobile origin, the environments these species thrive in, or their connections to human- or animal-associated bacteria. A strong hypothesis is that selection pressure from antibiotic use in humans and domestic animals has played a critical role in their emergence 11,12 . As the environmental reservoir of ARGs is much greater than what is identified in the human and domestic animal microbiota, it is possible that other environments, as well as anthropogenic selection pressures in those environments, also play a critical role in this development 13,14 . Although the origins of some mobile ARGs have been reported and extensive reviews exist on the origin and dissemination of specific groups of clinically problematic ARGs, such as mobile AmpC enzymes 15,16 or CTX-M family β-lactamases 17 , there is to date no summary or analysis of all proposed ARG origins. Furthermore, the types and quality of evidence presented to identify those origins vary substantially, from mostly molecular methods to (more recently) purely sequencing-based approaches. This creates the need to carefully evaluate each reported ARG origin against a fixed set of criteria, to ensure the integrity of our knowledge about ARG origins and to guide future efforts to identify them.

In this study, we establish a set of comparative criteria that can be used to identify the origins of mobile ARGs, at least to the genus level, with high confidence, based on patterns recognized through a thorough literature review. We evaluate the evidence for each previously reported origin against these criteria and supplement missing data (where available) with publicly available genome data and state-of-the-art comparative genomics analysis. Finally, we analyze the existing data and discuss what we can conclude from the curated list of origins with regard to overarching patterns in origin species traits and taxonomy, the modes by which ARGs are mobilized from their origins, and the potential circumstances favoring their emergence in human pathogens.

Scrutinizing previous reports, we find that only 81% of suggested origins are supported by the data. We show that all confirmed origin taxa are Proteobacteria, that several species are the origin of multiple ARGs, and that almost all are associated with infection in humans or domestic animals. These results highlight the potential importance of the human and animal body as a mobilization hotspot for ARGs. Still, the lack of known origins for the great majority of ARGs points towards unsequenced environmental bacteria as likely sources.


The Institute for Creation Research

A series of books and videos by Dr. John Walton, an Old Testament theologian at Wheaton College, has made a huge splash in the evangelical community in recent years, with considerable pushback from biblical creationists. 1-7 He presents a supposedly new perspective on Genesis that not only accommodates the false claims of evolutionists but also denies the literal Genesis interpretation of early Earth history, including human origins and the global Flood.

Much of Dr. Walton's success is linked to the enthusiastic endorsements of theistic evolutionists since his paradigm promotes molecules-to-man evolution. He even serves on the advisory council of the theistic evolutionary organization BioLogos. 8

Interpreting Genesis with Ancient Pagan Culture

The foundation of Dr. Walton's argument is a novel scheme to interpret the Genesis account of origins and Noah's Flood within the context of ancient Near Eastern pagan culture and mythology (Sumerian, Babylonian, Egyptian, etc.). Walton proposes that thanks to Near Eastern archaeology over the years, along with his own interpretation of the ancient writings of these cultures, we can now finally understand what the Bible is really saying. Our literal, straightforward perspective of Genesis has supposedly been flawed the past few thousand years, but because of Walton's insight into ancient pagan beliefs, we finally have a reliable framework for understanding Genesis. And quite fortuitously for Walton and his friends, this paradigm also allows for millions of years of hypothetical evolution.

But the plot thickens. Walton's ideas have implications for the gospel message. According to his origins story, Adam and Eve were not a literal original ancestral human couple but merely selected individual archetypes representing a population of humans who had evolved from apes over millions of years. Apparently, when humans had evolved to the point where God thought they were useful, the Lord commissioned them to bring order from the disordered and anciently evolved creation.

The whole idea of the interplay between chaos (disorder) and order, a popular concept in pagan philosophies, is also a centrally occurring theme in Walton's system of biblical exegesis. Along this line of reasoning, there's no room for the original sin of Adam and Eve as the Bible defines it, but merely the entry of disorder into the world, or the entry of more disorder, if you follow Walton's logic. According to Walton, Satan, who deceived Eve in the garden to disobey God, is defined as one of the "chaos creatures" who "have no will of their own…no morality. They're not good or evil." 9

God's Cosmic Temple

Based on Walton's premise that we should interpret Genesis through the lens of ancient Near Eastern pagan culture, the creation events would have been interpreted at the time it was written as a "functional" creation, not a material one. Many scholars who have studied ancient Near Eastern literature, however, dispute this idea of ancient people looking at the world through purely "functional" eyes. 10 Nevertheless, Walton views the creation week in Genesis 1 as nothing more than a mystical initiation ritual in which God instantiates functional significance upon His "cosmic temple": the evolved earth and its biosphere.

In other words, God's process prior to this inauguration ritual involved millions of years of evolution accompanied by death, violence, and suffering. And according to Walton, at the end of this cosmic temple ritual, God proclaimed the evolved corrupt, violent world "good." In fact, in a recent podcast posted on the BioLogos website, Walton stated, "Why is there hunger in the world, why do children suffer, why is there illness, why is there this [COVID-19] pandemic. God created the world as it is…. These are things that God in His wisdom has made the world this way." 9

Walton's pagan overlay on the Genesis creation week is best described in the text of the leading book in his Lost World series. In a chapter titled "The Seven Days of Genesis 1 Relate to the Cosmic Temple Inauguration," he says:

We have many inauguration texts from the ancient world, the most detailed being the dedication of the temple of Ningirsu by Gudea about 2100 B.C. One of the first things to note is that at the inauguration the "destiny" and the powers of the temple are assigned…. This is the ultimate function-giving act in the ancient world. Likewise the roles of the functionaries are proclaimed and they are installed. 11

So much for interpreting Scripture with Scripture and taking Genesis as the literal historical narrative that its Hebrew grammar clearly conveys. 12 In contrast, Walton seems more adept at rehearsing the details of ancient pagan myth and ritual. His idea of a purely "functional" view of how we should interpret Genesis is eerily similar to the views of early Jewish and Christian Gnostics, who believed our physical/material reality is actually evil and only the spiritual world is good. This led to all sorts of early church doctrinal heresies. And like Walton's claims of a new framework to interpret Genesis, the Gnostics believed that only they had a true understanding of the Scriptures that came through a mystical extrabiblical enlightenment and special knowledge. Walton's obvious Gnostic-like themes have been noticed by other conservative creationist critics. 7

Interestingly, in Walton's "My Advice to Students" YouTube video, he says nothing about seminary or Bible school students studying the biblical languages of Hebrew and Greek but instead encourages them to learn "the research languages," especially German. 13 Why German instead of the original languages of the Bible, you may ask? Because German is the language of one of the primary fountainheads of what has been called "higher textual criticism" and "higher rationalism," whose universities and liberal researchers have published copious amounts of Bible-doubting literature over the past 300 years. In other words, Walton tells students not to focus on studying the Bible itself but rather the opinions of Bible-doubting scholars.

Walton's Flood Was Archetypal and Local

Walton dismisses a literal Genesis global flood in his book The Lost World of the Flood, which he co-authored with another liberal Old Testament scholar, Tremper Longman III. 14 They attempt to persuade Christians to abandon the Genesis Flood as an actual global catastrophe in favor of a local, isolated flood. But this is a tough sell on both a biblical and scientific level.

From a biblical perspective, Genesis' description of the Flood clearly indicates that the entire world was flooded. Genesis 7:19 states, "And the waters prevailed exceedingly on the earth, and all the high hills under the whole heaven were covered." Jesus taught the global scope of Noah's Flood, saying that the Flood came and "destroyed them all" (Luke 17:26-27, referring to everyone not on the Ark). Without exception, other New Testament writers referred to a historical flood, treating it as a global judgment on all humanity except for those preserved in the Ark (Hebrews 11:7; 1 Peter 3:20; 2 Peter 2:5). Amazingly, Walton and Longman themselves recognize that the text of Genesis does indeed describe a global catastrophe, even listing seven reasons given in the text as proof. 15 Yet, they cast the Scriptures aside in the cause of supporting the scientifically flawed hypothesis of evolutionary geology.

Not only does the idea of a local flood have no support from Scripture, but the majority of the earth's surface is covered in catastrophically produced sedimentary (water-deposited) rock layers called megasequences in the form of sandstone, limestone, and shale. ICR geologist Dr. Tim Clarey has mapped the megasequences that were progressively laid down over the year-long global Flood across the continents in violent cycles of deposition. 16 These flood-based rock layers cover entire regions of continents and are identical in their composition and depositional sequence globally. Thus, the hard science of structural geology all over the earth literally screams "global flood," and there is no reason whatever for Christians to entertain an anti-scriptural view of a local flood to appease secular evolutionists.

Walton's Lost World Is a Lost Cause

Walton's theological rabbit hole gets deeper and more convoluted as you read his books and watch his videos. I urge readers to follow up with the writings of other creationist critics whose in-depth reviews tackle other angles of Walton's work. 1-7

Suffice it to say, Walton's views satisfy neither atheists nor Bible-believing Christians. Molecules-to-man evolution is not proven science and doesn't need the sort of mystical sophistry put forth by people like Walton to persuade Christians to compromise with it. We don't observe macroevolution happening today, nor do we find any evidence of it in the fossil record in the form of one creature morphing into a fundamentally different creature. Stephen Jay Gould, one of the most notable paleontologists of the modern era, stated:

The extreme rarity of transitional forms in the fossil record persists as the trade secret of paleontology. The evolutionary trees that adorn our textbooks have data only at the tips and nodes of their branches; the rest is inference, however reasonable, not the evidence of fossils. 17

In this quote, the phrase "extreme rarity" is evolutionary lingo for "absence." The absence of transitional forms is a huge problem for the evolutionary paradigm.

Nor does genetics offer any proof of evolution, as Walton claims in his books. The centerpiece of theoretical human evolution, the alleged 98 to 99% human-chimp DNA similarity, has been thoroughly debunked in the past few years, as has the so-called "chromosome 2 fusion." 18,19 And empirical molecular genetic "clocks" show that human mitochondrial and Y-chromosome DNA variations fit perfectly with a single ancestral couple, Adam and Eve, and a 6,000-year biblical timeline. 20

As to why we have evil and corruption in the world, it's not because God created the world this way as part of a "good" creation over millions of years of evolution. The Bible is quite clear on the subject, and we don't need some convoluted and esoteric theology to explain it. Romans 5:12 plainly tells us, "Through one man sin entered the world, and death through sin, and thus death spread to all men, because all sinned."

The curse on creation, that is, the entry of sin, corruption, and evil into the world, is also clearly stated in Genesis 3:17-18: "Cursed is the ground for your sake; in toil you shall eat of it all the days of your life. Both thorns and thistles it shall bring forth for you." And again in Romans 8:20: "For the creation was subjected to futility, not willingly, but because of Him who subjected it in hope." But there is good news, as 1 Corinthians 15:22 tells us: "For as in Adam all die, even so in Christ all shall be made alive." By turning from our destructive lives and placing our faith in the Lord Jesus Christ, who gave Himself as a sacrifice for our sins, we can be restored in our relationship with God the Father and have eternal life.

Our Creator did not make a cosmic temple. He created a perfect world that was cursed with death when Adam sinned. Genesis and the gospel are clear. Like a house of cards, Walton's "temple" collapses under biblical and scientific pressure.


Signature of a Primitive Genetic Code in Ancient Protein Lineages

The genetic code is the syntactic foundation underlying the structure and function of every protein in the history of the biological world. Its highly ordered degenerate complexity suggests an incremental evolution, the result of a combination of selective, mechanistic, and random processes. These evolutionary processes are still poorly understood and remain an open question in the study of early life on Earth. We perform a compositional analysis of ribosomal proteins and ATPase subunits in bacterial and archaeal lineages, using conserved positions that came and remained under purifying selection before and up to the most recent common ancestor. An observable shift in amino acid usage at these conserved positions likely provides an untapped window into the history of protein sequence space, allowing events of genetic code expansion to be identified. We identify Cys, Glu, Phe, Ile, Lys, Val, Trp, and Tyr as recent additions to the genetic code, with Asn, Gln, Gly, and Leu among the more ancient. Our observations are consistent with a scenario in which genetic code expansion primarily favored amino acids that promoted an increase in polypeptide size and functionality. We propose that this expansion would have been critical in the takeover of many RNA-mediated processes, as well as the addition of novel biological functions inaccessible to an RNA-based physiology, such as crossing lipid membranes. Thus, expansion of the genetic code likely set the stage for the transition from RNA-based to protein-based life.
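The abstract only sketches the compositional analysis, so here is a minimal illustrative sketch of the underlying idea rather than the authors' actual pipeline: given a protein multiple alignment, tally which amino acids occupy strongly conserved columns. The alignment format, the 90% conservation threshold, and the toy sequences are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' pipeline): tally amino acid usage at
# strongly conserved columns of a protein alignment.
from collections import Counter

def conserved_column_usage(alignment, min_fraction=0.9):
    """alignment: list of equal-length, gapped protein sequences (strings)."""
    n_seqs = len(alignment)
    usage = Counter()
    for column in zip(*alignment):
        residues = [aa for aa in column if aa != "-"]
        if not residues:
            continue
        top_aa, count = Counter(residues).most_common(1)[0]
        if count / n_seqs >= min_fraction:   # column dominated by one residue
            usage[top_aa] += 1
    total = sum(usage.values())
    return {aa: n / total for aa, n in usage.items()} if total else {}

# toy input: three aligned fragments of a hypothetical ribosomal protein
print(conserved_column_usage(["MKVRASVKKLC", "MKVRPSVKKMC", "MKVRASVKRLC"]))
```

Comparing such usage profiles across deep-branching bacterial and archaeal lineages is, in spirit, how a shift in amino acid composition at anciently conserved positions could be detected.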



Methods

The common ancestor of acceptor arms (Figure 4) was reconstructed from the phylogenetic trees of Bacteria, Archaea and Eukarya as detailed in [20]. A combination of manual sequence alignment, Neighbor-Joining tree reconstruction with Tamura-Nei distances, and manual parsimony-based ancestral state reconstruction was used.

The selected amino acid binding sites, comprising 337 independently derived sequences (18,551 nucleotides in total) directed at eight amino acids, are described in [12]. The curated sequence libraries are available directly from Yarus et al [12].

The statistical significance of over- or under-representation of anticodon/codon triplets was assessed using chi-square or similar two-sided statistical tests (G-test, exact binomial).
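As a rough illustration of the kind of test described here (the counts below are placeholders, not values from [12]), the observed count of a given anticodon/codon triplet in the binding-site sequences can be compared against a background expectation with a two-sided exact binomial test and a chi-square goodness-of-fit test:

```python
# Hedged sketch: is a given triplet over- or under-represented in binding-site
# sequences relative to a background expectation? Counts are placeholders.
from scipy.stats import chisquare, binomtest

observed = 112                 # hypothetical: occurrences of the triplet in binding sites
n_positions = 6000             # hypothetical: total triplet positions scanned
p_background = 1 / 64          # naive uniform background; [12] uses sequence-specific backgrounds

expected = p_background * n_positions

binom_res = binomtest(observed, n_positions, p_background, alternative="two-sided")
chi_res = chisquare(f_obs=[observed, n_positions - observed],
                    f_exp=[expected, n_positions - expected])

print(f"binomial p = {binom_res.pvalue:.3g}, chi-square p = {chi_res.pvalue:.3g}")
```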



CRISPR Gene-Editing Reality Check

Alissa Greenberg: Were you always interested in science and genetics? What drew you to this area of study?

Krystal Tsosie: When you’re Navajo in particular, there aren’t that many Indigenous people or Native Americans in the education pipeline and higher education. So of the higher degrees that were encouraged from people like myself growing up, either you were encouraged to become a doctor, a lawyer, an engineer, or an educator. And I was on the route of becoming a physician. I just loved understanding what it was that caused disease.

I was actually starting off in the cancer biology track, but there was a point in time where I realized if I wanted to pursue a career in cancer biology, I would encounter the dilemma of, how do I innovate technologies that would not benefit my people? Because even if in my lifetime I were to develop something that could help somebody with cancer, chances are that it wouldn't be applied in a rural tribal clinic setting. Like, how can I deal with the guilt of undergoing several years of education and research and not have it benefit my people?

So I returned to Arizona State University and did a master’s in bioethics. It was an interesting time because they were dealing with the aftereffects of the Havasupai case and that fiasco.

Havasu Falls, on the Havasupai Indian Reservation, near the Grand Canyon in Arizona. The Havasupai name means "people of the blue-green water." Image Credit: Frank Kehren, Flickr

AG: Can you say more about that case and what made it a fiasco?

KT: In the early 2000s an ASU researcher was doing work related to Type 2 diabetes markers in the Havasupai Nation. The Havasupai people are geographically isolated at the base of the Grand Canyon. And they collected blood samples from individuals and ended up using them to study other things besides diabetes, such as schizophrenia, which is a charged condition, and also started publishing their origins—stories that didn't quite match their own cultural stories because they themselves believe that they originated in the base of the Grand Canyon.

This was in combination with a lot of other discussions that were ongoing in global Indigenous communities. As of now, for instance, the Navajo Nation has a moratorium on genetic research, as do a number of tribes in the U.S. I'm not sure if you're familiar with UNDRIP, which is the United Nations Declaration on the Rights of Indigenous Peoples; it was a response to just the number of large-scale diversity genome projects that were ongoing in Indigenous communities, particularly in Central and South America. Over 600 tribal nations around the world went to the United Nations to ask them to stop these genome diversity projects.

In particular, the Genographic Project run by National Geographic was denounced as a “vampire project” because they would helicopter in, collect blood samples, and promise medical interventions that would help these communities, but never really return. So kind of like vampires in the night coming and taking the blood—that’s where that imagery comes from.

And if you look at, for instance, the 1000 Genomes Project or the Human Genome Diversity Project, these are two major large-scale diversity projects that have made their information openly accessible to researchers worldwide. It was an effort to sort of democratize research, but what has happened is that a number of major companies have utilized that information to develop commercial platforms such as AncestryDNA. There is a huge interest in collecting Indigenous biomarkers, and there's a profit component there. The fact is that non-Indigenous entities are deriving revenue from Indigenous biomarkers, and at this point that hasn't really translated into medical benefits for the people who have actually contributed that information.

There's a level of expertise that has needed to be developed locally for Indigenous peoples to make these decisions for themselves, to self-determine. And we're starting to get to that point because now we have more Indigenous scientists. But there still aren't that many of us.


AG: In your Twitter bio you use the phrase “Decolonize DNA.” I'm curious what that phrase means to you. Is that related?

KT: To decolonize DNA is not anti-science, and it's not a rewriting of the fundamentals of DNA. One thing I always say is that Indigenous peoples are not anti-science; we're anti-exploitation. Science, as much as we like to idealize it, is not purely objective. There's subjectivity in the types of questions that we choose to pursue, the types of questions our agencies fund. The decisions that we make in terms of who to include and who not to include in studies also create subjectivity, as does how those results are interpreted. Because if they don't properly take into account all the historical societal factors at play, then we are ignoring some key, potentially colonial factors that relate to health.

AG: Do you have an example that might illustrate that idea?

KT: Not everything that causes disease is genetically mediated. But it's easy to think in those terms because that's probably the easiest bit of information to collect that relates to disease—the biological factors. But disease is complex. There are gene-environment interactions at play. We know that socioeconomic factors play a huge role in disease.

Alcoholism is something that's really charged and is an example. There have been 230-plus publications in PubMed alone that try to look at why Native Americans are supposedly genetically at greater risk for alcoholism. But that totally ignores the history of harm that has been perpetrated upon us, the lack of mental health and preventative-health measures, the lack of social programs for treatment of alcoholism. That's a perfect example of how skipping directly to DNA as the cause of everything is potentially harmful and could exacerbate negative stereotypes of a people.

CRISPR-Cas9 is a tool that lets scientists cut and insert small pieces of DNA at precise areas along a DNA strand. | Image Credit: Ernesto del Aguila III, National Human Genome Research Institute, NIH

Type 2 diabetes has been heavily studied in Indigenous peoples in the Southwest and also in American Samoa. And a huge portion of this narrative, for a long period of genetic history, has been that we are genetically predisposed to this disease. But this disease didn't exist in our communities until very recently. So there are these other factors: a forced diet that was imposed upon us; forcible change to our ways of living and our ways of providing food for ourselves; a removal of our lands that doesn't allow us to pursue our traditional forms of agriculture; an imposition of a westernized form of diet. These are actual contributors to health that are being conflated with genetics, when in reality there could be other social, cultural, colonial factors at play.

AG: How would you apply this idea of decolonizing DNA to CRISPR?

KT: We have to be really careful that we’re not overly simplifying our narratives related to evolutionary adaptation and mutations. Like, the term “mutation” is one that's not really well understood. A mutation is meant to be a change in the genetic code that differs from normal. But then what exactly is normal? The term that a number of us use is “polymorphism,” which is a common variation that's existent in at least 1% of the population. And even this is problematic because right now, even with our efforts to diversify genome studies, over 81% of participants in genome-wide association studies are of European descent. When we’re talking about genome diversity, a mutation or a polymorphism might be an evolutionary adaptation for a certain group of people in response to certain environmental conditions, and it could be protective in some cases. We don't have enough information about whether or not it’s adaptive in different conditions for different populations.


I'm also really concerned about using germline editing as a solution for defining what constitutes a normal human being. These evaluating judgments ignore the rights of those with disabilities. It presents disability as something that must be corrected, when in reality, millions of people with a spectrum of conditions live healthy, fulfilling lives. That's something that I really am proud to see in the autism spectrum community, a cognizance that what we call “normal” should probably be changed. I also love and appreciate Down syndrome patients who are really advocating for their rights to live with their own agency and autonomy as adults. Like, what is this ideal that people are looking for? That is what I want to ask people who are such advocates of using this technology in living human beings. What is the ideal? Is there one?

AG: You write frequently about biocolonialism. Is this what you mean?

KT: I use it in the context of commercial exploitation of biomarkers. To other Indigenous studies scholars, biocolonialism can also mean the forcible introduction of genetic variation that negatively impacts us. So, for instance, this could be introducing diseases that didn't really exist in our communities. It could also mean changing our reproductive dynamics through genocidal acts.

AG: Can you explain that a little more?

KT: Basically a lot of population genetics is statistical. There are a lot of assumptions at play there; one of the assumptions is that humans mate randomly. But things like genocide are non-random events. There are some disorders caused by recessive gene mutations that can be prevalent in Indigenous communities and are probably more so now, post-genocidal events, just because a huge portion of the population is now not reproducing. I'm trying not to say just “dead,” but…yeah. Dead.

Researchers decoding the cassava genome. Scientists have used CRISPR-Cas9 to edit the genes of agricultural crops including tomatoes, citrus fruit, cacao, and more. Image Credit: 2013CIAT/NeilPalmer, Flickr

AG: So how do we do better? I read one of your papers in which you and your coauthors are talking about principles for ethical engagement in genomic research. Can you talk a little about those?

KT: You want to be able to acknowledge that the participants involved in studies have knowledge and experiences that are informative and valuable and therefore should be included in the research process—particularly if there are risks and benefits that are going to be affecting them and not outside communities.

And this is just a way of stating that if you are going to be collecting biomarkers that not only identify an individual, but also have an effect on the community, then you really should be rethinking these ethical questions—not just at the individual level, but at the group level. In Western ethics, a lot of the questions of whether the benefits outweigh the risks are centered on the individual. But in reality, especially when it's related to DNA—and DNA is something that's inherited and shared by members of a similar group—then really that question should be applied to everyone in that community.

AG: You talk about the importance of cultural consistency in ethical genomics practice as well. What does that term mean? Why is it important?

KT: First, we have to acknowledge that there are thousands of Indigenous communities around the world, and each one has their own cultural ethic. So what one community might decide is within their cultural ethic may not be the same for a different community. And so when we work with Indigenous communities, one of the things we want to ensure is not only that this research is beneficial to them and potentially outweighs the risks—but also, are we ensuring that the research question is one that they're culturally comfortable with, that isn't going to impede or infringe on existing cultural beliefs?

I'll give the example of migration stories. Many tribes along the Pacific Coast might be more amenable toward looking at population evolution involving their community, because they already have a creation story that states that they came from peoples that traveled from a distance. So they might look to genetics as a possible means of bridging their cultural knowledge with this genetic knowledge. Whereas with other groups, like the Havasupai, who believe they originated at the base of the Grand Canyon, these other narratives might be culturally conflicting.

There is no way to ethically procure a full picture of global migration based on DNA without the explicit consent of Indigenous communities….What we think we know about global migration from DNA is still informed by archaeological, cultural, and linguistic data that may be misinterpreted or siloed within Western constructs and biases of history—and may itself be subject to scrutiny for pilfering of sacred sites and knowledge that have venerated meaning for Indigenous communities and descendants today.


AG: What does it mean to you as an Indigenous geneticist that the foundations of this area of study, and of STEM in general, are so profoundly white and male? How do you balance giving this technological power to the people and keeping it for people who have been educated about it, when there's fundamental inequalities around who gets to be educated and what they learn?

KT: This notion of prioritizing wisdom is a colonial concept. In our communities, until very recently, we didn't have Ph.D.s. We revered our elders and the wisdom that they conferred to us, which was derived from their cultural teachings and also their lived experiences. And we can't discount that. We can't just come into a community and say, “Oh, I have this Ph.D.” That's meaningless. And that's gonna require a humbling of the patriarchy that is in science currently.

And just as a distinct statement, I really wish that, as much money as we are pushing into precision medicine initiatives in this country, we could allocate some of that money to preventative health. There was an editorial cartoon in one of our tribal newspapers. It's a skeleton waiting in an Indian Health Services clinic. It just says “Waiting room, IHS.” And it's true. Like, how can we talk about the next advances in precision medicine when we don't even have enough clinics in our tribal communities and also in our Black neighborhoods? If there's anything that COVID has shown us, it's that there are huge inequities in healthcare. These are huge structural barriers that exist relating to inequitable access to healthcare clinics and preventative health. As much as I find these questions related to new emerging technologies to be fascinating, we still have the fundamental challenge of just giving healthcare to people! I wish we could acknowledge that more.

AG: What would it take to use technologies like CRISPR ethically in your opinion?

KT: Personally, I think CRISPR can be a powerful tool as it exists in many laboratories. But there's a huge gap between the rate of technological advances and also how we discuss the ethical implication of those advances. We need to pause, is really my viewpoint. We need to really ask ourselves: What are the steps at which this technology can be exploited? And then how do we create guidelines to prevent that exploitation?

What I’m specifically talking about is germline editing. There's just so much we don't understand about the genome. There's concerns about off-target effects. That basically means that the CRISPR system could have an effect on other genetic locations than what we originally intended. That speaks to the fact that there are genetic repeats throughout the genome that could be very similar, that we don't quite have full information about.

There are also what are called “bystander effects,” in which we don't fully understand how the body's normal base-editing repair mechanisms act, because they don't always act in a perfect way; they're very error-prone. They can introduce mutations that we don't intend. They can introduce multiple mutations at the site that maybe we intended but that might have a different effect. We don't know how those cell-repair mechanisms might affect the protein's overall function, and how that change to the protein might have an effect on biological pathways, which are very complex. And then there's the simple fact that, even if it affects the one offspring, there are other future downstream changes and effects that future offspring have to contend with.

We haven’t really spent the ethical time discussing those questions. And at this point in time, we still know very little about the genome. For instance, people who are of non-European descent, what their genomes might look like, or about gene-environment interactions. Until we have the full picture of what this could potentially look like in a live human being, I think we should pause.

AG: What do you think is missing from the conversations or ethical debates? Is there anything else that you feel like people aren't talking about that they should be talking about?

KT: What this means for communities that are historically left out of these conversations. What this means for individuals who have disabilities. What it means socially and culturally as a society when we make a standard of “normal.”

It does lend itself to a eugenics discussion. It's not a slippery slope argument because that argument is kind of a fallacy. There are intermediary steps that get you from point A to point Z, but we have to account for all those intermediary steps.

AG: The “slippery slope argument” that you probably hear the most in this context is designer babies. What do you make of the people who say if we keep going the way we're going, that's going to become standard?

KT: This is why I advocate for a pause, anticipating these situations beforehand so that we can put regulations in place to prevent those situations.

AG: So that's the important thing, that if we're thoughtful enough about this, then it doesn't have to be a slippery slope? We can get some traction, basically?


Background

Arguably, one of the most profound and fundamental features of all life forms existing on earth is that, with several minor variations, they share the same genetic code. This standard code is a mapping of 64 codons onto a set of 20 amino acids and the stop signal (Fig. 1). Ever since the standard code was deciphered [1–3], the interplay of the evolutionary forces that shaped the structure of the code has been a subject of debate [3, 4]. The main general features that do not depend upon details of the code are: (i) there are four distinct bases in mRNA – two pyrimidines (U, C) and two purines (A, G), (ii) each codon is a triplet of bases, thus forming 64 (4^3) codons, and (iii) 20 standard protein amino acids are encoded (with the notable exceptions of selenocysteine and pyrrolysine, for which subsets of organisms have evolved special coding schemes [5]).

The standard genetic code. The codon series are shaded in accordance with the PRS (Polar Requirement Scale) values [6], which is a measure of an amino acid's hydrophobicity: the greater the hydrophobicity, the darker the shading.

The structure of the genetic code is manifestly nonrandom [3]. Given that there are 64 codons for only 20 amino acids, most of the amino acids are encoded by more than one codon, i.e., the standard code is highly redundant; the two exceptions are methionine and tryptophan, each of which is encoded by a single codon. The codon series that code for the same amino acid are, with the single exception of serine, arranged in blocks in the code table, and the corresponding codons differ only in the third base position, with the exceptions of arginine and leucine, for which the codon series differ in the first position (Fig. 1). The importance of the nucleotides in the three codon positions varies dramatically: 69% of the point mutations in the third codon position are synonymous, only 4% of the mutations in the first position are synonymous, and none of the point mutations in the second position are synonymous. The structure of the code also, obviously, reflects physicochemical similarities between amino acids; e.g., all codons with a U in the second position code for hydrophobic amino acids (see Fig. 1, where the blocks of synonymous codons are colored with respect to the polar requirement scale [6] (PRS), which is a measure of hydrophobicity). The finer structure of the code comes into view if synonymous codon series that differ by purines or pyrimidines are compared [7]. Related amino acids show a strong tendency to be assigned related codons [3, 4, 8]. Generally, the standard code is thought to conform to the principles of optimal coding, i.e., the structure of the code appears to be such that it is robust with respect to point mutations, translation errors, and shifts in the reading frame. The block structure of the code is considered to be a necessary condition of this robustness [9].
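The percentages quoted above can be checked directly from the code table. Here is a minimal sketch that counts, for each codon position, the fraction of point mutations of sense codons that leave the amino acid unchanged, treating mutations that create a stop codon as nonsynonymous (other conventions shift the numbers slightly):

```python
# Fraction of synonymous point mutations at each codon position, computed from
# the standard code. Stop codons are skipped as starting points and mutations
# creating a stop are counted as nonsynonymous.
BASES = "TCAG"
AMINO_ACIDS = ("FFLLSSSSYY**CC*W"   # TTT..TGG
               "LLLLPPPPHHQQRRRR"   # CTT..CGG
               "IIIMTTTTNNKKSSRR"   # ATT..AGG
               "VVVVAAAADDEEGGGG")  # GTT..GGG

CODE = {b1 + b2 + b3: AMINO_ACIDS[16 * i + 4 * j + k]
        for i, b1 in enumerate(BASES)
        for j, b2 in enumerate(BASES)
        for k, b3 in enumerate(BASES)}

for pos in range(3):
    synonymous = total = 0
    for codon, aa in CODE.items():
        if aa == "*":                          # skip stop codons
            continue
        for base in BASES:
            if base == codon[pos]:
                continue
            mutant = codon[:pos] + base + codon[pos + 1:]
            total += 1
            if CODE[mutant] == aa:
                synonymous += 1
    print(f"position {pos + 1}: {synonymous}/{total} = {synonymous / total:.0%} synonymous")
```

Run as is, this prints roughly 4%, 0%, and 69% for the first, second, and third positions, matching the figures cited above.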

The fundamental question is how these regularities of the standard genetic code came into being. One of the leading hypotheses is that the primordial code had to reduce errors in translation in order to provide for the efficient synthesis of functional proteins. 'At sufficiently early stages in evolution the fundamental information-transferring processes, i.e., translation, replication, and transcription, must have been error-ridden' [8], so the subsequent evolution is thought to have been driven by selection for an arrangement of the code table that would be increasingly robust to translational misreading – the translation-error hypothesis of code evolution [8, 10–15]. The initial evidence for the translation-error hypothesis consists of the aforementioned fact that related codons (codons that differ by one base only), typically, code for related amino acids (amino acids with similar physicochemical properties) and the experimental observations that translational errors occur more frequently in the first and third positions of codons [15–18]. The latter data seem to emphasize the connection between the structure of the code and its robustness to errors of translation as opposed to mutational robustness because, in the latter case, there would be no difference between the effects of mutations in the three positions of the codon. However, Sella and Ardell have argued that minimization of the effect of mutations could be an equally, if not more, important force behind the evolution of the structure of the code than minimization of the effect of translation errors [7, 19, 20].

Quantitative evidence in support of the translation-error hypothesis has been inferred from comparisons of the standard code with random alternative codes. According to the specified rules (see Results), for each code, a score is calculated, which is used as a measure of the robustness of the code to amino acid replacements induced by translational errors. Often, this score is called "code fitness" [21–23] although it actually represents a measure of "error cost", which is inversely related to "fitness" (i.e., the smaller the score the more robust – or fit – is the respective code) in other instances, the mathematical formulation was transformed such that fitness was calculated directly (the greater the number the fitter the code) [22].

The first Monte Carlo simulation to compare the standard code with random, alternative codes was described by Alff-Steinberger [11] and indicated that the standard code outperforms most of the random codes if the differential effects of misreading in the first or third base position are taken into account (two sets of 200 codes each were produced). The first reasonably reliable numerical estimates of the fraction of random codes that are more robust than the standard code were obtained by Haig and Hurst [12], who showed that, if the PRS is employed as the measure of physicochemical similarity of amino acids, the probability of a random code being fitter than the standard code is p_HH ≈ 10^-4. The code error cost score depends on the exact adopted rules, such that different cost functions, obviously, have the potential to produce different results. Using a refined cost function that took into account the non-uniformity of codon positions and the assumed transition-transversion bias of translation, Freeland and Hurst [24] showed that the fraction of random codes that outperform the standard one is p_FH ≈ 10^-6, i.e., "the genetic code is one in a million". Subsequent analyses have yielded even higher estimates of the robustness of the standard code to translation errors [21–23].
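For concreteness, here is a heavily simplified sketch of this kind of Monte Carlo comparison, in the spirit of Haig and Hurst: the error cost is taken as the mean squared polar-requirement difference over all single-base neighbor pairs of sense codons, and random codes keep the standard block structure while the 20 amino acids are permuted among the blocks. The PRS values below are the commonly cited ones and should be verified against [6]; the published analyses use more refined cost functions (position weighting, transition/transversion bias), so this reproduces the idea rather than the exact probabilities.

```python
# Simplified Haig & Hurst-style Monte Carlo: compare the standard code's error
# cost with that of random codes that keep the block structure (stops fixed)
# but permute the 20 amino acids among the synonymous-codon blocks.
import random
from itertools import product

BASES = "TCAG"
AA = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
      "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
CODE = {"".join(c): AA[i] for i, c in enumerate(product(BASES, repeat=3))}

# Polar requirement values as commonly cited (verify against reference [6]).
PRS = {"A": 7.0, "R": 9.1, "N": 10.0, "D": 13.0, "C": 4.8, "Q": 8.6, "E": 12.5,
       "G": 7.9, "H": 8.4, "I": 4.9, "L": 4.9, "K": 10.1, "M": 5.3, "F": 5.0,
       "P": 6.6, "S": 7.5, "T": 6.6, "W": 5.2, "Y": 5.4, "V": 5.6}

def cost(code):
    """Mean squared PRS difference over single-base neighbor pairs of sense codons."""
    diffs = []
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                neighbor = code[codon[:pos] + b + codon[pos + 1:]]
                if neighbor != "*":
                    diffs.append((PRS[aa] - PRS[neighbor]) ** 2)
    return sum(diffs) / len(diffs)

def random_code():
    """Reassign the 20 amino acids randomly among the standard code's blocks."""
    aas = sorted(set(CODE.values()) - {"*"})
    mapping = dict(zip(aas, random.sample(aas, len(aas))))
    return {codon: mapping.get(aa, "*") for codon, aa in CODE.items()}

standard_cost = cost(CODE)
trials = 10_000
better = sum(cost(random_code()) < standard_cost for _ in range(trials))
print(f"standard code cost = {standard_cost:.2f}; "
      f"{better} of {trials} block-permuted random codes have a lower cost")
```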

Of course, the hypothesis that the code evolved to maximize robustness to errors of translation [14] is by no means the only plausible scenario of the code evolution. The frozen accident hypothesis proposed in Crick's seminal paper [3] posits that, after the primordial genetic code expanded to incorporate all 20 modern amino acids, any change in the code would be lethal, thus ruling out further evolution of the code. The stereochemical hypothesis that can be traced back to the early work of Gamow [3, 25–31] postulates that codon assignments for particular amino acids are determined by a physicochemical affinity that exists between the amino acids and the cognate nucleotide triplets (codons or anticodons). Under this hypothesis, the minimization of the effect of translation errors characteristic of the standard code is thought to be an epiphenomenon of purely stereochemical constraints (e.g., similar codons display affinity to amino acids of similar bulk and PRS). This hypothesis implies that the code did not evolve or, in a weak form, that it evolved minimally, adjusting the stereochemical assignments. The stereochemical hypothesis, at least, in its strong form, is readily experimentally testable. However, despite extensive experimentation in this area [32], the reality and relevance of any affinities between amino acid and cognate triplets, codons or anticodons, remain questionable (see [33] for a recent discussion).

The coevolution hypothesis [34–38] posits that the structure of the standard code reflects the biosynthetic pathways of amino acid formation. The coevolution hypothesis agrees with the translation robustness hypothesis in that the genetic code had substantially evolved but differs in defining the main evolutionary process that shaped the standard code table. According to this scenario, the code coevolved with amino acid biosynthesis pathways, i.e., during the evolution of the code, subsets of codons for precursor amino acids have been reassigned to encode product amino acids in the same pathways such that related codons encode metabolically close amino acids. The robustness of the code to translation errors, then, is a byproduct of this coevolutionary process inasmuch as metabolically linked amino acids also tend to be physicochemically similar. However, it has been shown that the coevolution scenario alone does not account for the observed degree of translational error-minimization of the standard code [21, 39].

Two major objections to the translation-error hypothesis have been raised [40, 41]: (i) although the estimates of p_HH and p_FH indicate that the standard code is unusual in its robustness to translational errors, the number of alternative codes that are fitter than the standard one is still huge; (ii) the minimization percentage (the relative optimization level reached during genetic code evolution) for the standard code would have been higher (62% for the standard code according to Di Giulio's calculations [42]) if selection on amino acid distances were the main force. The debate between the translation-error and the coevolution scenarios of code evolution remains unresolved. The proponents of the translation-error hypothesis reasonably counter the above objections by showing that the distribution of the code error scores (fitness values) has a Gaussian-like shape where the better, more robust codes form a long tail, such that the process of adaptation is non-linear, so approaching the absolute minimum is highly improbable [14, 30, 40, 41].

The different hypotheses on code evolution, including the stereochemical hypothesis in its weaker form, are not exclusive. Indeed, as noticed by Knight et al. [30]: "the combination of stereochemical interactions and strong selection could have channeled biosynthetic expansion to produce the current repertoire of 20 coded amino acids". Regardless of the relative contributions of the different evolutionary forces to the organization of the standard code, the high level of the code optimization with respect to errors of translation is in need of an explanation. Here, we describe an analysis of the code fitness landscape using particular error cost functions for codes constructed from blocks that are inherent to the standard genetic code. Evidence is presented that the standard code is a partially optimized version of a random code with the same block structure.


Does the genetic code reveal a Wow! signal? A team of scientists claim evidence for "intelligent design"

The term “Wow! signal” refers to a decades-long mystery in astronomy: In 1977, the astrophysicist Jerry Ehman received a strong narrowband radio signal at the frequency of the 21 cm line of hydrogen with the gigantic Big Ear radio telescope in Delaware in the US state of Ohio. There seemed to be no natural explanation for the 72-second signal from the Sagittarius constellation, which was 30 standard deviations above the background noise. The profile of the strong, narrow-band signal corresponds to that of a communication antenna. Its discoverer highlighted the signal on the printout with a red circle and added an expression of astonishment by noting “Wow!” In the period that followed, there was a lively debate among experts as to whether it was an artificially generated or a natural signal. Although objects like pulsars seem to be a possible source, the controversy continues to this day.

A few years ago, a team of Kazakh scientists published a paper claiming to have discovered a Wow! signal in the genetic code of living beings. The code structure, so they claimed, is statistically so striking that its origin could be explained only by presuming an intelligent actor. The paper, published in Icarus, was somewhat off-topic, since Icarus is a journal for planetary science rather than molecular biology or evolution. Since then, the paper has been celebrated as evidence for the extraterrestrial origin of the genetic code by supporters of ancient astronauts and by the intelligent design movement; see, for example, here, here, and here, as well as this article in a German creationist journal. What’s it all about?

Ominous divisor 37 as a design signal

The authors of the paper are the mathematician Vladimir shCherbak from Al-Farabi University in Kazakhstan and the physicist Maxim Makukov from the Fessenkow Astrophysical Institute. The publisher of Icarus, a refereed journal, is the American Astronomical Society (AAS), an association of professional astronomers. The journal is mainly devoted to original publications in planetary science and is neither esoteric nor religious.

In order to track down the alleged Wow! signal, shCherbak and Makukov first divided the 20 genetically encoded amino acids into two groups: In the first group, they listed all amino acids that are sufficiently defined by the first two bases of a base triplet (meaning that any change of the third base would not change the amino acid) (Fig. 1). A base triplet or codon is a sequence of three bases (“letters”) on the genetic molecule DNA (or on the RNA) coding for a specific amino acid. The authors placed the remaining amino acids into a second group. Then they calculated the nucleon number (molecular weight) of every amino acid in both groups, as well as the nucleon number of every amino acid basic structure (“block nucleons”) and side chain. Finally, they sorted the numbers and added them up.

After this operation, shCherbak and Makukov came across numerical ratios in which the divisor 37 appears again and again: In the first group, the nucleon sum of all side chains is 333 (= 37 x 3²), that of all “block nucleons” is 592 (= 37 x 4²), and the total is 925 (= 37 x 5²). Moreover: “With 037 cancelled out, this leads to 3² + 4² = 5² – representation of the Egyptian triangle…” (p. 6). The number 37 also appears in the second group: the total number of nucleons in the amino acids is 1110 (= 30 x 37).

Fig. 1. Structure of the side chains of those eight amino acids which are clearly defined by their first two codon bases. Side chains and their nucleon numbers (molecular weights) are shown in ascending order. The total is 333. However, this requires targeted manipulation (here: with proline). Further explanation in the text.
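The arithmetic behind Fig. 1 is easy to check. Here is a small sketch using integer (nucleon) side-chain masses for the eight amino acids whose four-codon families are fixed by the first two bases; the sums quoted in the text only become multiples of 37 after the hydrogen transfer applied to proline, which is discussed at the end of this article (the amino acid set and masses are my reading of Fig. 1 and worth verifying against the paper):

```python
# Check of the Fig. 1 arithmetic with integer (nucleon) masses.
SIDE_CHAINS = {"Gly": 1, "Ala": 15, "Ser": 31, "Pro": 42,   # 42 is proline's real side chain
               "Val": 43, "Thr": 45, "Leu": 57, "Arg": 100}
STANDARD_BLOCK = 74   # nucleons in the common H2N-CH-COOH moiety of a primary amino acid

raw = sum(SIDE_CHAINS.values())   # 334 -- not a multiple of 37
adjusted = raw - 1                # move one H from proline's ring to its amino group
print(raw, adjusted, adjusted == 37 * 9)                  # 334 333 True
print(8 * STANDARD_BLOCK, 8 * STANDARD_BLOCK == 37 * 16)  # 592 True
print(adjusted + 8 * STANDARD_BLOCK, 37 * 25)             # 925 925
# Note: proline's real block is 73 and its real side chain 42 (115 nucleons
# either way), so the grand total of 925 holds regardless; only the separate
# 333 and 592 sums depend on the authors' "standardization" of proline.
```

In other words, the side-chain sum is 334, not 333, unless proline is altered first.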

shCherbak and Makukov discuss various other examples using different sorting and exchange operations. Another one is discussed in Fig. 2. In all those cases, the divisor 37 appears prominently.

Fig. 2. There are 24 codons that each consist of three different bases (out of A, T, G, and C). shCherbak and Makukov arranged those codons in such a way that their nucleic bases are interchanged according to certain rules: Within each “block” of 3 codons, the first nucleic base is shifted from one row to the next onto the third position. From block to block, A is exchanged for G, and T for C (and vice versa). In the left and right columns, the codons are arranged as mirror images. Next to each codon, it is noted which amino acid it codes for, along with the molecular weight of the side chain and of the whole molecule. The totals are numbers that are evenly divisible by 37. Source: shCherbak and Makukov, p. 5.

According to the authors, the frequent appearance of "37", the emergence of "Egyptian triangles", etc. lie far beyond what randomly generated patterns could plausibly produce. Thus, these appearances cannot be explained by natural processes, since they are not relevant for biological function. The number "0" is also represented, as a corresponding symbol for start/stop codons. The authors claim: "Nature is indifferent to numerical languages contrived by intelligence to represent quantities, including zero". Such a "privileged numerical system is therefore a reliable sign of artificiality" (p. 4). Ergo: "Whatever the actual reason behind the decimal system in the code, it appears that it was invented outside the solar system already several billions [of] years ago" (p. 8).

A method reminiscent of the Bible Code

Fig. 3. The Bible Code. For detailed explanations see text. Source: https://de.wikipedia.org/wiki/Bibelcode. See also here, pp. 102-104.

Prothero and Callahan (p. 127) note that the authors' approach shows clear similarities to the method of the so-called Bible code, also known as equidistant letter spacing (ELS). It serves to track down presumed "hidden messages" and prophecies in biblical texts. How does this method work?

To recognize the "message", a rectangular grid with a constant but freely chosen number of columns and rows is placed over the text, and, starting from some point, a freely selectable number of letters is skipped in the horizontal and vertical direction. Then meaningful words or word constellations are sought (Fig. 3). According to this method, world-historical events such as the Holocaust or 9/11 are said to have been predicted.
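For illustration, here is a minimal, hypothetical Python sketch of such an ELS search. The grid in Fig. 3 essentially just displays the text at a fixed line width, so the core operation is reading every n-th letter; the sample text and search word below are mere placeholders:

```python
# A minimal sketch of an ELS search (illustrative; text and word are placeholders).
def els_hits(text, word, max_skip=50):
    """Return (start, skip) pairs at which `word` appears with a constant skip."""
    letters = "".join(c for c in text.lower() if c.isalpha())  # drop spaces etc.
    hits = []
    for skip in range(1, max_skip + 1):
        for start in range(len(letters)):
            if letters[start::skip][:len(word)] == word:
                hits.append((start, skip))
    return hits

sample = "In the beginning God created the heaven and the earth"
# Finds the literal "god" at skip 1 and a "hidden" one at skip 8:
print(els_hits(sample, "god"))   # [(14, 1), (7, 8)]
```

The more start positions and skip values one is allowed to try, the more likely it is that some "message" turns up somewhere.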

The problem is: since the dimensions of the grid and the number of letters skipped when reading are freely defined, the "researchers" would most likely have come up with different messages had they chosen the width of the grid differently. And no one has ever read off and verified information that was not already known beforehand.

Prothero and Callahan criticize the fact that the authors use the same subjective method to track down the presumed design signals in the genetic code. shCherbak and Makukov have likewise freely (meaning: arbitrarily) defined the criteria for dividing the amino acids into certain groups, as well as the transformation and exchange rules according to which the "magic" sums appear. For example, if we exchanged A for T and G for C (rather than A for G and T for C) in Fig. 2, we would obtain completely different sums and divisors. In other words, the authors only get results that they have already produced by their own rules.

The interpretation of start/stop codons as a symbol for the numeral 0 is just as arbitrary. And of course, if we interpret start and stop codons as a numerical 0 on metaphysical grounds, it is not surprising that we "discover" a "privileged numerical system", including 0. Garbage in, garbage out.

The choice of molecular weights (nucleon numbers) of side chains, basic structures and whole molecules as an object of investigation is also arbitrary. It is as arbitrary as the determination of a certain number of letters to be skipped in the Bible code, which is needed to find meaningful word combinations.

Design signals generated by targeted manipulations

There is another, more serious objection to shCherbak and Makukov's method, one that Prothero and Callahan do not mention: in many cases it works only thanks to a trick.

The amino acid proline does not fit into the scheme, so the authors modify it. To do so, they formally remove an H atom from the side chain of proline and transfer it to the secondary amino group (see Fig. 4). This turns proline into a primary amino acid. The purpose of this operation, as the authors mention, is to formally "standardize" the basic structures of all amino acids, because proline is the only secondary amino acid in the genetic code.

But "standardizing" means nothing other than purposefully changing the facts in order to introduce the divisor 37 through the back door! In contrast to the other amino acids, the molecular weight of the proline basic structure (73) is not exactly divisible by 37. The same applies, as discussed in Fig. 1, to the nucleon sum of the side chains. That inconvenient fact only changes through the appropriate manipulation of the numbers. Without it, the authors would never arrive at the "magic" numbers 333 and 592. Strictly speaking, none of the examples in which the amino acid proline occurs works "smoothly" in the authors' sense.

To make matters worse, this formal H transfer would lead to an impossible molecule (see Fig. 4). The authors even implicitly admit this by stating that the H transfer in proline "can be simulated only in the mind of a recipient to achieve the array of amino acids with uniform structure. Such nucleon transfer thus appears artificial". However, exactly this artificial operation, the authors continue, "seems to be its destination: it protects the patterns from any natural explanation" (p. 3). In other words, targeted manipulations generate artifacts whose origin logically eludes any natural explanation! If that is not a fatal circular argument, what is?

Fig. 4. The clever trick of "standardizing" the nucleon number of proline: The authors formally transfer an H atom from the side chain to the secondary amino group (A). Thus, the molecule becomes a primary amino acid with a basic structure whose molecular weight (74) is exactly divisible by 37. The side chain is now open as well and has the "desired" molecular weight of 41 (B). But the molecule would now be in a chemically impossible state, with an electron sextet on the terminal carbon atom (indicated by the two dots). Only on the basis of this extremely unstable carbene molecule are the nucleon sums exactly divisible by 37 (Fig. 1). To be correct, the authors should either have added two H atoms to the side chain or left it as it is. By the way: they could just as well have removed an H atom from the alpha C atom of the basic structure (C, D). In none of these cases would their method have worked. This shows that the alleged "Wow! signal" is just an artifact produced by the authors. Without their manipulation, there would be no "magic" numbers.

The power of numerology

Incidentally, it is not surprising that in the (numerical) cosmos, which functions under certain rules and laws, certain order patterns like Egyptian triangles, golden triangles, Fibonacci numbers, ascending numerical sequences, prime numbers, etc. can be found which, as brute facts, neither allow an explanation nor even demand one.

Let's, for example, look at the natural three-digit numbers with identical digits and form their digit sums (111 → 3, 222 → 6, 333 → 9, etc.). If we divide these numbers by their digit sums, the result is in every case … 37. Another example: 81 stable chemical elements exist. 81 can be represented formally as a "prime number cross" (3 x 3³). The reciprocal of the number 81 can in turn be approximated by the decimal fraction 0.0123456789(10)(11)(12)… (here 10, 11, 12 are the next "digits"; the parentheses are only for convenience). And if we look at Romanesco broccoli, we see that its appearance is strictly mathematical: its conical turrets are miniature copies of the entire inflorescence, which is known as self-similarity or a fractal structure. The turrets are also arranged in so-called Fibonacci spirals, following a number sequence that ensures that one turret never grows vertically above another.
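Both of the first two arithmetic curiosities are easy to verify; a short, purely illustrative Python check might read:

```python
# Quick verification of the two arithmetic curiosities above (illustrative only).

# Every three-digit repdigit divided by its digit sum gives 37:
for d in range(1, 10):
    n, digit_sum = 111 * d, 3 * d        # 111..999 and their digit sums
    assert n % digit_sum == 0 and n // digit_sum == 37

# 1/81 equals the "carry-free" decimal 0.0123456789(10)(11)..., i.e. the
# series sum of k / 10**(k+1):
approx = sum(k / 10 ** (k + 1) for k in range(1, 40))
print(1 / 81, approx)                    # both ~0.0123456790...
```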

Why should these pretty patterns, of which we could construct endless examples, represent design signals? Does it even make sense to ask how probable their "accidental" emergence is? Why 37 rather than 38 or 42? Why not π or e? Why not an alternating sequence of numbers? What if the laws were different? And what message are the "signals" supposed to convey? Shall we assume that a creator enjoyed arranging the world around the number 3 or π and introducing Fibonacci numbers into the cosmos? Such metaphysical interpretations cannot seriously pass as empirical arguments! All such patterns have no more objectively recognizable meaning (or semantic content) than the "Egyptian triangle" or the "appearance" of 37 in the genetic code.

Last but not least: the more different mathematical transformations we try out, the higher the likelihood of finding some mathematical pattern. We know neither the total set of possibilities nor how many unsuccessful trials the authors made before finding their patterns. We only know that the number of potential mathematical transformations is infinite, and that any number of counterexamples without any pattern can be constructed.

In a word, the approach of shCherbak and Makukov is pure number mysticism. They could only read an intelligent pattern out of those numbers after they had themselves inserted it into the genetic code by convenient operations, targeted manipulations and reckless interpretations. We therefore have to agree with Prothero and Callahan (p. 128), who summarize:

The result is a paper that, despite the impressive credentials of its authors and the soundness of the journal it was printed in, is essentially without substance.

The mathematician Underwood Dudley (1997) shows in his book Numerology in an entertaining way what nonsense can be done with such number games.


Scientists find biology's optimal 'molecular alphabet' may be preordained

An international and interdisciplinary team working at the Earth-Life Science Institute (ELSI) at the Tokyo Institute of Technology has modeled the evolution of one of biology's most fundamental sets of building blocks and found that it may have special properties that helped bootstrap itself into its modern form.

All life, from bacteria to blue whales to human beings, uses an almost universal set of 20 coded amino acids (CAAs) to construct proteins. This set was likely "canonicalized" or standardized during early evolution; before that, smaller amino acid sets were gradually expanded as organisms developed new synthetic, proofreading and coding abilities. The new study, led by Melissa Ilardo, now at the University of Utah, explored how this set evolution might have occurred.

There are millions of possible types of amino acids that could be found on Earth or elsewhere in the Universe, each with its own distinctive chemical properties. Indeed, scientists have found these unique chemical properties are what give biological proteins, the large molecules that do much of life's catalysis, their own unique capabilities. The team had previously measured how the CAA set compares to random sets of amino acids and found that only about 1 in a billion random sets had chemical properties as unusually distributed as those of the CAAs.

The team thus set out to ask the question of what earlier, smaller coded sets might have been like in terms of their chemical properties. There are many possible subsets of the modern CAAs or other presently uncoded amino acids that could have comprised the earlier sets. The team calculated the possible ways of making a set of 3-20 amino acids using a special library of 1913 structurally diverse "virtual" amino acids they computed and found there are 10⁴⁸ ways of making sets of 20 amino acids. In contrast, there are only 10¹⁹ grains of sand on Earth, and only 10²⁴ stars in the entire Universe. "There are just so many possible amino acids, and so many ways to make combinations of them, a computational approach was the only comprehensive way to address this question," says team member Jim Cleaves of ELSI. "Efficient implementations of algorithms based on appropriate mathematical models allow us to handle even astronomically huge combinatorial spaces," adds co-author Markus Meringer of the Deutsches Zentrum für Luft- und Raumfahrt.
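As a rough, back-of-the-envelope check (not the authors' own computation), the number of unordered 20-element subsets of a 1913-member library can be computed directly; the naive count lands near 10⁴⁷, and the quoted 10⁴⁸ presumably reflects the study's own counting convention, but either way the number dwarfs the comparison figures:

```python
# Back-of-the-envelope check of the combinatorics (not the authors' computation):
import math

n_sets = math.comb(1913, 20)             # unordered 20-element subsets of the library
print(f"{float(n_sets):.2e}")            # ~1.6e+47
print(n_sets > 10**19, n_sets > 10**24)  # dwarfs both comparison figures: True True
```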

As this number is so large, they used statistical methods to compare the adaptive value of the combined physicochemical properties of the modern CAA set with those of billions of random sets of 3-20 amino acids. What they found was that the CAAs may have been selectively kept during evolution due to their unique adaptive chemical properties, which help them to make optimal proteins, in turn helping organisms that could produce those proteins become more fit.
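The comparison itself can be pictured as a Monte Carlo experiment. The sketch below is only schematic: the property table is filled with made-up values, and coverage_score is a hypothetical stand-in for the adaptive measures (such as size, charge and hydrophobicity) actually used in the study. It merely shows the shape of the computation, namely asking what fraction of random sets outperform a reference set:

```python
# Schematic Monte Carlo comparison (illustrative only; property values are
# made up and coverage_score is a stand-in for the study's adaptive measures).
import random

def coverage_score(props):
    """Toy adaptive score: how widely the set spans each property (range)."""
    return sum(max(col) - min(col) for col in zip(*props))

random.seed(0)

# Placeholder property table: 1913 candidate "amino acids" x 3 properties
# (think size, charge, hydrophobicity), values invented for illustration.
library = [[random.gauss(0, 1) for _ in range(3)] for _ in range(1913)]

# Stand-in for the CAA set; in the real study the measured properties of the
# 20 coded amino acids would go here.
reference = random.sample(library, 20)
ref_score = coverage_score(reference)

# The study compared against billions of random sets; 100,000 suffices here.
trials = 100_000
better = sum(coverage_score(random.sample(library, 20)) > ref_score
             for _ in range(trials))
print(f"random sets scoring above the reference: {better / trials:.2%}")
```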

They found that even hypothetical sets containing only one or a few modern CAAs were especially adaptive. It was difficult to find sets even among a multitude of alternatives that have the unique chemical properties of the modern CAA set. These results suggest that each time a modern CAA was discovered and embedded in biology's toolkit during evolution, it provided an adaptive value unusual among a huge number of alternatives, and each selective step may have helped bootstrap the developing set to include still more CAAs, ultimately leading to the modern set.

If true, the researchers speculate, it might mean that even given a large variety of starting points for developing coded amino acid sets, biology might end up converging on a similar set. As this model was based on the invariant physical and chemical properties of the amino acids themselves, this could mean that even life beyond Earth might be very similar to modern Earth life. Co-author Rudrarup Bose, now of the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, further hypothesizes that "Life may not be just a set of accidental events. Rather, there may be some universal laws governing the evolution of life."